{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Tutorial 18: Monte Carlo Verification\n", "\n", "This tutorial demonstrates how to use the Monte Carlo simulation harness and the statistical analyzer for GNC system verification.\n", "\n", "## 1. Overview\n", "For mission-critical spacecraft software, a single simulation run is insufficient to prove robustness. **OpenGNC** provides a high-performance Monte Carlo suite for analyzing performance under stochastic variations in the environment, sensor noise, and process disturbances." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. Running a Monte Carlo Simulation\n", "The `MonteCarloSim` class runs many independent trials in parallel.\n", "\n", "> [!IMPORTANT]\n", "> **Windows & Jupyter Note**: To avoid hangs when using multiprocessing on Windows, simulator functions must be defined in a separate `.py` file and imported. This allows child processes to import the function cleanly." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Starting parallel Monte Carlo (100 runs)...\n", "Completed 100 trials in 0.01 seconds.\n" ] } ], "source": [ "from opengnc.simulation.monte_carlo import MonteCarloSim\n", "from mc_worker import fast_simulator  # Imported from an external file for Windows compatibility\n", "import numpy as np\n", "import time\n", "\n", "mc = MonteCarloSim(fast_simulator)\n", "\n", "# Execute 100 trials using all available CPU cores\n", "print(\"Starting parallel Monte Carlo (100 runs)...\")\n", "start = time.time()\n", "results = mc.run_parallel(num_runs=100, tf=10.0, dt=0.01)\n", "end = time.time()\n", "\n", "print(f\"Completed {len(results)} trials in {end - start:.2f} seconds.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. 
Statistical Analysis & Verification\n", "Once the trials are complete, use the `MonteCarloAnalyzer` to compute aggregate statistics and performance margins." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Final Mean Error: 1.0030 m\n", "Final 1-Sigma: 0.0546 m\n", "3-Sigma Upper Bound: 1.1668 m\n", "Maximum allowable error: 2.0 m\n", "Successful Trials: 100 / 100\n", "Reliability Score: 100.0%\n" ] } ], "source": [ "analyzer = mc.get_analyzer()\n", "\n", "# 1. Calculate 3-sigma bounds on the position error\n", "pos_stats = analyzer.get_aggregate_stats(\"pos_error\")\n", "\n", "print(f\"Final Mean Error: {pos_stats['mean'][-1]:.4f} m\")\n", "print(f\"Final 1-Sigma: {pos_stats['std'][-1]:.4f} m\")\n", "print(f\"3-Sigma Upper Bound: {pos_stats['sigma_3_upper'][-1]:.4f} m\")\n", "\n", "# 2. Reliability analysis (success criterion: |error| < 2.0 m at every time step)\n", "failure_func = lambda res: np.any(np.abs(res[\"pos_error\"]) > 2.0)\n", "summary = analyzer.summarize_failures(failure_func)\n", "\n", "print(\"Maximum allowable error: 2.0 m\")\n", "print(f\"Successful Trials: {summary['total_runs'] - summary['failures']} / {summary['total_runs']}\")\n", "print(f\"Reliability Score: {summary['reliability'] * 100:.1f}%\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. Verification Suite Tool\n", "A pre-configured verification suite for standard mission verification analysis is available at `benchmarks/run_verification.py`:\n", "\n", "```bash\n", "python benchmarks/run_verification.py\n", "```" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.11" } }, "nbformat": 4, "nbformat_minor": 4 }