opengnc.optimal_control package

Submodules

opengnc.optimal_control.adaptive_control module

Model Reference Adaptive Control (MRAC) for state-space systems.

class opengnc.optimal_control.adaptive_control.ModelReferenceAdaptiveControl(A_m: ndarray, B_m: ndarray, B: ndarray, Gamma: ndarray, Q_lyap: ndarray, phi_func: Callable[[ndarray], ndarray])[source]

Bases: object

Model Reference Adaptive Controller (MRAC) for linear state-space systems.

Implements a direct MRAC scheme where controller parameters are updated to minimize the error between the plant and a stable reference model.

Plant with parametric uncertainty: $\dot{x} = Ax + B(u + \Theta^T \Phi(x))$

Reference Model: $\dot{x}_m = A_m x_m + B_m r$

Adaptation Law (Lyapunov-based): $\dot{\hat{\Theta}} = \Gamma \Phi(x) e^T P B$, where $e = x - x_m$ and $P$ solves $A_m^T P + P A_m = -Q$.

Parameters:
  • A_m (np.ndarray) – Reference model state matrix (nx x nx).

  • B_m (np.ndarray) – Reference model input matrix (nx x nu).

  • B (np.ndarray) – Plant input matrix (nx x nu).

  • Gamma (np.ndarray) – Adaptation gain matrix (nk x nk).

  • Q_lyap (np.ndarray) – Positive-definite matrix for the Lyapunov equation (nx x nx).

  • phi_func (Callable[[np.ndarray], np.ndarray]) – Regressor function $\Phi(x)$ returning a vector of basis functions (nk,).

compute_control(x: ndarray, x_m: ndarray, r: ndarray, kx: ndarray | None = None, kr: ndarray | None = None) ndarray[source]

Compute the adaptive control input and the parameter update rate.

Parameters:
  • x (np.ndarray) – Current plant state vector (nx,).

  • x_m (np.ndarray) – Current reference model state vector (nx,).

  • r (np.ndarray) – Reference command input (nu,).

  • kx (np.ndarray, optional) – Nominal feedback gain such that $A + B K_x = A_m$. Defaults to zero.

  • kr (np.ndarray, optional) – Nominal feedforward gain such that $B K_r = B_m$. Defaults to identity.

Returns:

Control input vector $u$ (nu,).

Return type:

np.ndarray

update_theta(dt: float) None[source]

Integrate the parameter estimates using the computed update rate.

Should be called once per simulation step after compute_control.

Parameters:

dt (float) – Simulation time step (s).
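The adaptation law above can be illustrated with a standalone NumPy sketch for a scalar plant (this does not use the package class; all numbers here, including the unknown parameter and gains, are hypothetical choices for the simulation):

```python
import numpy as np

# Scalar plant x' = a*x + b*(u + theta*phi(x)) with unknown theta_true.
a, b, theta_true = 1.0, 1.0, 2.0
a_m, b_m = -2.0, 2.0                 # stable reference model
kx, kr = (a_m - a) / b, b_m / b      # nominal gains: a + b*kx = a_m, b*kr = b_m
q, gamma = 1.0, 10.0
p = -q / (2.0 * a_m)                 # scalar Lyapunov equation: 2*a_m*p = -q
phi = lambda x: x                    # regressor Phi(x)

dt, r = 1e-3, 1.0
x = x_m = theta_hat = 0.0
for _ in range(10_000):              # 10 s of Euler integration
    u = kx * x + kr * r - theta_hat * phi(x)
    e = x - x_m
    x += dt * (a * x + b * (u + theta_true * phi(x)))
    x_m += dt * (a_m * x_m + b_m * r)
    theta_hat += dt * gamma * phi(x) * e * p * b   # adaptation law
tracking_error = abs(x - x_m)
```

The tracking error $e$ converges even though `theta_hat` need not equal `theta_true` unless the regressor is persistently exciting.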

opengnc.optimal_control.backstepping_control module

Backstepping Controller for generic 2nd order nonlinear systems.

class opengnc.optimal_control.backstepping_control.BacksteppingController(f_func: Callable[[ndarray, ndarray], ndarray], g_func: Callable[[ndarray, ndarray], ndarray], k1: float | ndarray, k2: float | ndarray)[source]

Bases: object

Backstepping Controller for cascaded 2nd-order nonlinear systems.

Designed for systems of the form: $\dot{x}_1 = x_2$, $\dot{x}_2 = f(x_1, x_2) + g(x_1, x_2) u$

This implementation uses a standard recursive design to achieve global asymptotic stability of a desired trajectory.

Parameters:
  • f_func (Callable[[np.ndarray, np.ndarray], np.ndarray]) – Nonlinear drift function $f(x_1, x_2)$.

  • g_func (Callable[[np.ndarray, np.ndarray], np.ndarray]) – Nonlinear input matrix $g(x_1, x_2)$.

  • k1 (Union[float, np.ndarray]) – Feedback gain for the first error state (position).

  • k2 (Union[float, np.ndarray]) – Feedback gain for the second error state (velocity).

compute_control(x1: ndarray, x2: ndarray, x1_d: ndarray, x1_dot_d: ndarray, x1_ddot_d: ndarray | None = None) ndarray[source]

Compute the backstepping control input for the current state.

Parameters:
  • x1 (np.ndarray) – Measured position/primary state vector (n,).

  • x2 (np.ndarray) – Measured velocity/secondary state vector (n,).

  • x1_d (np.ndarray) – Desired position/trajectory (n,).

  • x1_dot_d (np.ndarray) – Desired velocity (n,).

  • x1_ddot_d (np.ndarray, optional) – Desired acceleration (n,). Defaults to zero.

Returns:

Control input vector $u$ (m,).

Return type:

np.ndarray
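A minimal sketch of the standard recursive design for this system class (not the package implementation; the pendulum-like drift `f` and the gains are hypothetical):

```python
import numpy as np

def backstepping_u(x1, x2, x1_d, x1_dot_d, x1_ddot_d, f, g, k1, k2):
    """Backstepping for x1' = x2, x2' = f + g*u (scalar sketch)."""
    z1 = x1 - x1_d                      # position tracking error
    alpha = x1_dot_d - k1 * z1          # virtual control for x2
    z2 = x2 - alpha                     # error of x2 w.r.t. virtual control
    alpha_dot = x1_ddot_d - k1 * (x2 - x1_dot_d)   # since z1' = x2 - x1_dot_d
    # Lyapunov-based law: cancel f, add -z1 cross term and -k2*z2 damping.
    return (alpha_dot - z1 - k2 * z2 - f(x1, x2)) / g(x1, x2)

f = lambda x1, x2: -np.sin(x1)   # hypothetical drift
g = lambda x1, x2: 1.0
# On the desired trajectory (zero errors) the law reduces to inverting f.
u = backstepping_u(0.5, 0.0, 0.5, 0.0, 0.0, f, g, k1=2.0, k2=2.0)
```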

opengnc.optimal_control.feedback_linearization module

Feedback Linearization Controller for nonlinear systems.

class opengnc.optimal_control.feedback_linearization.FeedbackLinearization(f_func: Callable[[ndarray], ndarray], g_func: Callable[[ndarray], ndarray])[source]

Bases: object

Feedback Linearization Controller for nonlinear systems.

Transforms a nonlinear system into an equivalent linear system through algebraic state feedback.

System model: $\dot{x} = f(x) + g(x) u$

By choosing $u = g(x)^{-1} (v - f(x))$, the closed-loop system becomes $\dot{x} = v$, where $v$ is the pseudo-control input (linear command).

Parameters:
  • f_func (Callable[[np.ndarray], np.ndarray]) – Nonlinear drift function $f(x)$.

  • g_func (Callable[[np.ndarray], np.ndarray]) – Nonlinear input matrix $g(x)$.

compute_control(x: ndarray, v: ndarray) ndarray[source]

Compute the linearizing control input $u$.

Parameters:
  • x (np.ndarray) – Current state vector (nx,).

  • v (np.ndarray) – Desired linear acceleration/pseudo-control vector (nx,).

Returns:

The required control input $u$ (nu,).

Return type:

np.ndarray
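The inversion $u = g(x)^{-1}(v - f(x))$ is a one-liner; a sketch with a hypothetical two-state system (using a linear solve rather than an explicit matrix inverse):

```python
import numpy as np

def fbl_control(x, v, f_func, g_func):
    """u = g(x)^{-1} (v - f(x)), computed via a linear solve."""
    return np.linalg.solve(g_func(x), v - f_func(x))

# Hypothetical system: nonlinear drift, identity input matrix.
f = lambda x: np.array([x[1], -np.sin(x[0])])
g = lambda x: np.eye(2)

x = np.array([np.pi / 2, 0.0])
v = np.array([0.0, 1.0])        # desired linear dynamics x' = v
u = fbl_control(x, v, f, g)
```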

opengnc.optimal_control.finite_horizon_lqr module

Finite-Horizon Linear Quadratic Regulator (LQR).

class opengnc.optimal_control.finite_horizon_lqr.FiniteHorizonLQR(A_fn: Callable[[float], ndarray], B_fn: Callable[[float], ndarray], Q_fn: Callable[[float], ndarray], R_fn: Callable[[float], ndarray], Pf: ndarray, T: float)[source]

Bases: object

Finite-Horizon Linear Quadratic Regulator (LQR).

Computes a time-varying optimal control law $u(t) = -K(t)x(t)$ for systems over a fixed time interval $[0, T]$.

Objective: $\min J = x(T)^T P_f x(T) + \int_{0}^{T} (x^T Q(t) x + u^T R(t) u)\, dt$

Parameters:
  • A_fn (Callable[[float], np.ndarray]) – Time-varying state matrix $A(t)$ (nx x nx).

  • B_fn (Callable[[float], np.ndarray]) – Time-varying input matrix $B(t)$ (nx x nu).

  • Q_fn (Callable[[float], np.ndarray]) – Time-varying state cost matrix $Q(t)$ (nx x nx).

  • R_fn (Callable[[float], np.ndarray]) – Time-varying input cost matrix $R(t)$ (nu x nu).

  • Pf (np.ndarray) – Terminal state cost matrix at $t=T$ (nx x nx).

  • T (float) – The fixed final time of the optimization horizon.

compute_control(x: ndarray, t: float) ndarray[source]

Compute optimal control input $u = -K(t)x$.

Parameters:
  • x (np.ndarray) – Current state vector (nx,).

  • t (float) – Current time (s).

Returns:

Optimal control input u (nu,).

Return type:

np.ndarray

get_gain(t: float) ndarray[source]

Compute/interpolate the optimal feedback gain K at time t.

Parameters:

t (float) – Current time (s).

Returns:

Feedback gain matrix K (nu x nx).

Return type:

np.ndarray

solve(num_points: int = 100) tuple[ndarray, ndarray][source]

Solve the Differential Riccati Equation (DRE) backwards in time.

$-\dot{P} = P A + A^T P - P B R^{-1} B^T P + Q$, with $P(T) = P_f$

Parameters:

num_points (int, optional) – Number of points for the solution trajectory. Default is 100.

Returns:

A tuple of (t_span, P_trajectory) where P_trajectory is (num_points, nx, nx).

Return type:

Tuple[np.ndarray, np.ndarray]
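The backward integration of the DRE can be sketched for a scalar LTI system; for a long horizon $P(0)$ approaches the steady-state CARE solution (here $a=0$, $b=q=r=1$ gives $P \to 1$). This is a hand-rolled Euler sweep, not the package's solver:

```python
import numpy as np

# Scalar DRE: -P' = 2*a*P - (P*b)^2/r + q, integrated from P(T) = Pf to t = 0.
a, b, q, r = 0.0, 1.0, 1.0, 1.0
T, Pf, n = 10.0, 0.0, 10_000
dt = T / n

P = Pf
for _ in range(n):
    Pdot = -(2.0 * a * P - (P * b) ** 2 / r + q)
    P -= dt * Pdot            # backward Euler sweep: P(t - dt) = P(t) - dt*P'
P0 = P
K0 = b * P0 / r               # time-varying gain at t = 0
```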

opengnc.optimal_control.geometric_control module

Geometric Controller on SO(3) for attitude tracking.

class opengnc.optimal_control.geometric_control.GeometricController(J: ndarray, kR: float, kW: float)[source]

Bases: object

Geometric Controller on the Special Orthogonal Group SO(3).

Implements a tracking controller directly on the rotation matrix manifold, avoiding singularities and unwinding issues associated with Euler angles and quaternions.

Ref: Lee, T., Leok, M., & McClamroch, N. H. (2010). Geometric Tracking Control of a Quadrotor UAV on SE(3).

Parameters:
  • J (np.ndarray) – Spacecraft inertia tensor matrix (3x3) [kg*m^2].

  • kR (float) – Attitude error gain (proportional).

  • kW (float) – Angular velocity error gain (derivative).

compute_control(R: ndarray, omega: ndarray, R_d: ndarray, omega_d: ndarray, d_omega_d: ndarray | None = None) ndarray[source]

Compute the control torque on SO(3).

Parameters:
  • R (np.ndarray) – Current rotation matrix (3x3).

  • omega (np.ndarray) – Current angular velocity in body frame (3,).

  • R_d (np.ndarray) – Desired rotation matrix (3x3).

  • omega_d (np.ndarray) – Desired angular velocity in body frame (3,).

  • d_omega_d (np.ndarray, optional) – Desired angular acceleration (3,). Defaults to zero.

Returns:

Control torque vector M (3,) [N*m].

Return type:

np.ndarray

opengnc.optimal_control.geometric_control.hat_map(v: ndarray) ndarray[source]

Map a $3 \times 1$ vector to its corresponding $3 \times 3$ skew-symmetric matrix.

Parameters:

v (np.ndarray) – $3 \times 1$ vector.

Returns:

$3 \times 3$ skew-symmetric matrix.

Return type:

np.ndarray

opengnc.optimal_control.geometric_control.vee_map(R: ndarray) ndarray[source]

Map a $3 \times 3$ skew-symmetric matrix to its corresponding $3 \times 1$ vector.

Parameters:

R (np.ndarray) – $3 \times 3$ skew-symmetric matrix.

Returns:

$3 \times 1$ vector.

Return type:

np.ndarray
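A self-contained sketch of the hat/vee maps and the attitude law from the cited Lee, Leok & McClamroch paper (zero desired angular acceleration; the inertia and gains are hypothetical, and the package's exact signature differs):

```python
import numpy as np

def hat(v):
    """3-vector -> 3x3 skew-symmetric matrix, so that hat(v) @ w = v x w."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def vee(S):
    """3x3 skew-symmetric matrix -> 3-vector (inverse of hat)."""
    return np.array([S[2, 1], S[0, 2], S[1, 0]])

def so3_torque(R, omega, R_d, omega_d, J, kR, kW):
    """Geometric attitude law with zero desired angular acceleration."""
    e_R = 0.5 * vee(R_d.T @ R - R.T @ R_d)   # attitude error on SO(3)
    e_W = omega - R.T @ R_d @ omega_d        # angular-velocity error
    return -kR * e_R - kW * e_W + np.cross(omega, J @ omega)

J = np.diag([0.02, 0.02, 0.04])              # hypothetical inertia [kg*m^2]
# At the desired attitude and rest, the commanded torque vanishes.
M = so3_torque(np.eye(3), np.zeros(3), np.eye(3), np.zeros(3), J, kR=5.0, kW=1.0)
```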

opengnc.optimal_control.h2_control module

H2 Optimal Controller (LQG equivalent).

class opengnc.optimal_control.h2_control.H2Controller(A: ndarray, B: ndarray, C: ndarray, Q_lqr: ndarray, R_lqr: ndarray, Q_lqe: ndarray, R_lqe: ndarray, G_lqe: ndarray | None = None)[source]

Bases: LQG

H2-Optimal Controller.

Solves the standard H2 control problem for linear state-space systems. For systems with Gaussian noise and quadratic performance indices, the H2-optimal controller is equivalent to the Linear Quadratic Gaussian (LQG) controller.

Parameters:
  • A (np.ndarray) – State matrix (nx x nx).

  • B (np.ndarray) – Control input matrix (nx x nu).

  • C (np.ndarray) – Output matrix (ny x nx).

  • Q_lqr (np.ndarray) – State weighting matrix for LQR (nx x nx).

  • R_lqr (np.ndarray) – Control weighting matrix for LQR (nu x nu).

  • Q_lqe (np.ndarray) – Process noise covariance matrix for LQE (nw x nw).

  • R_lqe (np.ndarray) – Measurement noise covariance matrix for LQE (ny x ny).

  • G_lqe (np.ndarray, optional) – Process noise input matrix (nx x nw). Defaults to Identity.

solve() tuple[ndarray, ndarray][source]

Solve LQR and LQE optimal gain design problems.

Returns:

Optimal feedback gain K (nu x nx) and observer gain L (nx x ny).

Return type:

Tuple[np.ndarray, np.ndarray]
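Since the H2 design reduces to an LQR gain plus a Kalman gain designed independently, both can be sketched with SciPy's Riccati solver (standalone, with hypothetical system matrices; the estimation ARE is solved as the dual of the control ARE):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q_lqr, R_lqr = np.eye(2), np.array([[1.0]])
Q_lqe, R_lqe = np.eye(2), np.array([[0.1]])   # G_lqe taken as identity

P = solve_continuous_are(A, B, Q_lqr, R_lqr)      # control ARE
K = np.linalg.solve(R_lqr, B.T @ P)               # K = R^{-1} B^T P
S = solve_continuous_are(A.T, C.T, Q_lqe, R_lqe)  # dual (estimation) ARE
L = S @ C.T @ np.linalg.inv(R_lqe)                # L = S C^T R^{-1}
```

Both closed loops, the regulator $A - BK$ and the observer $A - LC$, come out stable.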

opengnc.optimal_control.h_infinity module

H-Infinity Robust Controller state-feedback design.

class opengnc.optimal_control.h_infinity.HInfinityController(A: ndarray, B1: ndarray, B2: ndarray, Q: ndarray, R: ndarray, gamma: float)[source]

Bases: object

H-Infinity Robust State-Feedback Controller.

Designs a control law $u = -Kx$ that minimizes the $H_\infty$ norm of the closed-loop transfer function from external disturbances $w$ to performance outputs $z$, ensuring robustness against worst-case disturbances.

System Dynamics: $\dot{x} = Ax + B_1 w + B_2 u$ Cost Function: $J = \int_{0}^{\infty} (x^T Q x + u^T R u - \gamma^2 w^T w)\, dt$

Parameters:
  • A (np.ndarray) – State matrix (nx x nx).

  • B1 (np.ndarray) – Disturbance input matrix (nx x nw).

  • B2 (np.ndarray) – Control input matrix (nx x nu).

  • Q (np.ndarray) – State cost matrix (nx x nx).

  • R (np.ndarray) – Control cost matrix (nu x nu).

  • gamma (float) – Target $L_2$-gain attenuation level (robustness margin).

compute_control(x: ndarray) ndarray[source]

Compute the control input based on the state vector.

Parameters:

x (np.ndarray) – Current state vector (nx,).

Returns:

Control input vector $u = -Kx$ (nu,).

Return type:

np.ndarray

compute_gain() ndarray[source]

Compute the optimal robust feedback gain matrix K.

$K = R^{-1} B_2^T P$

Returns:

Feedback gain matrix K (nu x nx).

Return type:

np.ndarray

solve() ndarray[source]

Solve the H-infinity Algebraic Riccati Equation (ARE).

$A^T P + P A + P \left( \frac{1}{\gamma^2} B_1 B_1^T - B_2 R^{-1} B_2^T \right) P + Q = 0$

Returns:

The positive-definite Riccati solution matrix P.

Return type:

np.ndarray

Raises:

ValueError – If no solution exists for the given attenuation level gamma.
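One standard way to solve this ARE (not necessarily the package's method) is via the stable invariant subspace of the associated Hamiltonian matrix; a minimal sketch with a hypothetical scalar system:

```python
import numpy as np

def hinf_solve(A, B1, B2, Q, R, gamma):
    """Solve A^T P + P A + P (B1 B1^T / gamma^2 - B2 R^{-1} B2^T) P + Q = 0
    via the stable invariant subspace of the Hamiltonian matrix."""
    S = B1 @ B1.T / gamma**2 - B2 @ np.linalg.solve(R, B2.T)
    n = A.shape[0]
    H = np.block([[A, S], [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    Vs = V[:, w.real < 0]                 # eigenvectors of stable eigenvalues
    X1, X2 = Vs[:n, :], Vs[n:, :]
    return np.real(X2 @ np.linalg.inv(X1))

A = np.array([[-1.0]]); B1 = np.array([[1.0]]); B2 = np.array([[1.0]])
Q = np.array([[1.0]]); R = np.array([[1.0]])
gamma = 10.0
P = hinf_solve(A, B1, B2, Q, R, gamma)
```

For gamma too small the Hamiltonian loses its stable/antistable eigenvalue split, which is exactly the "no solution exists" failure mode documented above.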

opengnc.optimal_control.indi_control module

class opengnc.optimal_control.indi_control.INDIController(g_func: Callable[[ndarray, ndarray], ndarray])[source]

Bases: object

Incremental Nonlinear Dynamic Inversion (INDI) Controller.

A sensor-based control method that reduces model dependency by calculating control increments based on measured accelerations.

System model: $\ddot{x} = f(x, \dot{x}) + g(x, \dot{x}) u$ Discrete Law: $u_k = u_{k-1} + g(x_k, \dot{x}_k)^{-1} (v_k - \ddot{x}_k)$

where $\ddot{x}_k$ is the current measured acceleration and $v_k$ is the desired acceleration command.

Parameters:

g_func (Callable[[np.ndarray, np.ndarray], np.ndarray]) – Nonlinear input matrix function $g(x, \dot{x})$.

compute_control(u0: ndarray, x_ddot0: ndarray, v: ndarray, x0: ndarray, x_dot0: ndarray) ndarray[source]

Compute the incremental control input $u$.

Parameters:
  • u0 (np.ndarray) – Previous control input vector (nu,).

  • x_ddot0 (np.ndarray) – Current measured/estimated acceleration vector (nx,).

  • v (np.ndarray) – Desired acceleration pseudo-control vector (nx,).

  • x0 (np.ndarray) – Current state/position vector (nx,).

  • x_dot0 (np.ndarray) – Current velocity vector (nx,).

Returns:

Control input $u$ (nu,).

Return type:

np.ndarray

class opengnc.optimal_control.indi_control.INDIOuterLoopPD(Kp: float | ndarray, Kd: float | ndarray)[source]

Bases: object

Standard PD Outer-Loop for INDI acceleration command generation.

Parameters:
  • Kp (Union[float, np.ndarray]) – Proportional gain matrix or scalar.

  • Kd (Union[float, np.ndarray]) – Derivative gain matrix or scalar.

compute_v(x: ndarray, x_dot: ndarray, x_d: ndarray, x_dot_d: ndarray, x_ddot_d: ndarray | None = None) ndarray[source]

Compute desired acceleration pseudo-control $v$.

Parameters:
  • x (np.ndarray) – Current position measurement.

  • x_dot (np.ndarray) – Current velocity measurement.

  • x_d (np.ndarray) – Desired position trajectory.

  • x_dot_d (np.ndarray) – Desired velocity trajectory.

  • x_ddot_d (np.ndarray, optional) – Desired feedforward acceleration. Defaults to zero.

Returns:

Acceleration command $v$.

Return type:

np.ndarray
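The incremental law and its PD outer loop are both short enough to sketch together (standalone functions with hypothetical numbers, not the package classes):

```python
import numpy as np

def indi_u(u0, x_ddot0, v, g):
    """Incremental law: u_k = u_{k-1} + g^{-1} (v_k - measured x_ddot_k)."""
    return u0 + np.linalg.solve(g, v - x_ddot0)

def pd_v(x, x_dot, x_d, x_dot_d, Kp, Kd, x_ddot_d=0.0):
    """Outer-loop PD acceleration command with optional feedforward."""
    return x_ddot_d + Kp * (x_d - x) + Kd * (x_dot_d - x_dot)

v = pd_v(x=np.array([0.9]), x_dot=np.array([0.0]),
         x_d=np.array([1.0]), x_dot_d=np.array([0.0]), Kp=4.0, Kd=2.0)
u = indi_u(u0=np.array([0.2]), x_ddot0=np.array([0.1]), v=v, g=np.eye(1))
```

Because the law only increments the previous input, model error in $f$ cancels; only $g$ must be known.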

opengnc.optimal_control.lqe module

Linear Quadratic Estimator (LQE) / Kalman Filter.

class opengnc.optimal_control.lqe.LQE(A: ndarray, G: ndarray, C: ndarray, Q: ndarray, R: ndarray)[source]

Bases: object

Linear Quadratic Estimator (LQE) / Kalman Filter gain designer.

Computes the optimal steady-state observer gain $L$ for a continuous-time system with additive Gaussian process and measurement noise.

System Model: $\dot{x} = Ax + Bu + Gw$, $y = Cx + v$, where $Q = E[ww^T]$ and $R = E[vv^T]$.

The resulting observer dynamics are: $\dot{\hat{x}} = A\hat{x} + Bu + L(y - C\hat{x})$

Parameters:
  • A (np.ndarray) – State matrix (nx x nx).

  • G (np.ndarray) – Process noise input matrix (nx x nw). Often Identity.

  • C (np.ndarray) – Output matrix (ny x nx).

  • Q (np.ndarray) – Process noise covariance matrix (nw x nw).

  • R (np.ndarray) – Measurement noise covariance matrix (ny x ny).

P

Steady-state estimation error covariance matrix.

Type:

np.ndarray

L

Optimal observer gain matrix (nx x ny).

Type:

np.ndarray

compute_gain() ndarray[source]

Compute the optimal observer gain matrix L.

$L = P C^T R^{-1}$

Returns:

Observer gain matrix L (nx x ny).

Return type:

np.ndarray

solve() ndarray[source]

Solve the Estimation Algebraic Riccati Equation (ARE).

$A P + P A^T - P C^T R^{-1} C P + G Q G^T = 0$

Returns:

The unique positive-definite solution matrix P.

Return type:

np.ndarray
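The estimation ARE is the dual of the control ARE (swap $A \to A^T$, $B \to C^T$), so it can be sketched with SciPy's solver and checked against the residual of the equation above (hypothetical system matrices):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
G = np.eye(2)
C = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2)          # process noise covariance
R = np.array([[0.01]])       # measurement noise covariance

P = solve_continuous_are(A.T, C.T, G @ Q @ G.T, R)  # dual of the control ARE
L = P @ C.T @ np.linalg.inv(R)                      # L = P C^T R^{-1}
residual = A @ P + P @ A.T - P @ C.T @ np.linalg.inv(R) @ C @ P + G @ Q @ G.T
```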

opengnc.optimal_control.lqg module

Linear Quadratic Gaussian (LQG) Controller.

class opengnc.optimal_control.lqg.LQG(A: ndarray, B: ndarray, C: ndarray, Q_lqr: ndarray, R_lqr: ndarray, Q_lqe: ndarray, R_lqe: ndarray, G_lqe: ndarray | None = None)[source]

Bases: object

Linear Quadratic Gaussian (LQG) Controller.

Integrates LQR optimal state-feedback with a Linear Quadratic Estimator (LQE/Kalman Filter). Operates on the ‘Separation Principle’, where the controller and observer are designed independently.

System Model: $\dot{x} = Ax + Bu + Gw$, $y = Cx + v$

Control Law: $u = -K \hat{x}$ Observer: $\dot{\hat{x}} = A\hat{x} + Bu + L(y - C\hat{x})$

Parameters:
  • A (np.ndarray) – State matrix (nx x nx).

  • B (np.ndarray) – Input matrix (nx x nu).

  • C (np.ndarray) – Output matrix (ny x nx).

  • Q_lqr (np.ndarray) – State cost matrix for LQR.

  • R_lqr (np.ndarray) – Input cost matrix for LQR.

  • Q_lqe (np.ndarray) – Process noise covariance matrix for estimator.

  • R_lqe (np.ndarray) – Measurement noise covariance matrix for estimator.

  • G_lqe (np.ndarray, optional) – Process noise input matrix. Defaults to Identity (nx x nx).

compute_control(y: ndarray | None = None, dt: float | None = None, u_last: ndarray | None = None) ndarray[source]

Compute the optimal control input based on the state estimate.

Parameters:
  • y (np.ndarray, optional) – New measurement for estimate update.

  • dt (float, optional) – Time step for estimate update.

  • u_last (np.ndarray, optional) – Previous control input for estimate update.

Returns:

Optimal control input $u = -K\hat{x}$ (nu,).

Return type:

np.ndarray

update_estimation(y: ndarray, u: ndarray, dt: float) ndarray[source]

Update the internal state estimate using observer dynamics.

Parameters:
  • y (np.ndarray) – Latest sensor measurement vector (ny,).

  • u (np.ndarray) – The control input vector actually applied (nu,).

  • dt (float) – Time step since the last update (s).

Returns:

The updated state estimate $\hat{x}$ (nx,).

Return type:

np.ndarray
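One LQG loop iteration is an Euler step of the observer followed by certainty-equivalence feedback. A standalone scalar sketch (the gains K and L here are hand-picked stabilizing values standing in for the precomputed LQR/Kalman gains):

```python
import numpy as np

A = np.array([[0.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
K = np.array([[1.0]])    # stand-in for the LQR gain
L = np.array([[2.0]])    # stand-in for the Kalman gain
dt = 0.01

x = np.array([1.0])      # true plant state (single integrator)
x_hat = np.array([0.0])  # observer state, deliberately wrong at t = 0
u = -K @ x_hat
for _ in range(1000):    # 10 s of simulation
    y = C @ x                                        # noiseless measurement
    x_hat = x_hat + dt * (A @ x_hat + B @ u + L @ (y - C @ x_hat))
    u = -K @ x_hat                                   # u = -K x_hat
    x = x + dt * (A @ x + B @ u)                     # plant Euler step
```

Both the estimation error and the regulated state decay, illustrating the separation principle.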

opengnc.optimal_control.lqr module

Linear Quadratic Regulator (LQR) Controller.

class opengnc.optimal_control.lqr.LQR(A: ndarray, B: ndarray, Q: ndarray, R: ndarray)[source]

Bases: object

Linear Quadratic Regulator (LQR).

Minimizes: $J = \int_0^{\infty} (\mathbf{x}^T \mathbf{Q} \mathbf{x} + \mathbf{u}^T \mathbf{R} \mathbf{u})\, dt$

Optimal Law: $\mathbf{u} = -\mathbf{K} \mathbf{x}$

Parameters:
  • A (np.ndarray) – State matrix ($n \times n$).

  • B (np.ndarray) – Input matrix ($n \times m$).

  • Q (np.ndarray) – State weight matrix ($n \times n$, positive semidefinite).

  • R (np.ndarray) – Control weight matrix ($m \times m$, positive definite).

compute_gain() ndarray[source]

Compute the optimal feedback gain matrix K.

$K = R^{-1} B^T P$

Returns:

Feedback gain matrix K (m x n).

Return type:

np.ndarray

solve() ndarray[source]

Solve the Continuous Algebraic Riccati Equation (CARE).

$A^T P + P A - P B R^{-1} B^T P + Q = 0$

Returns:

The unique positive-definite solution matrix P.

Return type:

np.ndarray
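The two steps documented above (solve the CARE, then form $K = R^{-1} B^T P$) can be sketched directly with SciPy for a double integrator, where the gain has the known closed form $K = [1, \sqrt{3}]$ for $Q = I$, $R = 1$:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator: x = [position, velocity], u = acceleration.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)   # CARE solution
K = np.linalg.solve(R, B.T @ P)        # K = R^{-1} B^T P
```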

opengnc.optimal_control.mpc module

Linear and Nonlinear Model Predictive Control (MPC) solvers.

class opengnc.optimal_control.mpc.LinearMPC(A: ndarray, B: ndarray, Q: ndarray, R: ndarray, horizon: int, P: ndarray | None = None, u_min: float | ndarray | None = None, u_max: float | ndarray | None = None, x_min: float | ndarray | None = None, x_max: float | ndarray | None = None)[source]

Bases: object

Linear Model Predictive Controller (MPC) using SLSQP optimization.

Solves a finite-horizon optimal control problem for linear discrete-time systems subject to state and input constraints.

Dynamics: $x[k+1] = A x[k] + B u[k]$ Objective: $\min \sum_{k=0}^{N-1} (x_k^T Q x_k + u_k^T R u_k) + x_N^T P x_N$

Parameters:
  • A (np.ndarray) – State transition matrix (nx x nx).

  • B (np.ndarray) – Input matrix (nx x nu).

  • Q (np.ndarray) – State cost matrix (nx x nx).

  • R (np.ndarray) – Input cost matrix (nu x nu).

  • horizon (int) – Prediction and control horizon N.

  • P (np.ndarray, optional) – Terminal state cost matrix. Defaults to Q.

  • u_min (float or np.ndarray, optional) – Minimum control input bound.

  • u_max (float or np.ndarray, optional) – Maximum control input bound.

  • x_min (float or np.ndarray, optional) – Minimum state constraint.

  • x_max (float or np.ndarray, optional) – Maximum state constraint.

solve(x0: ndarray) ndarray[source]

Solve the MPC optimization problem for a given initial state.

Parameters:

x0 (np.ndarray) – Current state of the system (nx,).

Returns:

Optimal control sequence [u_0, u_1, …, u_{N-1}] of shape (N, nu).

Return type:

np.ndarray
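The SLSQP formulation can be sketched by condensing the dynamics into a single-shooting objective over the input sequence (a standalone sketch of the same problem structure, with a hypothetical discrete double integrator, not the package class):

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discrete double integrator, dt = 0.1
B = np.array([[0.0], [0.1]])
Q, R, N = np.eye(2), 0.1 * np.eye(1), 15
x0 = np.array([1.0, 0.0])

def cost(u_flat):
    """Roll the dynamics forward and accumulate the quadratic cost."""
    u = u_flat.reshape(N, 1)
    x, J = x0.copy(), 0.0
    for k in range(N):
        J += x @ Q @ x + u[k] @ R @ u[k]
        x = A @ x + B @ u[k]
    return J + x @ Q @ x                  # terminal cost with P = Q

res = minimize(cost, np.zeros(N), method="SLSQP",
               bounds=[(-1.0, 1.0)] * N)  # input bounds u in [-1, 1]
u_opt = res.x.reshape(N, 1)
```

Only the first input of `u_opt` would be applied in a receding-horizon loop; the problem is then re-solved at the next sample.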

class opengnc.optimal_control.mpc.NonlinearMPC(dynamics_func: Callable[[ndarray, ndarray], ndarray], cost_func: Callable[[ndarray, ndarray], float], terminal_cost_func: Callable[[ndarray], float], horizon: int, nx: int, nu: int, u_min: float | ndarray | None = None, u_max: float | ndarray | None = None, x_min: float | ndarray | None = None, x_max: float | ndarray | None = None)[source]

Bases: object

Nonlinear Model Predictive Controller (NMPC).

Solves a finite-horizon optimal control problem for systems with nonlinear dynamics using single-shooting numerical optimization.

Dynamics: $x[k+1] = f(x[k], u[k])$ Objective: $\min \sum_{k=0}^{N-1} L(x_k, u_k) + V(x_N)$

Parameters:
  • dynamics_func (Callable[[np.ndarray, np.ndarray], np.ndarray]) – Nonlinear transition function $f(x, u)$.

  • cost_func (Callable[[np.ndarray, np.ndarray], float]) – Stage cost function $L(x, u)$.

  • terminal_cost_func (Callable[[np.ndarray], float]) – Terminal cost function $V(x)$.

  • horizon (int) – Prediction horizon N.

  • nx (int) – State dimension.

  • nu (int) – Input dimension.

  • u_min (float or np.ndarray, optional) – Lower control input bound.

  • u_max (float or np.ndarray, optional) – Upper control input bound.

  • x_min (float or np.ndarray, optional) – Lower state bound.

  • x_max (float or np.ndarray, optional) – Upper state bound.

solve(x0: ndarray) ndarray[source]

Solve the NMPC optimization problem using single shooting.

Parameters:

x0 (np.ndarray) – Initial state of the system (nx,).

Returns:

Optimal control sequence (N, nu).

Return type:

np.ndarray

opengnc.optimal_control.mpc_casadi module

Nonlinear Model Predictive Control (NMPC) using CasADi with Multiple Shooting.

class opengnc.optimal_control.mpc_casadi.CasadiNMPC(nx: int, nu: int, horizon: int, dt: float, dynamics_func: Callable, cost_func: Callable, terminal_cost_func: Callable, u_min: float | ndarray | None = None, u_max: float | ndarray | None = None, x_min: float | ndarray | None = None, x_max: float | ndarray | None = None, discrete: bool = False)[source]

Bases: object

High-performance Nonlinear Model Predictive Controller (NMPC) using CasADi.

Optimizes a finite-horizon cost function subject to nonlinear dynamics and algebraic constraints using a Multiple Shooting formulation for numerical stability and parallelism.

NOTE: Requires the ‘casadi’ package (pip install opengnc[mpc]).

Parameters:
  • nx (int) – State dimension.

  • nu (int) – Input dimension.

  • horizon (int) – Prediction horizon N.

  • dt (float) – Time step (s).

  • dynamics_func (Callable) – System dynamics f(x, u). Returns x_next if discrete, else dx/dt.

  • cost_func (Callable) – Stage cost function L(x, u).

  • terminal_cost_func (Callable) – Terminal cost function V(x).

  • u_min (float or np.ndarray, optional) – Lower control input bound.

  • u_max (float or np.ndarray, optional) – Upper control input bound.

  • x_min (float or np.ndarray, optional) – Lower state trajectory bound.

  • x_max (float or np.ndarray, optional) – Upper state trajectory bound.

  • discrete (bool, optional) – If True, the dynamics function is assumed discrete. If False, RK4 integration is performed internally. Default is False.

solve(x0: ndarray, u_guess: ndarray | None = None) ndarray[source]

Solve the NMPC problem for the given initial state.

Parameters:
  • x0 (np.ndarray) – Current system state (nx,).

  • u_guess (np.ndarray, optional) – Initial guess for control trajectory (N, nu).

Returns:

Optimal control trajectory (N, nu).

Return type:

np.ndarray

class opengnc.optimal_control.mpc_casadi.ca[source]

Bases: object

Placeholder for CasADi types when not installed.

MX

alias of Any

SX

alias of Any

opengnc.optimal_control.passivity_control module

Passivity-Based Controller for Euler-Lagrange mechanical systems.

class opengnc.optimal_control.passivity_control.PassivityBasedController(M_func: Callable[[ndarray], ndarray], C_func: Callable[[ndarray, ndarray], ndarray], G_func: Callable[[ndarray], ndarray], K_d: float | ndarray, Lambda: float | ndarray)[source]

Bases: object

Passivity-Based Controller for Euler-Lagrange mechanical systems.

Exploits the energy and passivity properties of mechanical systems to ensure global stability. This implementation is based on the Slotine & Li adaptive/passivity scheme.

System model: $M(q)\ddot{q} + C(q, \dot{q})\dot{q} + G(q) = u$

Parameters:
  • M_func (Callable[[np.ndarray], np.ndarray]) – Inertia matrix function $M(q)$ (n x n).

  • C_func (Callable[[np.ndarray, np.ndarray], np.ndarray]) – Coriolis and centrifugal matrix function $C(q, \dot{q})$ (n x n).

  • G_func (Callable[[np.ndarray], np.ndarray]) – Gravity/damping vector function $G(q)$ (n,).

  • K_d (Union[float, np.ndarray]) – Dissipative (damping) gain matrix (n x n).

  • Lambda (Union[float, np.ndarray]) – Proportional error convergence matrix (n x n).

compute_control(q: ndarray, q_dot: ndarray, q_d: ndarray, q_dot_d: ndarray, q_ddot_d: ndarray | None = None) ndarray[source]

Compute the passivity-based control torque output.

Parameters:
  • q (np.ndarray) – Current generalized coordinates (n,).

  • q_dot (np.ndarray) – Current generalized velocities (n,).

  • q_d (np.ndarray) – Desired coordinate trajectory (n,).

  • q_dot_d (np.ndarray) – Desired velocity trajectory (n,).

  • q_ddot_d (np.ndarray, optional) – Desired feedforward acceleration (n,). Defaults to zero.

Returns:

Control input vector $u$ (n,).

Return type:

np.ndarray
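A sketch of the Slotine & Li law for a hypothetical 1-DOF pendulum (standalone, not the package class; on the desired trajectory the law reduces to gravity compensation):

```python
import numpy as np

def slotine_li_u(q, q_dot, q_d, q_dot_d, q_ddot_d, M, C, G, Kd, Lam):
    """u = M(q) qr'' + C(q, q') qr' + G(q) - Kd s,
    with qr' = q_d' - Lam (q - q_d) and s = q' - qr'."""
    e, e_dot = q - q_d, q_dot - q_dot_d
    qr_dot = q_dot_d - Lam @ e           # reference velocity
    qr_ddot = q_ddot_d - Lam @ e_dot     # reference acceleration
    s = q_dot - qr_dot                   # composite sliding-like error
    return M(q) @ qr_ddot + C(q, q_dot) @ qr_dot + G(q) - Kd @ s

# Hypothetical pendulum: M = m*l^2, C = 0, G = m*g*l*sin(q).
m, l, grav = 1.0, 1.0, 9.81
M = lambda q: np.array([[m * l**2]])
C = lambda q, q_dot: np.array([[0.0]])
G = lambda q: np.array([m * grav * l * np.sin(q[0])])
Kd, Lam = 5.0 * np.eye(1), 2.0 * np.eye(1)

q_d = np.array([0.5])
u = slotine_li_u(q_d.copy(), np.zeros(1), q_d, np.zeros(1), np.zeros(1),
                 M, C, G, Kd, Lam)
```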

opengnc.optimal_control.sliding_mode module

Sliding Mode Controller (SMC) implementation.

class opengnc.optimal_control.sliding_mode.SlidingModeController(surface_func: Callable[[ndarray, float], float], k_gain: float, equivalent_control_func: Callable[[ndarray, float], float] | None = None, chattering_reduction: bool = True, boundary_layer: float = 0.1)[source]

Bases: object

Sliding Mode Controller (SMC).

Control Law: $u = u_{eq} - K \cdot \text{sat}(s/\Phi)$

Parameters:
  • surface_func (Callable) – Sliding surface $s(\mathbf{x}, t)$.

  • k_gain (float) – Switching gain $K$.

  • equivalent_control_func (Callable | None, optional) – $\mathbf{u}_{eq}$ component.

  • chattering_reduction (bool, optional) – Use saturation instead of sign. Default True.

  • boundary_layer (float, optional) – Saturation boundary $\Phi$. Default is 0.1.

compute_control(x: ndarray, t: float = 0.0) float | ndarray[source]

Compute the sliding mode control input.

Parameters:
  • x (np.ndarray) – Current state vector.

  • t (float, optional) – Current time (s). Default is 0.0.

Returns:

The computed control input signal.

Return type:

float or np.ndarray
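The saturated control law can be sketched directly (a standalone function with a hypothetical sliding surface, not the package class). Inside the boundary layer the control is proportional to $s$; outside it saturates at $\pm K$:

```python
import numpy as np

def smc_u(x, t, surface, k_gain, u_eq=None, boundary_layer=0.1):
    """u = u_eq - K * sat(s / Phi), with sat = clip to [-1, 1]."""
    s = surface(x, t)
    sat = np.clip(s / boundary_layer, -1.0, 1.0)
    ueq = u_eq(x, t) if u_eq is not None else 0.0
    return ueq - k_gain * sat

# Hypothetical surface for a 2-state system: s = x_dot + lam * x.
lam = 2.0
surface = lambda x, t: x[1] + lam * x[0]

u_inside = smc_u(np.array([0.01, 0.0]), 0.0, surface, k_gain=5.0)   # |s| < Phi
u_outside = smc_u(np.array([1.0, 0.0]), 0.0, surface, k_gain=5.0)   # saturated
```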

Module contents

class opengnc.optimal_control.BacksteppingController(f_func: Callable[[ndarray, ndarray], ndarray], g_func: Callable[[ndarray, ndarray], ndarray], k1: float | ndarray, k2: float | ndarray)[source]

Bases: object

Backstepping Controller for cascaded 2nd-order nonlinear systems.

Designed for systems of the form: $\dot{x}_1 = x_2$, $\dot{x}_2 = f(x_1, x_2) + g(x_1, x_2) u$

This implementation uses a standard recursive design to achieve global asymptotic stability of a desired trajectory.

Parameters:
  • f_func (Callable[[np.ndarray, np.ndarray], np.ndarray]) – Nonlinear drift function $f(x_1, x_2)$.

  • g_func (Callable[[np.ndarray, np.ndarray], np.ndarray]) – Nonlinear input matrix $g(x_1, x_2)$.

  • k1 (Union[float, np.ndarray]) – Feedback gain for the first error state (position).

  • k2 (Union[float, np.ndarray]) – Feedback gain for the second error state (velocity).

compute_control(x1: ndarray, x2: ndarray, x1_d: ndarray, x1_dot_d: ndarray, x1_ddot_d: ndarray | None = None) ndarray[source]

Compute the backstepping control input for the current state.

Parameters:
  • x1 (np.ndarray) – Measured position/primary state vector (n,).

  • x2 (np.ndarray) – Measured velocity/secondary state vector (n,).

  • x1_d (np.ndarray) – Desired position/trajectory (n,).

  • x1_dot_d (np.ndarray) – Desired velocity (n,).

  • x1_ddot_d (np.ndarray, optional) – Desired acceleration (n,). Defaults to zero.

Returns:

Control input vector $u$ (m,).

Return type:

np.ndarray

class opengnc.optimal_control.CasadiNMPC(nx: int, nu: int, horizon: int, dt: float, dynamics_func: Callable, cost_func: Callable, terminal_cost_func: Callable, u_min: float | ndarray | None = None, u_max: float | ndarray | None = None, x_min: float | ndarray | None = None, x_max: float | ndarray | None = None, discrete: bool = False)[source]

Bases: object

High-performance Nonlinear Model Predictive Controller (NMPC) using CasADi.

Optimizes a finite-horizon cost function subject to nonlinear dynamics and algebraic constraints using a Multiple Shooting formulation for numerical stability and parallelism.

NOTE: Requires the ‘casadi’ package (pip install opengnc[mpc]).

Parameters:
  • nx (int) – State dimension.

  • nu (int) – Input dimension.

  • horizon (int) – Prediction horizon N.

  • dt (float) – Time step (s).

  • dynamics_func (Callable) – System dynamics f(x, u). Returns x_next if discrete, else dx/dt.

  • cost_func (Callable) – Stage cost function L(x, u).

  • terminal_cost_func (Callable) – Terminal cost function V(x).

  • u_min (float or np.ndarray, optional) – Lower control input bound.

  • u_max (float or np.ndarray, optional) – Upper control input bound.

  • x_min (float or np.ndarray, optional) – Lower state trajectory bound.

  • x_max (float or np.ndarray, optional) – Upper state trajectory bound.

  • discrete (bool, optional) – If True, the dynamics function is assumed discrete. If False, RK4 integration is performed internally. Default is False.

solve(x0: ndarray, u_guess: ndarray | None = None) ndarray[source]

Solve the NMPC problem for the given initial state.

Parameters:
  • x0 (np.ndarray) – Current system state (nx,).

  • u_guess (np.ndarray, optional) – Initial guess for control trajectory (N, nu).

Returns:

Optimal control trajectory (N, nu).

Return type:

np.ndarray

class opengnc.optimal_control.FeedbackLinearization(f_func: Callable[[ndarray], ndarray], g_func: Callable[[ndarray], ndarray])[source]

Bases: object

Feedback Linearization Controller for nonlinear systems.

Transforms a nonlinear system into an equivalent linear system through algebraic state feedback.

System model: $\dot{x} = f(x) + g(x) u$

By choosing $u = g(x)^{-1} (v - f(x))$, the closed-loop system becomes $\dot{x} = v$, where $v$ is the pseudo-control input (linear command).

Parameters:
  • f_func (Callable[[np.ndarray], np.ndarray]) – Nonlinear drift function $f(x)$.

  • g_func (Callable[[np.ndarray], np.ndarray]) – Nonlinear input matrix $g(x)$.

compute_control(x: ndarray, v: ndarray) ndarray[source]

Compute the linearizing control input $u$.

Parameters:
  • x (np.ndarray) – Current state vector (nx,).

  • v (np.ndarray) – Desired linear acceleration/pseudo-control vector (nx,).

Returns:

The required control input $u$ (nu,).

Return type:

np.ndarray

class opengnc.optimal_control.FiniteHorizonLQR(A_fn: Callable[[float], ndarray], B_fn: Callable[[float], ndarray], Q_fn: Callable[[float], ndarray], R_fn: Callable[[float], ndarray], Pf: ndarray, T: float)[source]

Bases: object

Finite-Horizon Linear Quadratic Regulator (LQR).

Computes a time-varying optimal control law $u(t) = -K(t)x(t)$ for systems over a fixed time interval $[0, T]$.

Objective: $\min J = x(T)^T P_f x(T) + \int_{0}^{T} (x^T Q(t) x + u^T R(t) u)\, dt$

Parameters:
  • A_fn (Callable[[float], np.ndarray]) – Time-varying state matrix $A(t)$ (nx x nx).

  • B_fn (Callable[[float], np.ndarray]) – Time-varying input matrix $B(t)$ (nx x nu).

  • Q_fn (Callable[[float], np.ndarray]) – Time-varying state cost matrix $Q(t)$ (nx x nx).

  • R_fn (Callable[[float], np.ndarray]) – Time-varying input cost matrix $R(t)$ (nu x nu).

  • Pf (np.ndarray) – Terminal state cost matrix at $t=T$ (nx x nx).

  • T (float) – The fixed final time of the optimization horizon.

compute_control(x: ndarray, t: float) ndarray[source]

Compute optimal control input $u = -K(t)x$.

Parameters:
  • x (np.ndarray) – Current state vector (nx,).

  • t (float) – Current time (s).

Returns:

Optimal control input u (nu,).

Return type:

np.ndarray

get_gain(t: float) ndarray[source]

Compute/interpolate the optimal feedback gain K at time t.

Parameters:

t (float) – Current time (s).

Returns:

Feedback gain matrix K (nu x nx).

Return type:

np.ndarray

solve(num_points: int = 100) tuple[ndarray, ndarray][source]

Solve the Differential Riccati Equation (DRE) backwards in time.

$-\dot{P} = P A + A^T P - P B R^{-1} B^T P + Q$, with $P(T) = P_f$

Parameters:

num_points (int, optional) – Number of points for the solution trajectory. Default is 100.

Returns:

A tuple of (t_span, P_trajectory) where P_trajectory is (num_points, nx, nx).

Return type:

Tuple[np.ndarray, np.ndarray]
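The backward DRE sweep that `solve` performs can be sketched with SciPy's ODE solver. This is a standalone illustration, assuming constant matrices for simplicity (the time-varying case would just evaluate `A_fn(t)` etc. inside the right-hand side); for a long horizon $P(0)$ should approach the steady-state CARE solution:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import solve_continuous_are

# Double integrator with constant weights (A_fn etc. reduce to constants here).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
Pf = np.zeros((2, 2))
T = 20.0  # long horizon: P(0) should be close to the infinite-horizon solution

def dre_rhs(t, p_flat):
    # dP/dt = -(P A + A^T P - P B R^{-1} B^T P + Q)
    P = p_flat.reshape(2, 2)
    dP = -(P @ A + A.T @ P - P @ B @ np.linalg.solve(R, B.T) @ P + Q)
    return dP.ravel()

# Integrate backward from t = T to t = 0 (solve_ivp accepts a decreasing span).
sol = solve_ivp(dre_rhs, [T, 0.0], Pf.ravel(), rtol=1e-8, atol=1e-8)
P0 = sol.y[:, -1].reshape(2, 2)

P_inf = solve_continuous_are(A, B, Q, R)  # infinite-horizon reference
```

The gain at any stored time is then $K(t) = R^{-1} B^T P(t)$, which `get_gain` interpolates from the trajectory.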

class opengnc.optimal_control.GeometricController(J: ndarray, kR: float, kW: float)[source]

Bases: object

Geometric Controller on the Special Orthogonal Group SO(3).

Implements a tracking controller directly on the rotation matrix manifold, avoiding singularities and unwinding issues associated with Euler angles and quaternions.

Ref: Lee, T., Leok, M., & McClamroch, N. H. (2010). Geometric Tracking Control of a Quadrotor UAV on SE(3).

Parameters:
  • J (np.ndarray) – Spacecraft inertia tensor matrix (3x3) [kg*m^2].

  • kR (float) – Attitude error gain (proportional).

  • kW (float) – Angular velocity error gain (derivative).

compute_control(R: ndarray, omega: ndarray, R_d: ndarray, omega_d: ndarray, d_omega_d: ndarray | None = None) ndarray[source]

Compute the geometric tracking control torque on SO(3).

Parameters:
  • R (np.ndarray) – Current rotation matrix (3x3).

  • omega (np.ndarray) – Current angular velocity in body frame (3,).

  • R_d (np.ndarray) – Desired rotation matrix (3x3).

  • omega_d (np.ndarray) – Desired angular velocity in body frame (3,).

  • d_omega_d (np.ndarray, optional) – Desired angular acceleration (3,). Defaults to zero.

Returns:

Control torque vector M (3,) [N*m].

Return type:

np.ndarray
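The attitude and rate error terms can be sketched directly from the error definitions in Lee, Leok & McClamroch (2010). This is a simplified standalone illustration (the feedforward terms involving the desired angular acceleration are omitted here for brevity), not the library implementation:

```python
import numpy as np

def vee(S):
    """Inverse of the hat map: extract a (3,) vector from a skew matrix."""
    return np.array([S[2, 1], S[0, 2], S[1, 0]])

def so3_torque(R, omega, R_d, omega_d, J, kR, kW):
    # Attitude and angular-velocity errors on SO(3) (Lee et al., 2010).
    e_R = 0.5 * vee(R_d.T @ R - R.T @ R_d)
    e_W = omega - R.T @ R_d @ omega_d
    # Gyroscopic compensation; desired-acceleration feedforward omitted.
    return -kR * e_R - kW * e_W + np.cross(omega, J @ omega)

J = np.diag([0.02, 0.02, 0.04])
R = R_d = np.eye(3)
omega = omega_d = np.zeros(3)
M = so3_torque(R, omega, R_d, omega_d, J, kR=8.0, kW=2.0)
# With zero attitude error and zero rates, the commanded torque vanishes.
assert np.allclose(M, 0.0)
```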

class opengnc.optimal_control.H2Controller(A: ndarray, B: ndarray, C: ndarray, Q_lqr: ndarray, R_lqr: ndarray, Q_lqe: ndarray, R_lqe: ndarray, G_lqe: ndarray | None = None)[source]

Bases: LQG

H2-Optimal Controller.

Solves the standard H2 control problem for linear state-space systems. For systems with Gaussian noise and quadratic performance indices, the H2-optimal controller is equivalent to the Linear Quadratic Gaussian (LQG) controller.

Parameters:
  • A (np.ndarray) – State matrix (nx x nx).

  • B (np.ndarray) – Control input matrix (nx x nu).

  • C (np.ndarray) – Output matrix (ny x nx).

  • Q_lqr (np.ndarray) – State weighting matrix for LQR (nx x nx).

  • R_lqr (np.ndarray) – Control weighting matrix for LQR (nu x nu).

  • Q_lqe (np.ndarray) – Process noise covariance matrix for LQE (nw x nw).

  • R_lqe (np.ndarray) – Measurement noise covariance matrix for LQE (ny x ny).

  • G_lqe (np.ndarray, optional) – Process noise input matrix (nx x nw). Defaults to Identity.

solve() tuple[ndarray, ndarray][source]

Solve LQR and LQE optimal gain design problems.

Returns:

Optimal feedback gain K (nu x nx) and observer gain L (nx x ny).

Return type:

Tuple[np.ndarray, np.ndarray]

class opengnc.optimal_control.HInfinityController(A: ndarray, B1: ndarray, B2: ndarray, Q: ndarray, R: ndarray, gamma: float)[source]

Bases: object

H-Infinity Robust State-Feedback Controller.

Designs a control law $u = -Kx$ that minimizes the $H_\infty$ norm of the closed-loop transfer function from external disturbances $w$ to performance outputs $z$, ensuring robustness against worst-case disturbances.

System Dynamics: $\dot{x} = Ax + B_1 w + B_2 u$. Cost Function: $J = \int_{0}^{\infty} (x^T Q x + u^T R u - \gamma^2 w^T w)\, dt$

Parameters:
  • A (np.ndarray) – State matrix (nx x nx).

  • B1 (np.ndarray) – Disturbance input matrix (nx x nw).

  • B2 (np.ndarray) – Control input matrix (nx x nu).

  • Q (np.ndarray) – State cost matrix (nx x nx).

  • R (np.ndarray) – Control cost matrix (nu x nu).

  • gamma (float) – Target $L_2$-gain attenuation level (robustness margin).

compute_control(x: ndarray) ndarray[source]

Compute the control input based on the state vector.

Parameters:

x (np.ndarray) – Current state vector (nx,).

Returns:

Control input vector $u = -Kx$ (nu,).

Return type:

np.ndarray

compute_gain() ndarray[source]

Compute the optimal robust feedback gain matrix K.

$K = R^{-1} B_2^T P$

Returns:

Feedback gain matrix K (nu x nx).

Return type:

np.ndarray

solve() ndarray[source]

Solve the H-infinity Algebraic Riccati Equation (ARE).

$A^T P + P A + P \left( \frac{1}{\gamma^2} B_1 B_1^T - B_2 R^{-1} B_2^T \right) P + Q = 0$

Returns:

The positive-definite Riccati solution matrix P.

Return type:

np.ndarray

Raises:

ValueError – If no solution exists for the given attenuation level gamma.
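For a single-state system the H-infinity ARE reduces to a scalar quadratic, which makes the structure easy to see. This is a worked example with made-up scalar parameters, not the library's solver:

```python
import numpy as np

# Scalar H-infinity ARE:  2 a P + (b1^2/gamma^2 - b2^2/r) P^2 + q = 0
a, b1, b2, q, r, gamma = -1.0, 1.0, 1.0, 1.0, 1.0, 2.0

c = b1**2 / gamma**2 - b2**2 / r         # coefficient on P^2
roots = np.real(np.roots([c, 2.0 * a, q]))  # c P^2 + 2 a P + q = 0
P = float(np.max(roots))                 # take the positive stabilizing root
assert P > 0  # if gamma is too small, no positive root exists (ValueError case)

# Residual of the ARE should vanish at the solution.
residual = 2 * a * P + c * P**2 + q
assert abs(residual) < 1e-9

K = b2 * P / r                           # feedback gain for u = -K x
```

Shrinking `gamma` makes `c` more positive until the quadratic loses its positive root, which is exactly the infeasibility the `ValueError` reports.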

class opengnc.optimal_control.INDIController(g_func: Callable[[ndarray, ndarray], ndarray])[source]

Bases: object

Incremental Nonlinear Dynamic Inversion (INDI) Controller.

A sensor-based control method that reduces model dependency by calculating control increments based on measured accelerations.

System model: $\ddot{x} = f(x, \dot{x}) + g(x, \dot{x}) u$. Discrete Law: $u_k = u_{k-1} + g(x_k, \dot{x}_k)^{-1} (v_k - \ddot{x}_k)$

where $\ddot{x}_k$ is the current measured acceleration and $v_k$ is the desired acceleration command.

Parameters:

g_func (Callable[[np.ndarray, np.ndarray], np.ndarray]) – Nonlinear input matrix function $g(x, \dot{x})$.

compute_control(u0: ndarray, x_ddot0: ndarray, v: ndarray, x0: ndarray, x_dot0: ndarray) ndarray[source]

Compute the incremental control input $u$.

Parameters:
  • u0 (np.ndarray) – Previous control input vector (nu,).

  • x_ddot0 (np.ndarray) – Current measured/estimated acceleration vector (nx,).

  • v (np.ndarray) – Desired acceleration pseudo-control vector (nx,).

  • x0 (np.ndarray) – Current state/position vector (nx,).

  • x_dot0 (np.ndarray) – Current velocity vector (nx,).

Returns:

Updated control input $u$ (nu,).

Return type:

np.ndarray
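The incremental law is a one-liner once the control effectiveness is known. A standalone sketch with an illustrative scalar effectiveness (not the library implementation):

```python
import numpy as np

def indi_increment(u0, x_ddot0, v, g):
    """u_k = u_{k-1} + g^{-1} (v - x_ddot_measured)."""
    return u0 + np.linalg.solve(np.atleast_2d(g), v - x_ddot0)

# Example: control effectiveness g = 2 (e.g. an input-to-acceleration map).
u0 = np.array([0.3])        # previous control input
x_ddot0 = np.array([0.5])   # measured acceleration produced by u0
v = np.array([1.0])         # commanded acceleration
u = indi_increment(u0, x_ddot0, v, np.array([[2.0]]))

# The increment cancels the acceleration error: g * (u - u0) == v - x_ddot0.
assert np.allclose(2.0 * (u - u0), v - x_ddot0)
```

Because only the increment depends on the model, bias in $f(x, \dot{x})$ is absorbed by the measured acceleration, which is the method's main appeal.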

class opengnc.optimal_control.INDIOuterLoopPD(Kp: float | ndarray, Kd: float | ndarray)[source]

Bases: object

Standard PD Outer-Loop for INDI acceleration command generation.

Parameters:
  • Kp (Union[float, np.ndarray]) – Proportional gain matrix or scalar.

  • Kd (Union[float, np.ndarray]) – Derivative gain matrix or scalar.

compute_v(x: ndarray, x_dot: ndarray, x_d: ndarray, x_dot_d: ndarray, x_ddot_d: ndarray | None = None) ndarray[source]

Compute desired acceleration pseudo-control $v$.

Parameters:
  • x (np.ndarray) – Current position vector.

  • x_dot (np.ndarray) – Current velocity vector.

  • x_d (np.ndarray) – Desired position trajectory.

  • x_dot_d (np.ndarray) – Desired velocity trajectory.

  • x_ddot_d (np.ndarray, optional) – Desired feedforward acceleration. Defaults to zero.

Returns:

Acceleration command $v$.

Return type:

np.ndarray
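The PD pseudo-control is the standard $v = \ddot{x}_d + K_p (x_d - x) + K_d (\dot{x}_d - \dot{x})$. A minimal sketch with illustrative gains (not the library implementation):

```python
import numpy as np

def pd_pseudo_control(x, x_dot, x_d, x_dot_d, x_ddot_d=None, Kp=4.0, Kd=2.0):
    """v = x_ddot_d + Kp (x_d - x) + Kd (x_dot_d - x_dot)."""
    ff = np.zeros_like(x) if x_ddot_d is None else x_ddot_d
    return ff + Kp * (x_d - x) + Kd * (x_dot_d - x_dot)

# Unit position error at rest: the command is a pure proportional response.
v = pd_pseudo_control(np.array([0.0]), np.array([0.0]),
                      np.array([1.0]), np.array([0.0]))
assert np.allclose(v, 4.0)
```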

class opengnc.optimal_control.LQE(A: ndarray, G: ndarray, C: ndarray, Q: ndarray, R: ndarray)[source]

Bases: object

Linear Quadratic Estimator (LQE) / Kalman Filter gain designer.

Computes the optimal steady-state observer gain $L$ for a continuous-time system with additive Gaussian process and measurement noise.

System Model: $\dot{x} = Ax + Bu + Gw$, $y = Cx + v$, where $Q = E[ww^T]$ and $R = E[vv^T]$.

The resulting observer dynamics are: $\dot{\hat{x}} = A\hat{x} + Bu + L(y - C\hat{x})$

Parameters:
  • A (np.ndarray) – State matrix (nx x nx).

  • G (np.ndarray) – Process noise input matrix (nx x nw). Often Identity.

  • C (np.ndarray) – Output matrix (ny x nx).

  • Q (np.ndarray) – Process noise covariance matrix (nw x nw).

  • R (np.ndarray) – Measurement noise covariance matrix (ny x ny).

P

Steady-state estimation error covariance matrix.

Type:

np.ndarray

L

Optimal observer gain matrix (nx x ny).

Type:

np.ndarray

compute_gain() ndarray[source]

Compute the optimal observer gain matrix L.

$L = P C^T R^{-1}$

Returns:

Observer gain matrix L (nx x ny).

Return type:

np.ndarray

solve() ndarray[source]

Solve the Estimation Algebraic Riccati Equation (ARE).

$A P + P A^T - P C^T R^{-1} C P + G Q G^T = 0$

Returns:

The unique positive-definite solution matrix P.

Return type:

np.ndarray
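The estimation ARE is the control CARE applied to the dual pair $(A^T, C^T)$, so SciPy's CARE solver handles it directly. A standalone sketch with illustrative matrices (not the library internals):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, -0.5]])
G = np.eye(2)
C = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2)      # process noise covariance
R = np.array([[0.01]])   # measurement noise covariance

# Duality: solve the control CARE for (A^T, C^T) with cost G Q G^T.
P = solve_continuous_are(A.T, C.T, G @ Q @ G.T, R)
L = P @ C.T @ np.linalg.inv(R)   # L = P C^T R^{-1}

# Check the estimation ARE: A P + P A^T - P C^T R^{-1} C P + G Q G^T = 0
res = A @ P + P @ A.T - P @ C.T @ np.linalg.inv(R) @ C @ P + G @ Q @ G.T
assert np.max(np.abs(res)) < 1e-8
# The observer error dynamics A - L C must be stable.
assert np.all(np.linalg.eigvals(A - L @ C).real < 0)
```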

class opengnc.optimal_control.LQG(A: ndarray, B: ndarray, C: ndarray, Q_lqr: ndarray, R_lqr: ndarray, Q_lqe: ndarray, R_lqe: ndarray, G_lqe: ndarray | None = None)[source]

Bases: object

Linear Quadratic Gaussian (LQG) Controller.

Integrates LQR optimal state-feedback with a Linear Quadratic Estimator (LQE/Kalman filter). Relies on the separation principle: the controller and observer can be designed independently without loss of optimality.

System Model: $\dot{x} = Ax + Bu + Gw$, $y = Cx + v$

Control Law: $u = -K\hat{x}$ Observer: $\dot{\hat{x}} = A\hat{x} + Bu + L(y - C\hat{x})$

Parameters:
  • A (np.ndarray) – State matrix (nx x nx).

  • B (np.ndarray) – Input matrix (nx x nu).

  • C (np.ndarray) – Output matrix (ny x nx).

  • Q_lqr (np.ndarray) – State cost matrix for LQR.

  • R_lqr (np.ndarray) – Input cost matrix for LQR.

  • Q_lqe (np.ndarray) – Process noise covariance matrix for estimator.

  • R_lqe (np.ndarray) – Measurement noise covariance matrix for estimator.

  • G_lqe (np.ndarray, optional) – Process noise input matrix. Defaults to Identity (nx x nx).

compute_control(y: ndarray | None = None, dt: float | None = None, u_last: ndarray | None = None) ndarray[source]

Compute the optimal control input based on the state estimate.

Parameters:
  • y (np.ndarray, optional) – New measurement for estimate update.

  • dt (float, optional) – Time step for estimate update.

  • u_last (np.ndarray, optional) – Previous control input for estimate update.

Returns:

Optimal control input $u = -K\hat{x}$ (nu,).

Return type:

np.ndarray

update_estimation(y: ndarray, u: ndarray, dt: float) ndarray[source]

Update the internal state estimate using observer dynamics.

Parameters:
  • y (np.ndarray) – Latest sensor measurement vector (ny,).

  • u (np.ndarray) – The control input vector actually applied (nu,).

  • dt (float) – Time step since the last update (s).

Returns:

The updated state estimate $\hat{x}$ (nx,).

Return type:

np.ndarray
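The separation principle means an LQG loop is just the two Riccati designs glued together by the observer integration. A standalone sketch with illustrative weights, including an Euler step of the observer like `update_estimation` performs (not the library internals):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
R_lqr = np.array([[1.0]])
R_lqe = np.array([[0.01]])

# Separation principle: design K (control) and L (estimation) independently.
P_c = solve_continuous_are(A, B, np.eye(2), R_lqr)
K = np.linalg.solve(R_lqr, B.T @ P_c)
P_e = solve_continuous_are(A.T, C.T, 0.1 * np.eye(2), R_lqe)
L = P_e @ C.T @ np.linalg.inv(R_lqe)

def observer_step(x_hat, y, u, dt):
    """Euler step of x_hat_dot = A x_hat + B u + L (y - C x_hat)."""
    return x_hat + dt * (A @ x_hat + B @ u + L @ (y - C @ x_hat))

x_hat = np.zeros(2)
u = -K @ x_hat                   # control law u = -K x_hat
x_hat = observer_step(x_hat, y=np.array([0.2]), u=u, dt=0.01)
```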

class opengnc.optimal_control.LQR(A: ndarray, B: ndarray, Q: ndarray, R: ndarray)[source]

Bases: object

Linear Quadratic Regulator (LQR).

Minimizes: $J = \int_0^\infty (\mathbf{x}^T \mathbf{Q} \mathbf{x} + \mathbf{u}^T \mathbf{R} \mathbf{u})\, dt$

Optimal Law: $\mathbf{u} = -\mathbf{K} \mathbf{x}$

Parameters:
  • A (np.ndarray) – State matrix ($n \times n$).

  • B (np.ndarray) – Input matrix ($n \times m$).

  • Q (np.ndarray) – State weight matrix ($n \times n$, positive semidefinite).

  • R (np.ndarray) – Input weight matrix ($m \times m$, positive definite).

compute_gain() ndarray[source]

Compute the optimal feedback gain matrix K.

$K = R^{-1} B^T P$

Returns:

Feedback gain matrix K (m x n).

Return type:

np.ndarray

solve() ndarray[source]

Solve the Continuous Algebraic Riccati Equation (CARE).

$A^T P + P A - P B R^{-1} B^T P + Q = 0$

Returns:

The unique positive-definite solution matrix P.

Return type:

np.ndarray
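The CARE-plus-gain recipe maps directly onto SciPy. A standalone sketch on the double integrator (not the library internals); for this system the gain works out analytically to $K = [1, \sqrt{3}]$:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])  # double integrator
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)    # CARE solution
K = np.linalg.solve(R, B.T @ P)         # K = R^{-1} B^T P

# Analytical result for this plant: K = [1, sqrt(3)].
assert np.allclose(K, [[1.0, np.sqrt(3.0)]])
# Closed-loop A - B K is Hurwitz (all eigenvalues in the open left half-plane).
assert np.all(np.linalg.eigvals(A - B @ K).real < 0)
```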

class opengnc.optimal_control.LinearMPC(A: ndarray, B: ndarray, Q: ndarray, R: ndarray, horizon: int, P: ndarray | None = None, u_min: float | ndarray | None = None, u_max: float | ndarray | None = None, x_min: float | ndarray | None = None, x_max: float | ndarray | None = None)[source]

Bases: object

Linear Model Predictive Controller (MPC) using SLSQP optimization.

Solves a finite-horizon optimal control problem for linear discrete-time systems subject to state and input constraints.

Dynamics: $x[k+1] = A x[k] + B u[k]$ Objective: $\min \sum_{k=0}^{N-1} (x_k^T Q x_k + u_k^T R u_k) + x_N^T P x_N$

Parameters:
  • A (np.ndarray) – State transition matrix (nx x nx).

  • B (np.ndarray) – Input matrix (nx x nu).

  • Q (np.ndarray) – State cost matrix (nx x nx).

  • R (np.ndarray) – Input cost matrix (nu x nu).

  • horizon (int) – Prediction and control horizon N.

  • P (np.ndarray, optional) – Terminal state cost matrix. Defaults to Q.

  • u_min (float or np.ndarray, optional) – Minimum control input bound.

  • u_max (float or np.ndarray, optional) – Maximum control input bound.

  • x_min (float or np.ndarray, optional) – Minimum state constraint.

  • x_max (float or np.ndarray, optional) – Maximum state constraint.

solve(x0: ndarray) ndarray[source]

Solve the MPC optimization problem for a given initial state.

Parameters:

x0 (np.ndarray) – Current state of the system (nx,).

Returns:

Optimal control sequence [u_0, u_1, …, u_{N-1}] of shape (N, nu).

Return type:

np.ndarray
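The SLSQP formulation can be sketched in standalone SciPy: flatten the control sequence into the decision vector, roll the dynamics forward inside the cost, and pass input bounds to the optimizer. Matrices and horizon below are illustrative, not the library internals (terminal cost defaults to Q, as documented):

```python
import numpy as np
from scipy.optimize import minimize

# Discrete double integrator, dt = 0.1
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q, R, N = np.eye(2), np.array([[0.1]]), 10
u_lim = 1.0

def mpc_cost(u_flat, x0):
    """Roll dynamics forward over the horizon and accumulate the cost."""
    u = u_flat.reshape(N, 1)
    x, J = x0, 0.0
    for k in range(N):
        J += x @ Q @ x + u[k] @ R @ u[k]
        x = A @ x + B @ u[k]
    return J + x @ Q @ x   # terminal cost with P = Q (the documented default)

x0 = np.array([1.0, 0.0])
res = minimize(mpc_cost, np.zeros(N), args=(x0,), method="SLSQP",
               bounds=[(-u_lim, u_lim)] * N)
u_opt = res.x.reshape(N, 1)   # optimal sequence [u_0, ..., u_{N-1}]
```

In receding-horizon use, only `u_opt[0]` is applied and the problem is re-solved at the next step.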

class opengnc.optimal_control.ModelReferenceAdaptiveControl(A_m: ndarray, B_m: ndarray, B: ndarray, Gamma: ndarray, Q_lyap: ndarray, phi_func: Callable[[ndarray], ndarray])[source]

Bases: object

Model Reference Adaptive Controller (MRAC) for linear state-space systems.

Implements a direct MRAC scheme where controller parameters are updated to minimize the error between the plant and a stable reference model.

Plant with parametric uncertainty: $\dot{x} = Ax + B(u + \Theta^T \Phi(x))$

Reference Model: $\dot{x}_m = A_m x_m + B_m r$

Adaptation Law (Lyapunov-based): $\dot{\hat{\Theta}} = \Gamma \Phi(x) e^T P B$ where $e = x - x_m$, and $P$ solves $A_m^T P + P A_m = -Q$.

Parameters:
  • A_m (np.ndarray) – Reference model state matrix (nx x nx).

  • B_m (np.ndarray) – Reference model input matrix (nx x nu).

  • B (np.ndarray) – Plant input matrix (nx x nu).

  • Gamma (np.ndarray) – Adaptation gain matrix (nk x nk).

  • Q_lyap (np.ndarray) – Positive-definite matrix for the Lyapunov equation (nx x nx).

  • phi_func (Callable[[np.ndarray], np.ndarray]) – Regressor function $\Phi(x)$ returning a vector of basis functions (nk,).

compute_control(x: ndarray, x_m: ndarray, r: ndarray, kx: ndarray | None = None, kr: ndarray | None = None) ndarray[source]

Compute the adaptive control input and the parameter update rate.

Parameters:
  • x (np.ndarray) – Current plant state vector (nx,).

  • x_m (np.ndarray) – Current reference model state vector (nx,).

  • r (np.ndarray) – Reference command input (nu,).

  • kx (np.ndarray, optional) – Nominal feedback gain such that $A + B K_x = A_m$. Defaults to zero.

  • kr (np.ndarray, optional) – Nominal feedforward gain such that $B K_r = B_m$. Defaults to identity.

Returns:

Control input vector $u$ (nu,).

Return type:

np.ndarray

update_theta(dt: float) None[source]

Integrate the parameter estimates using the computed update rate.

Should be called once per simulation step after compute_control.

Parameters:

dt (float) – Simulation time step (s).
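The Lyapunov solve and one adaptation step can be sketched in standalone SciPy. Matrices and the regressor below are illustrative (a minimal sketch of the adaptation law, not the library implementation); note that `solve_continuous_lyapunov(a, q)` solves $aX + Xa^H = q$, so $P$ comes from passing $A_m^T$ and $-Q$:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A_m = np.array([[0.0, 1.0], [-4.0, -2.0]])   # Hurwitz reference model
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
Gamma = 10.0 * np.eye(2)

# P solves A_m^T P + P A_m = -Q
P = solve_continuous_lyapunov(A_m.T, -Q)
assert np.all(np.linalg.eigvalsh(P) > 0)     # P must be positive definite

phi = lambda x: x                            # illustrative regressor Phi(x) = x
theta_hat = np.zeros((2, 1))

def adapt_step(theta_hat, x, x_m, dt):
    """One Euler step of theta_hat_dot = Gamma Phi(x) e^T P B."""
    e = x - x_m
    theta_dot = Gamma @ np.outer(phi(x), e @ P @ B)
    return theta_hat + dt * theta_dot

theta_hat = adapt_step(theta_hat, x=np.array([0.1, 0.0]),
                       x_m=np.zeros(2), dt=0.01)
```

The adaptive term $-\hat{\Theta}^T \Phi(x)$ is then added to the nominal control so the uncertainty is cancelled as the estimate converges.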

class opengnc.optimal_control.NonlinearMPC(dynamics_func: Callable[[ndarray, ndarray], ndarray], cost_func: Callable[[ndarray, ndarray], float], terminal_cost_func: Callable[[ndarray], float], horizon: int, nx: int, nu: int, u_min: float | ndarray | None = None, u_max: float | ndarray | None = None, x_min: float | ndarray | None = None, x_max: float | ndarray | None = None)[source]

Bases: object

Nonlinear Model Predictive Controller (NMPC).

Solves a finite-horizon optimal control problem for systems with nonlinear dynamics using single-shooting numerical optimization.

Dynamics: $x[k+1] = f(x[k], u[k])$ Objective: $\min \sum_{k=0}^{N-1} L(x_k, u_k) + V(x_N)$

Parameters:
  • dynamics_func (Callable[[np.ndarray, np.ndarray], np.ndarray]) – Nonlinear transition function $f(x, u)$.

  • cost_func (Callable[[np.ndarray, np.ndarray], float]) – Stage cost function $L(x, u)$.

  • terminal_cost_func (Callable[[np.ndarray], float]) – Terminal cost function $V(x)$.

  • horizon (int) – Prediction horizon N.

  • nx (int) – State dimension.

  • nu (int) – Input dimension.

  • u_min (float or np.ndarray, optional) – Minimum control input bound.

  • u_max (float or np.ndarray, optional) – Maximum control input bound.

  • x_min (float or np.ndarray, optional) – Minimum state constraint.

  • x_max (float or np.ndarray, optional) – Maximum state constraint.

solve(x0: ndarray) ndarray[source]

Solve the NMPC optimization problem using single shooting.

Parameters:

x0 (np.ndarray) – Initial state of the system (nx,).

Returns:

Optimal control sequence (N, nu).

Return type:

np.ndarray
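Single shooting means the only decision variables are the controls: the cost function rolls the nonlinear dynamics forward itself. A standalone sketch on a made-up pendulum-like discrete model (not the library internals):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative discrete nonlinear dynamics (dt = 0.1 folded in).
f = lambda x, u: x + 0.1 * np.array([x[1], -np.sin(x[0]) + u[0]])
L_stage = lambda x, u: x @ x + 0.1 * (u @ u)   # stage cost L(x, u)
V_term = lambda x: 10.0 * (x @ x)              # terminal cost V(x)
N, nu = 15, 1

def shoot(u_flat, x0):
    """Single shooting: simulate the trajectory and accumulate cost."""
    u = u_flat.reshape(N, nu)
    x, J = x0, 0.0
    for k in range(N):
        J += L_stage(x, u[k])
        x = f(x, u[k])
    return J + V_term(x)

x0 = np.array([0.5, 0.0])
res = minimize(shoot, np.zeros(N * nu), args=(x0,), method="SLSQP",
               bounds=[(-2.0, 2.0)] * (N * nu))
u_opt = res.x.reshape(N, nu)
```

State constraints would enter as inequality constraints on the same rollout, which is where single shooting's conditioning problems on long horizons come from.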

class opengnc.optimal_control.PassivityBasedController(M_func: Callable[[ndarray], ndarray], C_func: Callable[[ndarray, ndarray], ndarray], G_func: Callable[[ndarray], ndarray], K_d: float | ndarray, Lambda: float | ndarray)[source]

Bases: object

Passivity-Based Controller for Euler-Lagrange mechanical systems.

Exploits the energy and passivity properties of mechanical systems to ensure global stability. This implementation is based on the Slotine & Li adaptive/passivity scheme.

System model: $M(q)\ddot{q} + C(q, \dot{q})\dot{q} + G(q) = u$

Parameters:
  • M_func (Callable[[np.ndarray], np.ndarray]) – Inertia matrix function $M(q)$ (n x n).

  • C_func (Callable[[np.ndarray, np.ndarray], np.ndarray]) – Coriolis and centrifugal matrix function $C(q, \dot{q})$ (n x n).

  • G_func (Callable[[np.ndarray], np.ndarray]) – Gravity/damping vector function $G(q)$ (n,).

  • K_d (Union[float, np.ndarray]) – Dissipative (damping) gain matrix (n x n).

  • Lambda (Union[float, np.ndarray]) – Proportional error convergence matrix (n x n).

compute_control(q: ndarray, q_dot: ndarray, q_d: ndarray, q_dot_d: ndarray, q_ddot_d: ndarray | None = None) ndarray[source]

Compute the passivity-based control torque output.

Parameters:
  • q (np.ndarray) – Current generalized coordinates (n,).

  • q_dot (np.ndarray) – Current generalized velocities (n,).

  • q_d (np.ndarray) – Desired coordinate trajectory (n,).

  • q_dot_d (np.ndarray) – Desired velocity trajectory (n,).

  • q_ddot_d (np.ndarray, optional) – Desired feedforward acceleration (n,). Defaults to zero.

Returns:

Control input vector $u$ (n,).

Return type:

np.ndarray
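The Slotine & Li structure can be sketched for a 1-DOF pendulum. The model functions and gains below are illustrative (a minimal sketch of the passivity-based law, not the library implementation); with zero tracking error the law reduces to feedforward plus gravity compensation:

```python
import numpy as np

# 1-DOF pendulum: M(q) = m l^2, C = 0, G(q) = m g l sin(q)
m, l, g0 = 1.0, 0.5, 9.81
M_func = lambda q: np.array([[m * l**2]])
C_func = lambda q, q_dot: np.zeros((1, 1))
G_func = lambda q: np.array([m * g0 * l * np.sin(q[0])])
K_d, Lam = 5.0, 2.0

def pbc_torque(q, q_dot, q_d, q_dot_d, q_ddot_d):
    # Slotine-Li reference velocity/acceleration and sliding variable s.
    q_dot_r = q_dot_d + Lam * (q_d - q)
    q_ddot_r = q_ddot_d + Lam * (q_dot_d - q_dot)
    s = q_dot - q_dot_r
    return (M_func(q) @ q_ddot_r + C_func(q, q_dot) @ q_dot_r
            + G_func(q) - K_d * s)

# Zero tracking error: the output is pure gravity compensation here.
q = q_d = np.array([0.3])
q_dot = q_dot_d = np.array([0.0])
u = pbc_torque(q, q_dot, q_d, q_dot_d, q_ddot_d=np.zeros(1))
assert np.allclose(u, G_func(q))
```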

class opengnc.optimal_control.SlidingModeController(surface_func: Callable[[ndarray, float], float], k_gain: float, equivalent_control_func: Callable[[ndarray, float], float] | None = None, chattering_reduction: bool = True, boundary_layer: float = 0.1)[source]

Bases: object

Sliding Mode Controller (SMC).

Control Law: $u = u_{eq} - K \cdot \mathrm{sat}(s/\Phi)$

Parameters:
  • surface_func (Callable) – Sliding surface $s(\mathbf{x}, t)$.

  • k_gain (float) – Switching gain $K$.

  • equivalent_control_func (Callable | None, optional) – $\mathbf{u}_{eq}$ component.

  • chattering_reduction (bool, optional) – Use saturation instead of sign. Default True.

  • boundary_layer (float, optional) – Saturation boundary $Phi$.

compute_control(x: ndarray, t: float = 0.0) float | ndarray[source]

Compute the sliding mode control input.

Parameters:
  • x (np.ndarray) – Current state vector.

  • t (float, optional) – Current time (s). Default is 0.0.

Returns:

The computed control input signal.

Return type:

float or np.ndarray
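The saturated switching law is short enough to sketch directly. This is a standalone illustration of $u = u_{eq} - K \cdot \mathrm{sat}(s/\Phi)$ with made-up numbers, not the library implementation:

```python
import numpy as np

def smc(s, k_gain, u_eq=0.0, boundary_layer=0.1, chattering_reduction=True):
    """u = u_eq - K * sat(s/Phi); sat equals sign outside the boundary layer."""
    if chattering_reduction:
        switch = np.clip(s / boundary_layer, -1.0, 1.0)
    else:
        switch = np.sign(s)
    return u_eq - k_gain * switch

# Far from the surface the law saturates at -K * sign(s) ...
assert np.isclose(smc(s=1.0, k_gain=2.0), -2.0)
# ... while inside the boundary layer it acts as a high-gain linear law,
# which is what suppresses chattering.
assert np.isclose(smc(s=0.05, k_gain=2.0), -1.0)
```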