25 Aug 2023

Applied Optimal Control (07) - Dynamic Programming (Bellman Equation / HJB)

Key points:

  • Dynamic Programming definition and principle (continuous to discrete)
  • Discrete Dynamic Programming
    • Bellman equation: $V_i(x) = \underset{u}{\min}\,[L_i(x,u) + V_{i+1}(x')]$, where $x'$ is the next-step state
    • Discrete-time linear-quadratic case (see the first sketch after this list)
  • Continuous Dynamic Programming
    • Hamilton-Jacobi-Bellman equation: $-\partial_t V(x,t) = \underset{u}{\min}\,[L(x,u,t) + \nabla_x V(x,t)^T f(x,u,t)]$, where $f(x,u,t)$ is the dynamics
      • Here $\nabla_x V(x,t)$ plays the role of the costate $\lambda$ in the Hamiltonian
      • $u^* = \arg\underset{u}{\min}\,[L(x,u,t) + \nabla_x V(x,t)^T f(x,u,t)]$ (see the second sketch below)
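
The discrete linear-quadratic case admits a closed-form solution of the Bellman minimization. Below is a minimal sketch, assuming illustrative matrices $A$, $B$, $Q$, $R$, $Q_f$ and horizon $N$ (not from the notes): because $V_{i+1}(x) = x^T P_{i+1} x$ stays quadratic, the minimization over $u$ yields the discrete Riccati recursion and a linear feedback policy $u_i^* = -K_i x$.

```python
import numpy as np

# Illustrative (assumed) problem data: x' = A x + B u, stage cost
# x^T Q x + u^T R u, terminal cost x^T Qf x, horizon N.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])
Qf = 10 * np.eye(2)
N = 50

# Backward pass: V_i(x) = x^T P_i x, so the Bellman minimization is
# solved in closed form (discrete Riccati recursion).
P = Qf
gains = []
for i in reversed(range(N)):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # u_i^* = -K_i x
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()

# Forward pass: roll out the optimal feedback policy.
x = np.array([[1.0], [0.0]])
for K in gains:
    x = A @ x + B @ (-K @ x)
print("final state:", x.ravel())
```

The backward pass mirrors the Bellman equation exactly: each $P_i$ encodes the cost-to-go from stage $i$, so no grid over states is ever needed.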

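For the continuous-time equation, a quadratic ansatz reduces the scalar LQ case to a Riccati ODE. The sketch below uses assumed scalar parameters ($a$, $b$, $q$, $r$, $q_f$, horizon $T$): with $V(x,t) = p(t)\,x^2$, the HJB minimization gives $u^*(x,t) = -(b\,p(t)/r)\,x$ and $-\dot p = q + 2ap - b^2 p^2/r$, which the code checks against a brute-force minimization of the HJB right-hand side.

```python
import numpy as np

# Assumed scalar LQ problem: dx/dt = a x + b u,
# L(x,u) = q x^2 + r u^2, terminal cost qf x^2, horizon T.
a, b = -0.5, 1.0
q, r, qf = 1.0, 0.5, 2.0
T, steps = 2.0, 2000
dt = T / steps

# Integrate the Riccati ODE -dp/dt = q + 2 a p - (b^2 / r) p^2
# backward in time from the terminal condition p(T) = qf.
p = qf
for _ in range(steps):
    p += dt * (q + 2 * a * p - (b**2 / r) * p**2)

# At t = 0, compare the closed-form minimizer with a grid search
# over the HJB right-hand side L(x,u) + V_x(x,t) * f(x,u).
x = 1.0
Vx = 2 * p * x
us = np.linspace(-5, 5, 100001)
rhs = q * x**2 + r * us**2 + Vx * (a * x + b * us)
print("grid argmin :", us[np.argmin(rhs)])
print("closed form :", -(b * p / r) * x)
```
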
Based on the lecture notes for “Applied Optimal Control” (Fall 2021) by Dr. Marin Kobilarov at Johns Hopkins University.


