
Commit 78d4eaa

up tuto lqr
1 parent 8c603c8 commit 78d4eaa

9 files changed: +119, -237 lines changed


docs/make.jl

Lines changed: 1 addition & 1 deletion
@@ -28,7 +28,7 @@ makedocs(;
 "NLP manipulations" => "tutorial-nlp.md",
 "Indirect simple shooting" => "tutorial-iss.md",
 "Goddard: direct, indirect" => "tutorial-goddard.md",
-"Linear–quadratic regulator" => "tutorial-lqr-basic.md",
+"Linear–quadratic regulator" => "tutorial-lqr.md",
 "Minimal action" => "tutorial-mam.md",
 "Model Predictive Control" => "tutorial-mpc.md",
 ],

docs/src/tutorial-continuation.md

Lines changed: 6 additions & 6 deletions
@@ -10,7 +10,7 @@ We illustrate this using a simple double integrator problem, where the fixed fin

 First we load the required packages:

-```@example main
+```@example main-cont
 using DataFrames
 using OptimalControl
 using NLPModelsIpopt
@@ -20,7 +20,7 @@ using Plots

 and write a function that returns the OCP for a given final time:

-```@example main
+```@example main-cont
 function problem(T)

 ocp = @def begin
@@ -49,7 +49,7 @@ nothing # hide

 Then we perform the continuation with a simple *for* loop, using each solution to initialize the next problem.

-```@example main
+```@example main-cont
 init = ()
 data = DataFrame(T=Float64[], Objective=Float64[], Iterations=Int[])
 for T ∈ range(1, 2, length=5)
@@ -67,7 +67,7 @@ As a second example, we show how to avoid redefining a new optimal control probl

 Let us first define the Goddard problem. Note that the formulation below illustrates all types of constraints, and the problem could be written more compactly.

-```@example main
+```@example main-cont
 # Parameters
 r0 = 1
 v0 = 0
@@ -127,7 +127,7 @@ sol0 = solve(goddard; display=false)

 Then, we perform the continuation on the maximal thrust.

-```@example main
+```@example main-cont
 sol = sol0 # Initialize the solution with the reference solution
 data = DataFrame(Tmax=Float64[], Objective=Float64[], Iterations=Int[])
 for Tmax_local ∈ range(Tmax_0, Tmax_f, length=6)
@@ -140,7 +140,7 @@ println(data)

 We plot now the objective with respect to the maximal thrust, as well as both solutions for `Tmax=3.5` and `Tmax=1`.

-```@example main
+```@example main-cont
 using Plots.PlotMeasures # for leftmargin

 plt_obj = plot(data.Tmax, data.Objective;
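
The hunks above rename this tutorial's `@example` blocks from `main` to `main-cont` and quote only the first lines of the warm-start loop. As a rough sketch, assuming the `problem(T)` function and the packages loaded at the top of the tutorial, the loop described in the context line (each solution initializes the next problem) could read as follows; everything past the quoted lines is an assumption, not code from the commit.

```julia
# Sketch of the warm-start continuation loop (assumed continuation of the quoted lines,
# not the tutorial's exact code); the tutorial also records iteration counts, omitted here.
using DataFrames, OptimalControl, NLPModelsIpopt

init = ()                                           # empty initial guess for the first solve
data = DataFrame(T=Float64[], Objective=Float64[])
for T ∈ range(1, 2, length=5)
    ocp = problem(T)                                # rebuild the OCP for the current final time
    sol = solve(ocp; init=init, display=false)      # solve, warm-started by the previous solution
    push!(data, (T, objective(sol)))                # record the final time and the objective value
    global init = sol                               # this solution initializes the next solve
end
println(data)
```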

docs/src/tutorial-discretisation.md

Lines changed: 7 additions & 7 deletions
@@ -5,7 +5,7 @@ These methods are used to convert a continuous-time optimal control problem (OCP

 Let us import the necessary packages and define the optimal control problem ([Goddard problem](@ref tutorial-goddard)) we will use as an example throughout this tutorial.

-```@example main
+```@example main-disc
 using BenchmarkTools # for benchmarking
 using DataFrames # to store the results
 using OptimalControl # to define the optimal control problem and more
@@ -64,14 +64,14 @@ When calling `solve`, the option `disc_method=...` can be used to specify the di

 Let us first solve the problem with the default `:trapeze` method and display the solution.

-```@example main
+```@example main-disc
 sol = solve(ocp; disc_method=:trapeze, display=false)
 plot(sol; size=(800, 800))
 ```

 Let us now compare different discretization schemes to evaluate their accuracy and performance.

-```@example main
+```@example main-disc
 # Solve the problem with different discretization methods
 solutions = []
 data = DataFrame(
@@ -102,7 +102,7 @@ end
 println(data)
 ```

-```@example main
+```@example main-disc
 # Plot the results
 x_style = (legend=:none,)
 p_style = (legend=:none,)
@@ -120,14 +120,14 @@ plot(plt; size=(800, 800))

 For some large problems, you may notice that the solver takes a long time before the iterations actually begin. This is due to the computation of sparse derivatives — specifically, the Jacobian of the constraints and the Hessian of the Lagrangian — which can be quite costly. One possible alternative is to set the option `adnlp_backend=:manual`, which uses simpler sparsity patterns. The resulting matrices are faster to compute but are also less sparse, so this represents a trade-off between automatic differentiation (AD) preparation time and the efficiency of the optimization itself.

-```@example main
+```@example main-disc
 solve(ocp; disc_method=:gauss_legendre_3, grid_size=1000, adnlp_backend=:manual)
 nothing # hide
 ```

 Let us now compare the performance of the two backends. We will use the `@btimed` macro from the `BenchmarkTools` package to measure the time taken for both the preparation of the NLP problem and the execution of the solver. Besides, we will also collect the number of non-zero elements in the Jacobian and Hessian matrices, which can be useful to understand the sparsity of the problem, thanks to the functions `get_nnzo`, `get_nnzj`, and `get_nnzj` from the `NLPModels` package. The problem is first discretized with the `direct_transcription` method and then solved with the `ipopt` solver, see the [tutorial on direct transcription](@ref tutorial-nlp) for more details.

-```@example main
+```@example main-disc
 # DataFrame to store the results
 data = DataFrame(
 Backend=Symbol[],
@@ -191,7 +191,7 @@ println(data)

 The option `time_grid=...` allows you to provide the full time grid vector `t0, t1, ..., tf`, which is especially useful if a non-uniform grid is desired. In the case of a free initial and/or final time, you should provide a normalized grid ranging from 0 to 1. Note that `time_grid` overrides `grid_size` if both options are specified.

-```@example main
+```@example main-disc
 sol = solve(ocp; time_grid=[0, 0.1, 0.5, 0.9, 1], display=false)
 println(time_grid(sol))
 ```
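
The context lines above document three `solve` keywords: `disc_method`, `adnlp_backend=:manual`, and `time_grid`. Purely as an illustrative sketch, assuming the `ocp` Goddard problem and the packages imported at the top of the tutorial (none of this is code from the commit), the three options can be combined in a single call:

```julia
# Sketch: combining the solve options quoted in the hunks above; assumes `ocp` and the
# packages (OptimalControl, NLPModelsIpopt) already loaded as in the tutorial.
sol = solve(ocp;
    disc_method   = :gauss_legendre_3,        # discretization scheme (default is :trapeze)
    time_grid     = [0, 0.1, 0.5, 0.9, 1],    # explicit grid; overrides grid_size if both are given
    adnlp_backend = :manual,                  # simpler sparsity patterns, cheaper AD preparation
    display       = false,
)
println(time_grid(sol))                       # the grid actually used by the discretization
```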

docs/src/tutorial-goddard.md

Lines changed: 13 additions & 13 deletions
@@ -39,7 +39,7 @@ We import the [Plots.jl](https://docs.juliaplots.org) package to plot the soluti
 The [OrdinaryDiffEq.jl](https://docs.sciml.ai/OrdinaryDiffEq) package is used to
 define the shooting function for the indirect method and the [MINPACK.jl](https://github.com/sglyon/MINPACK.jl) package permits to solve the shooting equation.

-```@example main
+```@example main-goddard
 using OptimalControl # to define the optimal control problem and more
 using NLPModelsIpopt # to solve the problem via a direct method
 using OrdinaryDiffEq # to get the Flow function from OptimalControl
@@ -51,7 +51,7 @@ using Plots # to plot the solution

 We define the problem

-```@example main
+```@example main-goddard
 const t0 = 0 # initial time
 const r0 = 1 # initial altitude
 const v0 = 0 # initial speed
@@ -101,14 +101,14 @@ nothing # hide

 We then solve it

-```@example main
+```@example main-goddard
 direct_sol = solve(ocp; grid_size=100)
 nothing # hide
 ```

 and plot the solution

-```@example main
+```@example main-goddard
 plt = plot(direct_sol, label="direct", size=(800, 800))
 ```

@@ -119,7 +119,7 @@ bang arc with maximal control, followed by a singular arc, then by a boundary ar
 arc is with zero control. Note that the switching function vanishes along the singular and
 boundary arcs.

-```@example main
+```@example main-goddard
 t = time_grid(direct_sol) # the time grid as a vector
 x = state(direct_sol) # the state as a function of time
 u = control(direct_sol) # the control as a function of time
@@ -191,7 +191,7 @@ as well as the associated multiplier for the *order one* state constraint on the

 With the help of differential geometry primitives, these expressions are straightforwardly translated into Julia code:

-```@example main
+```@example main-goddard
 # Controls
 const u0 = 0 # off control
 const u1 = 1 # bang control
@@ -218,7 +218,7 @@ nothing # hide
 Then, we define the shooting function according to the optimal structure we have determined,
 that is a concatenation of four arcs.

-```@example main
+```@example main-goddard
 x0 = [r0, v0, m0] # initial state

 function shoot!(s, p0, t1, t2, t3, tf)
@@ -245,7 +245,7 @@ To solve the problem by an indirect shooting method, we then need a good initial
 that is a good approximation of the initial costate, the three switching times and the
 final time.

-```@example main
+```@example main-goddard
 η = 1e-3
 t13 = t[ abs.(φ.(t)) .≤ η ]
 t23 = t[ 0 .≤ (g ∘ x).(t) .≤ η ]
@@ -272,7 +272,7 @@ println("\nNorm of the shooting function: ‖s‖ = ", norm(s), "\n")

 We aggregate the data to define the initial guess vector.

-```@example main
+```@example main-goddard
 ξ = [p0..., t1, t2, t3, tf] # initial guess
 ```

@@ -304,7 +304,7 @@ function fsolve(f, j, x; kwargs...)
 end
 ```

-```@example main
+```@example main-goddard
 using DifferentiationInterface
 import ForwardDiff
 backend = AutoForwardDiff()
@@ -313,7 +313,7 @@ nothing # hide

 Let us define the problem to solve.

-```@example main
+```@example main-goddard
 # auxiliary function with aggregated inputs
 nle! = ( s, ξ) -> shoot!(s, ξ[1:3], ξ[4], ξ[5], ξ[6], ξ[7])

@@ -326,7 +326,7 @@ We are now in position to solve the problem with the `hybrj` solver from MINPACK
 function, providing the Jacobian.
 Let us solve the problem and retrieve the initial costate solution.

-```@example main
+```@example main-goddard
 # resolution of S(ξ) = 0
 indirect_sol = fsolve(nle!, jnle!, ξ, show_trace=true)

@@ -355,7 +355,7 @@ println("\nNorm of the shooting function: ‖s‖ = ", norm(s), "\n")
 We plot the solution of the indirect solution (in red) over the solution of the direct method
 (in blue).

-```@example main
+```@example main-goddard
 f = f1 * (t1, fs) * (t2, fb) * (t3, f0) # concatenation of the flows
 flow_sol = f((t0, tf), x0, p0) # compute the solution: state, costate, control...

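
The last hunk quotes the concatenation of the four flows and the computation of `flow_sol`. As a hedged sketch of the overlay mentioned in the context lines (indirect solution in red over the direct solution in blue), assuming the flows `f1`, `fs`, `fb`, `f0`, the switching times, and the plot `plt` built earlier in the tutorial, the comparison could be drawn as follows. The `plot!` call and its keywords are an assumption for illustration, not quoted from the commit.

```julia
# Sketch: overlaying the indirect (flow) solution on the direct-method plot `plt`;
# `f1`, `fs`, `fb`, `f0`, `t1`, `t2`, `t3`, `tf`, `x0`, `p0` come from earlier in the tutorial,
# and the plot! keywords below are assumptions.
f = f1 * (t1, fs) * (t2, fb) * (t3, f0)   # concatenation of the four flows (as in the hunk)
flow_sol = f((t0, tf), x0, p0)            # state, costate and control along [t0, tf]
plot!(plt, flow_sol; label="indirect", color=:red)
```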
