docs/src/tutorial-continuation.md (6 additions, 6 deletions)
@@ -10,7 +10,7 @@ We illustrate this using a simple double integrator problem, where the fixed fin

First we load the required packages:

-```@example main
+```@example main-cont
using DataFrames
using OptimalControl
using NLPModelsIpopt
@@ -20,7 +20,7 @@ using Plots

and write a function that returns the OCP for a given final time:

-```@example main
+```@example main-cont
function problem(T)

    ocp = @def begin
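The hunk above truncates the body of `problem(T)`. As a purely hypothetical sketch (not part of this patch), such a function could look like the following for a double integrator with fixed final time `T`; the boundary conditions and cost are illustrative assumptions, and the tutorial's actual formulation may differ.

```julia
# Hypothetical sketch, not taken from the patch: a double integrator OCP whose
# final time T is a parameter, written with OptimalControl's @def macro.
function problem(T)
    ocp = @def begin
        t ∈ [0, T], time            # fixed final time, passed as an argument
        x ∈ R², state               # position and velocity
        u ∈ R, control              # acceleration
        x(0) == [0, 0]              # assumed initial condition
        x(T) == [1, 0]              # assumed target condition
        ẋ(t) == [x₂(t), u(t)]       # double integrator dynamics
        ∫( 0.5u(t)^2 ) → min        # assumed energy-like cost
    end
    return ocp
end
```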
@@ -49,7 +49,7 @@ nothing # hide

Then we perform the continuation with a simple *for* loop, using each solution to initialize the next problem.

-```@example main
+```@example main-cont
init = ()
data = DataFrame(T=Float64[], Objective=Float64[], Iterations=Int[])
for T ∈ range(1, 2, length=5)
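The hunk stops at the `for` header. As a rough, hypothetical sketch (not part of the patch), the warm-start loop typically continues along these lines, assuming `problem(T)` from the previous block and the `objective` and `iterations` accessors provided by OptimalControl:

```julia
# Hypothetical sketch, not taken from the patch: solve for increasing final
# times, reusing each solution as the initial guess for the next solve.
init = ()
data = DataFrame(T=Float64[], Objective=Float64[], Iterations=Int[])
for T ∈ range(1, 2, length=5)
    sol = solve(problem(T); init=init, display=false)   # warm start from the previous solution
    global init = sol                                   # the next solve starts from this solution
    push!(data, (T, objective(sol), iterations(sol)))   # record objective value and iteration count
end
println(data)
```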
@@ -67,7 +67,7 @@ As a second example, we show how to avoid redefining a new optimal control probl

Let us first define the Goddard problem. Note that the formulation below illustrates all types of constraints, and the problem could be written more compactly.
docs/src/tutorial-discretisation.md (7 additions, 7 deletions)
@@ -5,7 +5,7 @@ These methods are used to convert a continuous-time optimal control problem (OCP

Let us import the necessary packages and define the optimal control problem ([Goddard problem](@ref tutorial-goddard)) we will use as an example throughout this tutorial.

-```@example main
+```@example main-disc
using BenchmarkTools # for benchmarking
using DataFrames # to store the results
using OptimalControl # to define the optimal control problem and more
@@ -64,14 +64,14 @@ When calling `solve`, the option `disc_method=...` can be used to specify the di

Let us first solve the problem with the default `:trapeze` method and display the solution.

-```@example main
+```@example main-disc
sol = solve(ocp; disc_method=:trapeze, display=false)
plot(sol; size=(800, 800))
```

Let us now compare different discretization schemes to evaluate their accuracy and performance.

-```@example main
+```@example main-disc
# Solve the problem with different discretization methods
solutions = []
data = DataFrame(
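The comparison code is truncated by the hunk. As a hypothetical sketch (not part of the patch), a comparison loop over a few schemes could look as follows; the scheme names `:euler` and `:midpoint` are assumptions based on CTDirect's options, and `objective` is assumed to be the solution accessor exported by OptimalControl.

```julia
# Hypothetical sketch, not taken from the patch: compare a few discretization
# schemes on the Goddard problem `ocp` defined earlier in the tutorial.
schemes = (:euler, :trapeze, :midpoint)
results = DataFrame(Scheme=Symbol[], Objective=Float64[], Seconds=Float64[])
for s ∈ schemes
    t = @elapsed sol = solve(ocp; disc_method=s, display=false)  # time the solve
    push!(results, (s, objective(sol), t))                       # record objective and runtime
end
println(results)
```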
@@ -102,7 +102,7 @@ end
println(data)
```

-```@example main
+```@example main-disc
# Plot the results
x_style = (legend=:none,)
p_style = (legend=:none,)
@@ -120,14 +120,14 @@ plot(plt; size=(800, 800))

For some large problems, you may notice that the solver takes a long time before the iterations actually begin. This is due to the computation of sparse derivatives — specifically, the Jacobian of the constraints and the Hessian of the Lagrangian — which can be quite costly. One possible alternative is to set the option `adnlp_backend=:manual`, which uses simpler sparsity patterns. The resulting matrices are faster to compute but are also less sparse, so this represents a trade-off between automatic differentiation (AD) preparation time and the efficiency of the optimization itself.

Let us now compare the performance of the two backends. We will use the `@btimed` macro from the `BenchmarkTools` package to measure the time taken for both the preparation of the NLP problem and the execution of the solver. We will also collect the number of non-zero elements in the Jacobian and Hessian matrices, which can be useful to understand the sparsity of the problem, thanks to the functions `get_nnzo`, `get_nnzj`, and `get_nnzh` from the `NLPModels` package. The problem is first discretized with the `direct_transcription` method and then solved with the `ipopt` solver; see the [tutorial on direct transcription](@ref tutorial-nlp) for more details.

-```@example main
+```@example main-disc
# DataFrame to store the results
data = DataFrame(
Backend=Symbol[],
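As a minimal illustration of the `NLPModels` accessors mentioned above (assuming `nlp` is the `AbstractNLPModel` produced by the transcription step), the sparsity counters can be read like this:

```julia
# Minimal sketch: reading sparsity information from an NLPModels model `nlp`,
# here assumed to come from the direct transcription of the OCP.
using NLPModels
nnz_grad = get_nnzo(nlp)  # nonzeros in the objective gradient
nnz_jac  = get_nnzj(nlp)  # nonzeros in the constraints Jacobian
nnz_hess = get_nnzh(nlp)  # nonzeros in the Lagrangian Hessian
println((nnz_grad, nnz_jac, nnz_hess))
```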
@@ -191,7 +191,7 @@ println(data)

The option `time_grid=...` allows you to provide the full time grid vector `t0, t1, ..., tf`, which is especially useful if a non-uniform grid is desired. In the case of a free initial and/or final time, you should provide a normalized grid ranging from 0 to 1. Note that `time_grid` overrides `grid_size` if both options are specified.

-```@example main
+```@example main-disc
sol = solve(ocp; time_grid=[0, 0.1, 0.5, 0.9, 1], display=false)
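As a small hypothetical variation on the line above (not part of the patch), a normalized, non-uniform grid refined near the end of the horizon is passed the same way for this free-final-time problem:

```julia
# Hypothetical sketch: a normalized, non-uniform grid, denser near t = 1.
# time_grid overrides grid_size if both are given.
t_grid = [0.0, 0.2, 0.4, 0.6, 0.8, 0.9, 0.95, 1.0]
sol = solve(ocp; time_grid=t_grid, display=false)
```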
docs/src/tutorial-goddard.md (13 additions, 13 deletions)
@@ -39,7 +39,7 @@ We import the [Plots.jl](https://docs.juliaplots.org) package to plot the soluti
The [OrdinaryDiffEq.jl](https://docs.sciml.ai/OrdinaryDiffEq) package is used to
define the shooting function for the indirect method and the [MINPACK.jl](https://github.com/sglyon/MINPACK.jl) package is used to solve the shooting equation.

-```@example main
+```@example main-goddard
using OptimalControl # to define the optimal control problem and more
using NLPModelsIpopt # to solve the problem via a direct method
using OrdinaryDiffEq # to get the Flow function from OptimalControl
@@ -51,7 +51,7 @@ using Plots # to plot the solution