docs/src/tutorial-continuation.md (2 additions, 2 deletions)

@@ -12,7 +12,7 @@ This usually gives better and faster convergence than solving each problem with
 
 The most compact syntax to perform a discrete continuation is to use a function that returns the OCP for a given value of the continuation parameter, and solve a sequence of these problems. We illustrate this on a very basic double integrator with increasing fixed final time.
 
-First we load the required packages
+First we load the required packages:
 
 ```@example main
 using OptimalControl
@@ -21,7 +21,7 @@ using Printf
 using Plots
 ```
 
-and write a function that returns the OCP for a given final time
+and write a function that returns the OCP for a given final time:
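For readers skimming this diff, the continuation pattern the tutorial builds up to can be sketched as follows. This is only an illustration: the `ocp_for_tf` helper, the double integrator definition, the `display` keyword and the printed quantities are assumptions, not the tutorial's exact code.

```julia
using OptimalControl
using NLPModelsIpopt
using Printf

# Hypothetical helper: the double integrator OCP for a given fixed final time tf
function ocp_for_tf(tf)
    @def ocp begin
        t ∈ [0, tf], time
        x ∈ R², state
        u ∈ R, control
        x(0) == [0, 0]
        x(tf) == [1, 0]
        ẋ(t) == [x₂(t), u(t)]
        ∫( 0.5u(t)^2 ) → min
    end
    return ocp
end

# Discrete continuation on the final time, warm starting each solve
# with the solution of the previous problem
tfs = 1.0:0.5:3.0
sol = solve(ocp_for_tf(first(tfs)); display=false)
for tf in tfs[2:end]
    global sol
    sol = solve(ocp_for_tf(tf); display=false, init=sol)
    @printf("tf = %4.2f   objective = %9.6f\n", tf, objective(sol))
end
```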
docs/src/tutorial-discretisation.md (13 additions, 7 deletions)

@@ -5,7 +5,8 @@ When calling `solve`, the option `disc_method=...` can be used to set the discre
 In addition to the default implicit `:trapeze` method (aka Crank-Nicolson), other choices are available, namely implicit `:midpoint` and the Gauss-Legendre collocations with 2 and 3 stages, `:gauss_legendre_2` and `:gauss_legendre_3`, of order 4 and 6 respectively.
 Note that higher order methods will typically lead to larger NLP problems for the same number of time steps, and that accuracy will also depend on the smoothness of the problem.
 
-As an example we will use the [Goddard problem](@ref tutorial-goddard)
+As an example we will use the [Goddard problem](@ref tutorial-goddard).
+
 ```@example main
 using OptimalControl   # to define the optimal control problem and more
 using NLPModelsIpopt   # to solve the problem via a direct method
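As a sketch of how the option reads in practice, and assuming `ocp` is the Goddard problem defined in that tutorial, the discretization scheme would be selected like this (the `grid_size` values are illustrative):

```julia
using OptimalControl
using NLPModelsIpopt

# Assuming `ocp` is an already defined problem (e.g. the Goddard problem of the tutorial)
sol_trapeze  = solve(ocp; disc_method=:trapeze,          grid_size=100)  # default (Crank-Nicolson), order 2
sol_midpoint = solve(ocp; disc_method=:midpoint,         grid_size=100)  # implicit midpoint, order 2
sol_gl2      = solve(ocp; disc_method=:gauss_legendre_2, grid_size=100)  # 2-stage collocation, order 4
sol_gl3      = solve(ocp; disc_method=:gauss_legendre_3, grid_size=100)  # 3-stage collocation, order 6
```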
docs/src/tutorial-goddard.md (11 additions, 17 deletions)

@@ -34,9 +34,9 @@ $v(t) \leq v_{\max}$. The initial state is fixed while only the final mass is pr
 as well as constrained arcs due to the path constraint on the velocity (see below).
 
 We import the [OptimalControl.jl](https://control-toolbox.org/OptimalControl.jl) package to define the optimal control problem and
-[NLPModelsIpopt.jl](https://github.com/JuliaSmoothOptimizers/NLPModelsIpopt.jl) to solve it.
-We import the [Plots.jl](https://github.com/JuliaPlots/Plots.jl) package to plot the solution.
-The [OrdinaryDiffEq.jl](https://github.com/SciML/OrdinaryDiffEq.jl) package is used to
+[NLPModelsIpopt.jl](https://jso.dev/NLPModelsIpopt.jl) to solve it.
+We import the [Plots.jl](https://docs.juliaplots.org) package to plot the solution.
+The [OrdinaryDiffEq.jl](https://docs.sciml.ai/OrdinaryDiffEq) package is used to
 define the shooting function for the indirect method and the [MINPACK.jl](https://github.com/sglyon/MINPACK.jl) package permits to solve the shooting equation.
 
 ```@example main
@@ -119,15 +119,11 @@ bang arc with maximal control, followed by a singular arc, then by a boundary ar
 arc is with zero control. Note that the switching function vanishes along the singular and
 boundary arcs.
 
-!!! tip "Interactions with an optimal control solution"
-
-    Please check [`state`](@ref), [`costate`](@ref), [`control`](@ref) and [`variable`](@ref) to get data from the solution. The functions `state`, `costate` and `control` return functions of time and `variable` returns a vector. The function [`time_grid`](@ref) returns the discretized time grid returned by the solver.
-
 ```@example main
-t = time_grid(direct_sol)
-x = state(direct_sol)
-u = control(direct_sol)
-p = costate(direct_sol)
+t = time_grid(direct_sol)   # the time grid as a vector
+x = state(direct_sol)       # the state as a function of time
+u = control(direct_sol)     # the control as a function of time
+p = costate(direct_sol)     # the costate as a function of time
 
 H1 = Lift(F1)           # H1(x, p) = p' * F1(x)
 φ(t) = H1(x(t), p(t))   # switching function
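A quick way to check the remark above, that the switching function vanishes along the singular and boundary arcs, is to plot φ on the solver's time grid. This is only a sketch assuming the tutorial's `t`, `u` and `φ` defined just above are in scope:

```julia
using Plots

# t, u and φ come from the tutorial code just above
plot(t, φ.(t); xlabel="t", label="φ(t) = H₁(x(t), p(t))")  # ≈ 0 along singular and boundary arcs
plot!(t, u.(t); label="u(t)")                              # compare with the bang-singular-boundary-zero structure
```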
@@ -193,9 +189,7 @@ as well as the associated multiplier for the *order one* state constraint on the
 
 which is the reason why we use the `@Lie` macro to compute Poisson brackets below.
 
-With the help of the [differential geometry primitives](https://control-toolbox.org/CTBase.jl/stable/api-diffgeom.html)
-from [CTBase.jl](https://control-toolbox.org/OptimalControl.jl/stable/api-ctbase.html),
-these expressions are straightforwardly translated into Julia code:
+With the help of differential geometry primitives, these expressions are straightforwardly translated into Julia code:
 
 ```@example main
 # Controls
@@ -284,11 +278,11 @@ We aggregate the data to define the initial guess vector.
 
 ### MINPACK.jl
 
-We can use [NonlinearSolve.jl](https://github.com/SciML/NonlinearSolve.jl) package or, instead, the
+We can use [NonlinearSolve.jl](https://docs.sciml.ai/NonlinearSolve) package or, instead, the
 [MINPACK.jl](https://github.com/sglyon/MINPACK.jl) package to solve
 the shooting equation. To compute the Jacobian of the shooting function we use the
-[DifferentiationInterface.jl](https://gdalle.github.io/DifferentiationInterface.jl/DifferentiationInterface) package with
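To make the MINPACK route concrete, here is a minimal sketch of solving the shooting equation with an explicit Jacobian computed through DifferentiationInterface.jl and ForwardDiff.jl. The names `S` (the shooting function) and `ξ_guess` (the aggregated initial guess vector) are stand-ins for the tutorial's own variables, and the exact keyword options of `fsolve` are not shown.

```julia
using MINPACK
using DifferentiationInterface
using ForwardDiff

# S(ξ) is assumed to return the shooting residual for the vector of unknowns ξ
shoot!(s, ξ)   = (s .= S(ξ); nothing)                                  # in-place residual, as fsolve expects
jshoot!(js, ξ) = (js .= jacobian(S, AutoForwardDiff(), ξ); nothing)    # Jacobian via DifferentiationInterface

nle_sol = fsolve(shoot!, jshoot!, ξ_guess)
ξ_sol   = nle_sol.x   # solution of the shooting equation S(ξ) = 0
```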
docs/src/tutorial-iss.md (5 additions, 2 deletions)

@@ -3,11 +3,14 @@
 In this tutorial we present the indirect simple shooting method on a simple example.
 
 Let us start by importing the necessary packages.
+We import the [OptimalControl.jl](https://control-toolbox.org/OptimalControl.jl) package to define the optimal control problem.
+We import the [Plots.jl](https://docs.juliaplots.org) package to plot the solution.
+The [OrdinaryDiffEq.jl](https://docs.sciml.ai/OrdinaryDiffEq) package is used to define the shooting function for the indirect method and the [MINPACK.jl](https://github.com/sglyon/MINPACK.jl) package permits to solve the shooting equation.
+
 
 ```@example main
 using OptimalControl    # to define the optimal control problem and its flow
 using OrdinaryDiffEq    # to get the Flow function from OptimalControl
-using NonlinearSolve    # interface to NLE solvers
 using MINPACK           # NLE solver: use to solve the shooting equation
 using Plots             # to plot the solution
 ```
@@ -154,7 +157,7 @@ nothing # hide
 
 ### MINPACK.jl
 
-We can use [NonlinearSolve.jl](https://github.com/SciML/NonlinearSolve.jl) package or, instead, [MINPACK.jl](https://github.com/sglyon/MINPACK.jl) to solve the shooting equation. To compute the Jacobian of the shooting function we use [DifferentiationInterface.jl](https://gdalle.github.io/DifferentiationInterface.jl/DifferentiationInterface) with [ForwardDiff.jl](https://github.com/JuliaDiff/ForwardDiff.jl) backend.
+We can use [NonlinearSolve.jl](https://docs.sciml.ai/NonlinearSolve) package or, instead, [MINPACK.jl](https://github.com/sglyon/MINPACK.jl) to solve the shooting equation. To compute the Jacobian of the shooting function we use [DifferentiationInterface.jl](https://juliadiff.org/DifferentiationInterface.jl/DifferentiationInterface) with [ForwardDiff.jl](https://juliadiff.org/ForwardDiff.jl) backend.
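For orientation, a minimal sketch of this last step with MINPACK.jl, assuming the tutorial defines a shooting function `S` whose unknown is the initial costate and provides a guess `p0_guess`. The tutorial additionally supplies an exact Jacobian via DifferentiationInterface.jl, as sketched in the Goddard example above; here MINPACK falls back to its internal finite-difference Jacobian.

```julia
using MINPACK

# In-place residual wrapper around the (assumed) simple shooting function S
nle!(s, p0) = (s .= S(p0); nothing)

shoot_sol = fsolve(nle!, p0_guess)
p0_sol = shoot_sol.x   # initial costate solving the shooting equation S(p0) = 0
```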
docs/src/tutorial-nlp.md (6 additions, 6 deletions)

@@ -7,9 +7,9 @@ CurrentModule = OptimalControl
 We describe here some more advanced operations related to the discretized optimal control problem.
 When calling `solve(ocp)` three steps are performed internally:
 
-- first, the OCP is discretized into a DOCP (a nonlinear optimization problem) with [`direct_transcription`](@ref),
-- then, this DOCP is solved (with the internal function [`solve_docp`](@ref)),
-- finally, a functional solution of the OCP is rebuilt from the solution of the discretized problem, with [`OptimalControlSolution`](@ref).
+- first, the OCP is discretized into a DOCP (a nonlinear optimization problem),
+- then, this DOCP is solved with a nonlinear programming (NLP) solver, which returns a solution of the discretized problem,
+- finally, a functional solution of the OCP is rebuilt from the solution of the discretized problem.
 
 These steps can also be done separately, for instance if you want to use your own NLP solver.
 
@@ -55,7 +55,7 @@ We can now use the solver of our choice to solve it.
 
 ## Resolution of the NLP problem
 
-For a first example we use the `ipopt` solver from [NLPModelsIpopt.jl](https://github.com/JuliaSmoothOptimizers/NLPModelsIpopt.jl) package to solve the NLP problem.
+For a first example we use the `ipopt` solver from [NLPModelsIpopt.jl](https://jso.dev/NLPModelsIpopt.jl) package to solve the NLP problem.
 
 ```@example main
 using NLPModelsIpopt
@@ -82,14 +82,14 @@ nlp_sol = madnlp(nlp)
 
 ## Initial guess
 
-An initial guess, including warm start, can be passed to [`direct_transcription`](@ref) the same way as for `solve`.
+An initial guess, including warm start, can be passed to [`direct_transcription`](https://control-toolbox.org/OptimalControl.jl/stable/dev-ctdirect.html#CTDirect.direct_transcription-Tuple{Model,%20Vararg{Any}}) the same way as for `solve`.
 
 ```@example main
 docp, nlp = direct_transcription(ocp; init=sol)
 nothing # hide
 ```
 
-It can also be changed after the transcription is done, with [`set_initial_guess`](@ref).
+It can also be changed after the transcription is done, with [`set_initial_guess`](https://control-toolbox.org/OptimalControl.jl/stable/dev-ctdirect.html#CTDirect.set_initial_guess-Tuple{CTDirect.DOCP,%20Any,%20Any}).
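Putting the pieces of this page together, a sketch of the manual workflow might read as follows; `ocp` and a previously computed solution `sol` used as initial guess are assumed to exist, and the Ipopt options shown are illustrative.

```julia
using OptimalControl
using NLPModelsIpopt

# Step 1: transcribe the OCP into a DOCP and its underlying NLP model,
# passing an initial guess (here the previously computed solution `sol`)
docp, nlp = direct_transcription(ocp; init=sol)

# The guess can also be changed after transcription
set_initial_guess(docp, nlp, sol)

# Step 2: solve the NLP with the solver of your choice (here Ipopt; MadNLP would also work)
nlp_sol = ipopt(nlp; print_level=4, tol=1e-8)

# Step 3: rebuild a functional OCP solution from nlp_sol (see this page for the exact call)
```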