We can observe that $x(t_f)$ converges to the origin as $t_f$ increases.
## Known issues
The following definition will lead to an error when solving the problem. See [Reverse over forward AD issues with ADNLP](https://github.com/control-toolbox/OptimalControl.jl/issues/481).
File: `docs/src/tutorial-nlp.md`
## Discretization and NLP problem
We discretize the problem with [`direct_transcription`](@extref CTDirect.direct_transcription):
```@example main-nlp
docp = direct_transcription(ocp)
nothing # hide
```
and get the NLP model with [`model`](@extref CTDirect.model):
```@example main-nlp
nlp = model(docp)
nothing # hide
```
The DOCP contains information related to the transcription, including a copy of the original OCP, and the NLP is the resulting discretized nonlinear programming problem, in our case an `ADNLPModel`.
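For instance, since an `ADNLPModel` implements the standard [NLPModels.jl](https://github.com/JuliaSmoothOptimizers/NLPModels.jl) interface, the dimensions of the discretized problem can be inspected directly. This is only a sketch, not part of the tutorial, and assumes the `docp` and `nlp` objects defined above are in scope:

```julia
# Sketch: inspect the discretized problem through the NLPModels.jl metadata.
nlp.meta.nvar   # number of variables of the NLP
nlp.meta.ncon   # number of constraints of the NLP
```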
We can now use the solver of our choice to solve it.
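For instance, Ipopt can be called through [NLPModelsIpopt.jl](https://github.com/JuliaSmoothOptimizers/NLPModelsIpopt.jl). The sketch below shows the kind of call meant here; the choice of Ipopt is an assumption, and any solver accepting an NLPModels-compatible model and returning the primal solution and the multipliers will do:

```julia
using NLPModelsIpopt  # assumption: Ipopt wrapper from JuliaSmoothOptimizers

# Solve the discretized problem; `nlp_sol` holds the primal solution and
# the constraint multipliers used below to rebuild the OCP solution.
nlp_sol = ipopt(nlp)
```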
Then, we can build an optimal control problem solution with [`build_OCP_solution`](@extref CTDirect.build_OCP_solution-Tuple{Any}) and plot it. Note that the multipliers are optional, but the OCP costate will not be retrieved if they are not provided.
```@example main-nlp
sol = build_OCP_solution(docp;
    primal=nlp_sol.solution,
    dual=nlp_sol.multipliers,
    docp_solution=nlp_sol
)
plot(sol)
```
## Change the NLP solver
Alternatively, we can use [MadNLP.jl](https://madnlp.github.io/MadNLP.jl) to solve the NLP problem again:
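A minimal sketch of that step (the `using MadNLP` import and keeping the result in `nlp_sol` are assumptions consistent with the rest of the tutorial):

```julia
using MadNLP  # assumption: MadNLP.jl exports the madnlp entry point

# Solve the same discretized problem again, this time with MadNLP.
nlp_sol = madnlp(nlp)
```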
## Initial guess
An initial guess, including warm start, can be passed to [`direct_transcription`](@extref CTDirect.direct_transcription) the same way as for `solve`.
```@example main-nlp
docp = direct_transcription(ocp; init=sol)
nothing # hide
```
It can also be changed after the transcription is done, with [`set_initial_guess`](@extref CTDirect.set_initial_guess).
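A minimal sketch of such a call; the argument order (the DOCP, then the NLP model, then the initial guess) is an assumption to be checked against the [`set_initial_guess`](@extref CTDirect.set_initial_guess) reference:

```julia
# Sketch, assuming the signature set_initial_guess(docp, nlp, init).
# Here the previously computed OCP solution `sol` is reused as initial guess.
set_initial_guess(docp, nlp, sol)
```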