# Performance tips

The package `ADNLPModels.jl` is designed to easily model optimization problems and to allow efficient access to the [`NLPModel API`](https://github.com/JuliaSmoothOptimizers/NLPModels.jl).
In this tutorial, we go over some tips to get the best performance out of your model.

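For instance (a minimal sketch; the two-variable objective below is made up for illustration), a model created with `ADNLPModel` exposes the usual NLPModel API functions such as `obj` and `grad`:

```julia
using ADNLPModels, NLPModels

f(x) = (x[1] - 1)^2 + 4 * (x[2] - x[1])^2  # hypothetical smooth objective
nlp = ADNLPModel(f, [0.5; 0.5])  # unconstrained model with starting point x0

obj(nlp, nlp.meta.x0)   # objective value through the NLPModel API
grad(nlp, nlp.meta.x0)  # gradient computed by automatic differentiation
```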
## Use in-place constructor

```@example ex1
v = ones(2)
@btime jprod_residual!(nls_in, x, v, Fx)
```

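As a self-contained sketch of this in-place pattern (the two-equation residual below is a stand-in, not the tutorial's `nls_in`), the constructor `ADNLSModel!` takes a residual function that writes into a preallocated vector:

```julia
using ADNLPModels, NLPModels, BenchmarkTools

# Hypothetical in-place residual F!(Fx, x): writes the residual into Fx
# and returns it, so no vector is allocated per evaluation.
function F!(Fx, x)
  Fx[1] = x[1] - 1
  Fx[2] = 10 * (x[2] - x[1]^2)
  return Fx
end

nls_in = ADNLSModel!(F!, ones(2), 2)  # in-place constructor: 2 variables, 2 equations

x = ones(2)
v = ones(2)
Fx = zeros(2)  # preallocated buffer reused by every product below
@btime jprod_residual!($nls_in, $x, $v, $Fx)  # Jacobian-of-residual times v, in place
```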
## Use only the needed backends

It is tempting to define the most generic and efficient `ADNLPModel` from the start.

or, equivalently, using the `matrix_free` keyword argument

```@example ex2
nlp = ADNLPModel!(f, x0, c_in, lcon, ucon, show_time = true, matrix_free = true)
```

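A self-contained sketch with stand-in problem data (hypothetical objective and in-place constraint; the tutorial defines its own `f`, `x0`, `c_in`, `lcon`, and `ucon`) shows that operator-style products remain available on a matrix-free model:

```julia
using ADNLPModels, NLPModels

# Stand-in data, used only for illustration.
f(x) = (x[1] - 1)^2 + (x[2] - 2)^2
x0 = zeros(2)
c_in!(cx, x) = (cx[1] = x[1] + x[2]; cx)  # in-place constraint for ADNLPModel!
lcon = [0.0]
ucon = [1.0]

nlp = ADNLPModel!(f, x0, c_in!, lcon, ucon, matrix_free = true)
jprod(nlp, x0, ones(2))  # Jacobian-vector product, no Jacobian matrix formed
```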
More classical nonlinear optimization solvers, such as [Ipopt.jl](https://github.com/jump-dev/Ipopt.jl), [KNITRO.jl](https://github.com/jump-dev/KNITRO.jl), or [MadNLP.jl](https://github.com/MadNLP/MadNLP.jl), require only the gradient and the sparse Jacobian and Hessian.
This means that we can set all the other backends to `ADNLPModels.EmptyADbackend`.

```@example ex2
nlp = ADNLPModel!(f, x0, c_in, lcon, ucon, jprod_backend = ADNLPModels.EmptyADbackend,
  jtprod_backend = ADNLPModels.EmptyADbackend, hprod_backend = ADNLPModels.EmptyADbackend,
  ghjvprod_backend = ADNLPModels.EmptyADbackend, show_time = true)
```

## Benchmarks

This package implements several backends for each method, and it is also possible to design your own backend.

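For instance (a sketch assuming the predefined backends shipped with `ADNLPModels`; the objective is made up), the backend used for a given method is selected through the corresponding keyword argument:

```julia
using ADNLPModels, NLPModels, ReverseDiff

f(x) = sum(xi^2 for xi in x)  # hypothetical objective
# Use a ReverseDiff-based gradient while keeping the defaults elsewhere.
nlp = ADNLPModel(f, ones(10), gradient_backend = ADNLPModels.ReverseDiffADGradient)
grad(nlp, nlp.meta.x0)  # gradient now computed with ReverseDiff
```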