37 commits
- 453fcc9 Revert "fix Proximal extension" (alyst, Mar 22, 2025)
- abc2847 Revert "fix NLopt extension" (alyst, Mar 22, 2025)
- 56cdef1 Revert "fix exporting structs from package extensions" (alyst, Mar 22, 2025)
- 421927e types.jl: move SemOptimizer API into abstract.jl (alyst, Mar 22, 2025)
- 84bd7bd NLoptResult should not be mutable (alyst, Mar 22, 2025)
- 930e0e5 SemNLOpt: use f or f => tol pair for constraints (alyst, Mar 22, 2025)
- 7bd1007 NLopt: update/simplify docs (alyst, Mar 22, 2025)
- 96f5f17 Optim.md: SemOptimizerOptim => SemOptimizer (alyst, Mar 22, 2025)
- 869429b regulariz.md: SemOptimProx => SemOptimizer (alyst, Mar 22, 2025)
- 6d88590 engine(): fix signature (Jan 27, 2026)
- 3e2de1d optimizer_engines(): new method (Jan 27, 2026)
- a560a01 SemOptimizer() ctor switch to Val(E) dispatch (Jan 27, 2026)
- 8dacd43 SemOptimizer: reattach docstrings to ctor (Jan 27, 2026)
- 06a9a13 constraints.md: cleanups (Jan 27, 2026)
- 519cff1 reg.md: cleanup (Jan 27, 2026)
- ce87dd4 NLopt.jl: fixup docstring (Jan 27, 2026)
- 51121fd docs: fixup docstring switch (Jan 27, 2026)
- de9d2e8 tut/nlopt.md: cleanups (Jan 27, 2026)
- 2b42351 reg.md: fixup (Jan 27, 2026)
- 1818c3f streamline docstrings (Maximilian-Stefan-Ernst, Jan 28, 2026)
- c002321 refactor docstring access with (Maximilian-Stefan-Ernst, Jan 28, 2026)
- a05d33d remove direct calls of SemOptimizerOptim and add optimizer to SemFit (Maximilian-Stefan-Ernst, Jan 28, 2026)
- 2045144 rename engine related functions (Maximilian-Stefan-Ernst, Jan 29, 2026)
- 35bce1c streamline engine error throwing (Maximilian-Stefan-Ernst, Jan 29, 2026)
- 98ca6b8 streamline optimization result methods (Maximilian-Stefan-Ernst, Jan 29, 2026)
- 2ddaf84 try fixing the optimizer online docs (Maximilian-Stefan-Ernst, Jan 29, 2026)
- a87823e fix proximal extension (Maximilian-Stefan-Ernst, Jan 30, 2026)
- 35fdbf0 fix tests (Maximilian-Stefan-Ernst, Jan 30, 2026)
- 457a5f7 start fixing docs (Maximilian-Stefan-Ernst, Jan 30, 2026)
- 1b30ef5 docs: fix opt engine docs (Feb 3, 2026)
- 013596b export optmizer_engine() (Feb 3, 2026)
- 5a8f483 optimizer_engine_dependencies: allow multiple deps (Feb 3, 2026)
- 39eb8ff SemOptimizer_impltype(engine) (Feb 3, 2026)
- 59a3ed8 SemOptimizerResult: optim wrapper (Feb 3, 2026)
- 615937f optimizer.md: rename to SemOptimizerMyopt (Feb 3, 2026)
- 2e3499e formatting fixes (Feb 3, 2026)
- da33a89 docs: apply suggestions (Feb 3, 2026)
2 changes: 2 additions & 0 deletions Project.toml
@@ -8,6 +8,7 @@ DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
DelimitedFiles = "8bb1440f-4735-579b-a4ab-409b98df4dab"
Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
FiniteDiff = "6a86dc24-6348-571c-b903-95158fe2bd41"
InteractiveUtils = "b77e0a4c-d291-57a0-90e8-8db25a27a240"
LazyArtifacts = "4af54fe1-eca0-43a8-85a7-787d91b784e3"
LineSearches = "d3d80556-e9d4-5f37-9878-2ab0fcc64255"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
@@ -30,6 +31,7 @@ StenoGraphs = "0.2 - 0.3, 0.4.1 - 0.5"
DataFrames = "1"
Distributions = "0.25"
FiniteDiff = "2"
InteractiveUtils = "1.11.0"
LineSearches = "7"
NLSolversBase = "7"
NLopt = "0.6, 1"
9 changes: 8 additions & 1 deletion docs/make.jl
@@ -1,7 +1,14 @@
using Documenter, StructuralEquationModels
using Documenter, StructuralEquationModels, NLopt, ProximalAlgorithms, ProximalOperators

SEM = StructuralEquationModels
SEMNLOptExt = Base.get_extension(StructuralEquationModels, :SEMNLOptExt)
SEMProximalOptExt = Base.get_extension(StructuralEquationModels, :SEMProximalOptExt)

makedocs(
sitename = "StructuralEquationModels.jl",
modules = [
SEM, SEMNLOptExt, SEMProximalOptExt,
],
pages = [
"index.md",
"Tutorials" => [
48 changes: 29 additions & 19 deletions docs/src/developer/optimizer.md
@@ -1,64 +1,74 @@
# Custom optimizer types

The optimizer part of a model connects it to the optimization backend.
Let's say we want to implement a new optimizer as `SemOptimizerName`. The first part of the implementation is very similar to loss functions, so we just show the implementation of `SemOptimizerOptim` here as a reference:
The optimizer part of a model connects it to the optimization backend.
Let's say we want to implement a new optimizer as `SemOptimizerMyopt`.
The first part of the implementation is very similar to loss functions,
so we just show the implementation of `SemOptimizerOptim` here as a reference:

```julia
############################################################################################
### Types and Constructor
############################################################################################
mutable struct SemOptimizerName{A, B} <: SemOptimizer{:Name}
struct SemOptimizerMyopt{A, B} <: SemOptimizer{:Myopt}
algorithm::A
options::B
end

SemOptimizer{:Name}(args...; kwargs...) = SemOptimizerName(args...; kwargs...)
SemOptimizer(::Val{:Myopt}, args...; kwargs...) = SemOptimizerMyopt(args...; kwargs...)

SemOptimizerName(;
SemOptimizer_impltype(::Val{:Myopt}) = SemOptimizerMyopt

SemOptimizerMyopt(;
algorithm = LBFGS(),
options = Optim.Options(; f_reltol = 1e-10, x_abstol = 1.5e-8),
kwargs...,
) = SemOptimizerName(algorithm, options)
) = SemOptimizerMyopt(algorithm, options)

struct MyOptResult{O <: SemOptimizerMyopt} <: SEM.SemOptimizerResult{O}
optimizer::O
# ... fields that store the optimization results
end

############################################################################################
### Recommended methods
############################################################################################

update_observed(optimizer::SemOptimizerName, observed::SemObserved; kwargs...) = optimizer
update_observed(optimizer::SemOptimizerMyopt, observed::SemObserved; kwargs...) = optimizer

############################################################################################
### additional methods
############################################################################################

algorithm(optimizer::SemOptimizerName) = optimizer.algorithm
options(optimizer::SemOptimizerName) = optimizer.options
options(optimizer::SemOptimizerMyopt) = optimizer.options
```

Note that your optimizer is a subtype of `SemOptimizer{:Name}`, where you can choose a `:Name` that can later be used as a keyword argument to `fit(engine = :Name)`.
Similarly, `SemOptimizer{:Name}(args...; kwargs...) = SemOptimizerName(args...; kwargs...)` should be defined as well as a constructor that uses only keyword arguments:
Note that your optimizer is a subtype of `SemOptimizer{:Myopt}`,
where `:Myopt` is an engine name of your choice that can later be used as a keyword argument to `fit(engine = :Myopt)`.
Similarly, `SemOptimizer(::Val{:Myopt}, args...; kwargs...) = SemOptimizerMyopt(args...; kwargs...)`
should be defined, as well as a constructor that uses only keyword arguments:

```julia
SemOptimizerName(;
SemOptimizerMyopt(;
algorithm = LBFGS(),
options = Optim.Options(; f_reltol = 1e-10, x_abstol = 1.5e-8),
kwargs...,
) = SemOptimizerName(algorithm, options)
) = SemOptimizerMyopt(algorithm, options)
```
A method for `update_observed` and additional methods might be useful, but are not necessary.

Now comes the substantive part: We need to provide a method for `fit`:

```julia
function fit(
optim::SemOptimizerName,
optim::SemOptimizerMyopt,
model::AbstractSem,
start_params::AbstractVector;
kwargs...,
)
optimization_result = ...

...

optimization_result = MyOptResult(optim, ...)

return SemFit(minimum, minimizer, start_params, model, optimization_result)
end
```
@@ -68,7 +78,7 @@ The method has to return a `SemFit` object that consists of the minimum of the o
In addition, you might want to provide methods to access properties of your optimization result:

```julia
optimizer(res::MyOptimizationResult) = ...
n_iterations(res::MyOptimizationResult) = ...
convergence(res::MyOptimizationResult) = ...
algorithm_name(res::MyOptResult) = ...
n_iterations(res::MyOptResult) = ...
convergence(res::MyOptResult) = ...
```
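
With these pieces in place, the new engine plugs into the usual workflow. A hypothetical usage sketch (assuming `model` is an `AbstractSem` and the definitions above have been evaluated):

```julia
# Construct the optimizer explicitly via the engine name ...
my_optimizer = SemOptimizer(engine = :Myopt)

# ... or select the engine directly when fitting a model.
# (It is assumed here that `fit` forwards the remaining keyword
# arguments to the engine's keyword constructor.)
model_fit = fit(model; engine = :Myopt)
```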
2 changes: 1 addition & 1 deletion docs/src/performance/simulation.md
@@ -67,7 +67,7 @@ For example,

new_observed = SemObservedData(;data = data_2, specification = partable)

my_optimizer = SemOptimizerOptim()
my_optimizer = SemOptimizer()

new_optimizer = update_observed(my_optimizer, new_observed)
```
41 changes: 15 additions & 26 deletions docs/src/tutorials/backends/nlopt.md
@@ -1,47 +1,36 @@
# Using NLopt.jl

[`SemOptimizerNLopt`](@ref) implements the connection to `NLopt.jl`.
It is only available if the `NLopt` package is loaded alongside `StructuralEquationModels.jl` in the running Julia session.
It takes a bunch of arguments:
When [`NLopt.jl`](https://github.com/jump-dev/NLopt.jl) is loaded in the running Julia session,
it can be used as the [`SemOptimizer`](@ref) engine by specifying `engine = :NLopt`
(see [NLopt-specific options](@ref SEMNLOptExt.SemOptimizerNLopt)).
Among other things, `NLopt` enables constrained optimization of SEMs, which is
explained in the [Constrained optimization](@ref) section.

```julia
• algorithm: optimization algorithm

• options::Dict{Symbol, Any}: options for the optimization algorithm

• local_algorithm: local optimization algorithm

• local_options::Dict{Symbol, Any}: options for the local optimization algorithm

• equality_constraints::Vector{NLoptConstraint}: vector of equality constraints

• inequality_constraints::Vector{NLoptConstraint}: vector of inequality constraints
```
Constraints are explained in the section on [Constrained optimization](@ref).

The defaults are LBFGS as the optimization algorithm and the standard options from `NLopt.jl`.
We can choose something different:
We can override the default *NLopt* algorithm (LBFGS) and instead use
the *augmented Lagrangian* method with LBFGS as the *local* optimization algorithm,
stop after a maximum of 200 evaluations, and use a relative tolerance of
the objective value of `1e-6` as the stopping criterion for the local algorithm:

```julia
using NLopt

my_optimizer = SemOptimizerNLopt(;
my_optimizer = SemOptimizer(;
engine = :NLopt,
algorithm = :AUGLAG,
options = Dict(:maxeval => 200),
local_algorithm = :LD_LBFGS,
local_options = Dict(:ftol_rel => 1e-6)
)
```

This uses an augmented lagrangian method with LBFGS as the local optimization algorithm, stops at a maximum of 200 evaluations and uses a relative tolerance of the objective value of `1e-6` as the stopping criterion for the local algorithm.

To see how to use the optimizer to actually fit a model now, check out the [Model fitting](@ref) section.
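For instance, a minimal sketch, assuming `my_model` is an already specified model and that start parameters are derived automatically when not passed explicitly:

```julia
# Fit `my_model` with the NLopt-backed optimizer configured above.
model_fit = fit(my_optimizer, my_model)
```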

In the NLopt docs, you can find explanations about the different [algorithms](https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/) and a [tutorial](https://nlopt.readthedocs.io/en/latest/NLopt_Introduction/) that also explains the different options.
In the *NLopt* docs, you can find details about the available [optimization algorithms](https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/),
and a [tutorial](https://nlopt.readthedocs.io/en/latest/NLopt_Introduction/) that demonstrates how to tweak their behavior.

To choose an algorithm, just pass its name without the 'NLOPT\_' prefix (for example, 'NLOPT\_LD\_SLSQP' can be used by passing `algorithm = :LD_SLSQP`).
To choose an algorithm, just pass its name without the `NLOPT_` prefix (for example, `NLOPT_LD_SLSQP` can be used by passing `algorithm = :LD_SLSQP`).

The README of the [julia package](https://github.com/JuliaOpt/NLopt.jl) may also be helpful, and provides a list of options:
The *README* of [*NLopt.jl*](https://github.com/JuliaOpt/NLopt.jl) may also be helpful, and provides a list of options:

- `algorithm`
- `stopval`
20 changes: 10 additions & 10 deletions docs/src/tutorials/backends/optim.md
@@ -1,23 +1,23 @@
# Using Optim.jl

[`SemOptimizerOptim`](@ref) implements the connection to `Optim.jl`.
It takes two arguments, `algorithm` and `options`.
The defaults are LBFGS as the optimization algorithm and the standard options from `Optim.jl`.
We can load the `Optim` and `LineSearches` packages to choose something different:
[*Optim.jl*](https://github.com/JuliaNLSolvers/Optim.jl) is the default optimization engine of *SEM.jl*;
see [`SEM.SemOptimizerOptim`](@ref) for a full list of its parameters.
It defaults to the LBFGS algorithm, but we can load the `Optim` and `LineSearches` packages
and specify BFGS (not L-BFGS!) with a back-tracking linesearch and a Hager-Zhang initial step length guess:

```julia
using Optim, LineSearches

my_optimizer = SemOptimizerOptim(
my_optimizer = SemOptimizer(
algorithm = BFGS(
linesearch = BackTracking(order=3),
linesearch = BackTracking(order=3),
alphaguess = InitialHagerZhang()
),
options = Optim.Options(show_trace = true)
)
),
options = Optim.Options(show_trace = true)
)
```

This optimizer will use BFGS (!not L-BFGS) with a back tracking linesearch and a certain initial step length guess. Also, the trace of the optimization will be printed to the console.
Note that we used `options` to print the optimization progress to the console.

To see how to use the optimizer to actually fit a model now, check out the [Model fitting](@ref) section.

22 changes: 11 additions & 11 deletions docs/src/tutorials/concept.md
@@ -21,13 +21,13 @@ So everything that can be used as the 'observed' part has to be of type `SemObse

Here is an overview on the available building blocks:

|[`SemObserved`](@ref) | [`SemImplied`](@ref) | [`SemLossFunction`](@ref) | [`SemOptimizer`](@ref) |
|---------------------------------|-----------------------|---------------------------|-------------------------------|
| [`SemObservedData`](@ref) | [`RAM`](@ref) | [`SemML`](@ref) | [`SemOptimizerOptim`](@ref) |
| [`SemObservedCovariance`](@ref) | [`RAMSymbolic`](@ref) | [`SemWLS`](@ref) | [`SemOptimizerNLopt`](@ref) |
| [`SemObservedMissing`](@ref) | [`ImpliedEmpty`](@ref)| [`SemFIML`](@ref) | |
| | | [`SemRidge`](@ref) | |
| | | [`SemConstant`](@ref) | |
|[`SemObserved`](@ref) | [`SemImplied`](@ref) | [`SemLossFunction`](@ref) | [`SemOptimizer`](@ref) |
|---------------------------------|-----------------------|---------------------------|----------------------------|
| [`SemObservedData`](@ref) | [`RAM`](@ref) | [`SemML`](@ref) | :Optim |
| [`SemObservedCovariance`](@ref) | [`RAMSymbolic`](@ref) | [`SemWLS`](@ref) | :NLopt |
| [`SemObservedMissing`](@ref) | [`ImpliedEmpty`](@ref)| [`SemFIML`](@ref) | :Proximal |
| | | [`SemRidge`](@ref) | |
| | | [`SemConstant`](@ref) | |

The rest of this page explains the building blocks for each part. First, we explain every part and give an overview on the different options that are available. After that, the [API - model parts](@ref) section serves as a reference for detailed explanations about the different options.
(How to stick them together to a final model is explained in the section on [Model Construction](@ref).)
@@ -52,7 +52,7 @@ Available loss functions are
## The optimizer part aka `SemOptimizer`
The optimizer part of a model connects to the numerical optimization backend used to fit the model.
It can be used to control options like the optimization algorithm, linesearch, stopping criteria, etc.
There are currently three available backends, [`SemOptimizerOptim`](@ref) connecting to the [Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl) backend, [`SemOptimizerNLopt`](@ref) connecting to the [NLopt.jl](https://github.com/JuliaOpt/NLopt.jl) backend and [`SemOptimizerProximal`](@ref) connecting to [ProximalAlgorithms.jl](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl).
There are currently three available engines (i.e., backends that carry out the numerical optimization): `:Optim` connecting to [Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl), `:NLopt` connecting to [NLopt.jl](https://github.com/JuliaOpt/NLopt.jl), and `:Proximal` connecting to [ProximalAlgorithms.jl](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl).
For more information about the available options, see the tutorials on [Using Optim.jl](@ref) and [Using NLopt.jl](@ref), as well as [Constrained optimization](@ref) and [Regularization](@ref).
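For example, the engine is selected via the `engine` keyword when constructing the optimizer; a minimal sketch (engines other than `:Optim` require the corresponding package to be loaded first):

```julia
using StructuralEquationModels
using NLopt # loading NLopt makes the :NLopt engine available

optim_default = SemOptimizer()              # uses the default :Optim engine
optim_nlopt = SemOptimizer(engine = :NLopt) # NLopt-backed optimizer
```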

# What to do next
@@ -102,7 +102,7 @@ SemConstant

```@docs
SemOptimizer
SemOptimizerOptim
SemOptimizerNLopt
SemOptimizerProximal
SEM.SemOptimizerOptim
SEMNLOptExt.SemOptimizerNLopt
SEMProximalOptExt.SemOptimizerProximal
```