Merged
1 change: 1 addition & 0 deletions Project.toml
@@ -18,6 +18,7 @@ PrettyTables = "08abe8d2-0d0c-5749-adfa-8a2ac140af0d"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
StatsAPI = "82ae8749-77ed-4fe6-ae5f-f523153014b0"
StatsBase = "2913bbd2-ae8a-5f71-8c99-4fb6c76f3a91"
StenoGraphs = "78862bba-adae-4a83-bb4d-33c106177f81"
Symbolics = "0c5d862f-8b57-4792-8d23-62f2024744c7"
2 changes: 1 addition & 1 deletion README.md
@@ -15,7 +15,7 @@ Models you can fit include
- Multigroup SEM
- Sums of arbitrary loss functions (everything the optimizer can handle).

# What are the merrits?
# What are the merits?

We provide fast objective functions, gradients, and in some cases Hessians, as well as approximations thereof.
As a user, you can easily define custom loss functions.
10 changes: 5 additions & 5 deletions docs/src/developer/loss.md
@@ -79,7 +79,7 @@ model = SemFiniteDiff(
loss = (SemML, myridge)
)

model_fit = sem_fit(model)
model_fit = fit(model)
```

This is one way of specifying the model - we now have **one model** with **multiple loss functions**. Because we did not provide a gradient for `Ridge`, we have to specify a `SemFiniteDiff` model that computes numerical gradients with finite difference approximation.
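For orientation (this sketch is not part of the diff): a custom loss like the `myridge` used above ultimately contributes an ℓ₂ penalty on (a subset of) the parameter vector. The penalty itself is plain Julia; `α` and the penalized index set below are hypothetical choices, not the package's API:

```julia
# Illustrative sketch only: the ridge penalty a custom loss such as `myridge`
# adds to the objective. `α` and `idx` are hypothetical choices.
ridge_penalty(θ; α = 0.01, idx = eachindex(θ)) = α * sum(abs2, θ[idx])

ridge_penalty([1.0, 2.0, 3.0]; α = 0.5, idx = 1:2)  # 0.5 * (1² + 2²) = 2.5
```

Because only the objective is defined here, the surrounding model must compute gradients numerically, which is exactly why `SemFiniteDiff` is used above.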
@@ -117,17 +117,17 @@ model_new = Sem(
loss = (SemML, myridge)
)

model_fit = sem_fit(model_new)
model_fit = fit(model_new)
```

The results are the same, but we can verify that the computational costs are much lower (for this, the Julia package `BenchmarkTools` has to be installed):

```julia
using BenchmarkTools

@benchmark sem_fit(model)
@benchmark fit(model)

@benchmark sem_fit(model_new)
@benchmark fit(model_new)
```

The exact results of those benchmarks are of course highly dependent on your system (processor, RAM, etc.), but you should see that the median computation time with analytical gradients drops to about 5% of the computation time without analytical gradients.
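The gap has a simple cost structure behind it: a finite-difference gradient needs on the order of 2n extra objective evaluations per gradient, while an analytical gradient is a single pass. A self-contained toy sketch (not from the docs; the objective and step size are illustrative):

```julia
# Toy objective with a known gradient, to contrast central differences
# with an analytical gradient. For a quadratic f the central difference
# is exact up to floating-point rounding.
f(x) = sum(abs2, x)              # f(x) = Σ xᵢ², so ∇f(x) = 2x
grad_analytic(x) = 2 .* x

function grad_fd(f, x; h = 1e-6)
    g = similar(x)
    for i in eachindex(x)
        e = zeros(length(x))
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2h)   # two f-calls per coordinate
    end
    return g
end

x = randn(10)
maximum(abs.(grad_fd(f, x) .- grad_analytic(x)))  # tiny: rounding error only
```

So for a model with n free parameters, every optimizer iteration pays roughly 2n objective evaluations extra, which is what the benchmark above makes visible.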
@@ -241,7 +241,7 @@ model_ml = SemFiniteDiff(
loss = MaximumLikelihood()
)

model_fit = sem_fit(model_ml)
model_fit = fit(model_ml)
```

If you want to differentiate your own loss functions via automatic differentiation, check out the [AutoDiffSEM](https://github.com/StructuralEquationModels/AutoDiffSEM) package.
6 changes: 3 additions & 3 deletions docs/src/developer/optimizer.md
@@ -34,7 +34,7 @@ algorithm(optimizer::SemOptimizerName) = optimizer.algorithm
options(optimizer::SemOptimizerName) = optimizer.options
```

Note that your optimizer is a subtype of `SemOptimizer{:Name}`, where you can choose a `:Name` that can later be used as a keyword argument to `sem_fit(engine = :Name)`.
Note that your optimizer is a subtype of `SemOptimizer{:Name}`, where you can choose a `:Name` that can later be used as a keyword argument to `fit(engine = :Name)`.
Similarly, `SemOptimizer{:Name}(args...; kwargs...) = SemOptimizerName(args...; kwargs...)` should be defined as well as a constructor that uses only keyword arguments:

```julia
@@ -46,10 +46,10 @@ SemOptimizerName(;
```
A method for `update_observed` and additional methods might be useful, but are not necessary.
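As a standalone illustration of the keyword-constructor pattern described above (using a toy stand-in type — the field names and defaults are illustrative, not the package's actual API):

```julia
# Toy stand-in for the `SemOptimizerName` pattern: positional fields plus a
# keyword-only constructor with defaults. `kwargs...` swallows extra options.
struct MyOptimizer{A, O}
    algorithm::A
    options::O
end

MyOptimizer(; algorithm = :LBFGS, options = Dict{Symbol, Any}(), kwargs...) =
    MyOptimizer(algorithm, options)

opt = MyOptimizer(algorithm = :BFGS)
opt.algorithm  # :BFGS
```

The keyword constructor is what lets the package forward `fit(engine = :Name; kwargs...)` calls to your optimizer without knowing its fields.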

Now comes the substantive part: We need to provide a method for `sem_fit`:
Now comes the substantive part: We need to provide a method for `fit`:

```julia
function sem_fit(
function fit(
optim::SemOptimizerName,
model::AbstractSem,
start_params::AbstractVector;
2 changes: 1 addition & 1 deletion docs/src/internals/files.md
@@ -11,7 +11,7 @@ Source code is in the `"src"` folder:
- `"types.jl"` defines all abstract types and the basic type hierarchy
- `"objective_gradient_hessian.jl"` contains methods for computing objective, gradient and hessian values for different model types as well as generic fallback methods
- The four folders `"observed"`, `"implied"`, `"loss"` and `"diff"` contain implementations of specific subtypes (for example, the `"loss"` folder contains a file `"ML.jl"` that implements the `SemML` loss function).
- `"optimizer"` contains connections to different optimization backends (aka methods for `sem_fit`)
- `"optimizer"` contains connections to different optimization backends (aka methods for `fit`)
- `"optim.jl"`: connection to the `Optim.jl` package
- `"frontend"` contains user-facing functions
- `"specification"` contains functionality for model specification
6 changes: 3 additions & 3 deletions docs/src/performance/mixed_differentiation.md
@@ -19,15 +19,15 @@ model_ridge = SemFiniteDiff(

model_ml_ridge = SemEnsemble(model_ml, model_ridge)

model_ml_ridge_fit = sem_fit(model_ml_ridge)
model_ml_ridge_fit = fit(model_ml_ridge)
```

The results of both methods will be the same, but we can verify that the computation costs differ (the package `BenchmarkTools` has to be installed for this):

```julia
using BenchmarkTools

@benchmark sem_fit(model)
@benchmark fit(model)

@benchmark sem_fit(model_ml_ridge)
@benchmark fit(model_ml_ridge)
```
4 changes: 2 additions & 2 deletions docs/src/performance/mkl.md
@@ -27,9 +27,9 @@ To check the performance implications for fitting a SEM, you can use the [`Bench
```julia
using BenchmarkTools

@benchmark sem_fit($your_model)
@benchmark fit($your_model)

using MKL

@benchmark sem_fit($your_model)
@benchmark fit($your_model)
```
2 changes: 1 addition & 1 deletion docs/src/performance/simulation.md
@@ -100,7 +100,7 @@ models = [model1, model2]
fits = Vector{SemFit}(undef, 2)

Threads.@threads for i in 1:2
fits[i] = sem_fit(models[i])
fits[i] = fit(models[i])
end
```

4 changes: 2 additions & 2 deletions docs/src/performance/starting_values.md
@@ -1,9 +1,9 @@
# Starting values

The `sem_fit` function has a keyword argument that takes either a vector of starting values or a function that takes a model as input to compute starting values. Current options are `start_fabin3` for fabin 3 starting values [^Hägglund82] or `start_simple` for simple starting values. Additional keyword arguments to `sem_fit` are passed to the starting value function. For example,
The `fit` function has a keyword argument that takes either a vector of starting values or a function that takes a model as input to compute starting values. Current options are `start_fabin3` for fabin 3 starting values [^Hägglund82] or `start_simple` for simple starting values. Additional keyword arguments to `fit` are passed to the starting value function. For example,

```julia
sem_fit(
fit(
model;
start_val = start_simple,
start_covariances_latent = 0.5
2 changes: 1 addition & 1 deletion docs/src/tutorials/collection/multigroup.md
@@ -81,7 +81,7 @@ model_ml_multigroup = SemEnsemble(
We now fit the model and inspect the parameter estimates:

```@example mg; ansicolor = true
fit = sem_fit(model_ml_multigroup)
model_fit = fit(model_ml_multigroup)
update_estimate!(partable, model_fit)
details(partable)
```
6 changes: 3 additions & 3 deletions docs/src/tutorials/constraints/constraints.md
@@ -48,7 +48,7 @@ model = Sem(
data = data
)

model_fit = sem_fit(model)
model_fit = fit(model)

update_estimate!(partable, model_fit)

@@ -153,7 +153,7 @@ model_constrained = Sem(
data = data
)

model_fit_constrained = sem_fit(constrained_optimizer, model_constrained)
model_fit_constrained = fit(constrained_optimizer, model_constrained)
```

As you can see, the optimizer converged (`:XTOL_REACHED`) and investigating the solution yields
@@ -162,7 +162,7 @@ As you can see, the optimizer converged (`:XTOL_REACHED`) and investigating the
update_partable!(
partable,
:estimate_constr,
params(model_fit_constrained),
param_labels(model_fit_constrained),
solution(model_fit_constrained),
)

2 changes: 1 addition & 1 deletion docs/src/tutorials/construction/build_by_parts.md
@@ -65,5 +65,5 @@ optimizer = SemOptimizerOptim()

model_ml = Sem(observed, implied_ram, loss_ml)

sem_fit(optimizer, model_ml)
fit(optimizer, model_ml)
```
2 changes: 1 addition & 1 deletion docs/src/tutorials/construction/outer_constructor.md
@@ -131,4 +131,4 @@ model = SemFiniteDiff(
)
```

constructs a model that will use finite difference approximation if you estimate the parameters via `sem_fit(model)`.
constructs a model that will use finite difference approximation if you estimate the parameters via `fit(model)`.
2 changes: 1 addition & 1 deletion docs/src/tutorials/first_model.md
@@ -110,7 +110,7 @@ model = Sem(
We can now fit the model via

```@example high_level; ansicolor = true
model_fit = sem_fit(model)
model_fit = fit(model)
```

and compute fit measures as
12 changes: 6 additions & 6 deletions docs/src/tutorials/fitting/fitting.md
@@ -3,7 +3,7 @@
As we saw in [A first model](@ref), after you have built a model, you can fit it via

```julia
model_fit = sem_fit(model)
model_fit = fit(model)

# output

@@ -45,30 +45,30 @@ Structural Equation Model

## Choosing an optimizer

To choose a different optimizer, you can call `sem_fit` with the keyword argument `engine = ...`, and pass additional keyword arguments:
To choose a different optimizer, you can call `fit` with the keyword argument `engine = ...`, and pass additional keyword arguments:

```julia
using Optim

model_fit = sem_fit(model; engine = :Optim, algorithm = BFGS())
model_fit = fit(model; engine = :Optim, algorithm = BFGS())
```

Available options for engine are `:Optim`, `:NLopt` and `:Proximal`, where `:NLopt` and `:Proximal` are only available if the `NLopt.jl` and `ProximalAlgorithms.jl` packages are loaded respectively.

The available keyword arguments are listed in the sections [Using Optim.jl](@ref), [Using NLopt.jl](@ref) and [Regularization](@ref).

Alternatively, you can also explicitly define a `SemOptimizer` and pass it as the first argument to `fit`:
Alternative, you can also explicitely define a `SemOptimizer` and pass it as the first argument to `fit`:

```julia
my_optimizer = SemOptimizerOptim(algorithm = BFGS())

sem_fit(my_optimizer, model)
fit(my_optimizer, model)
```

You may also optionally specify [Starting values](@ref).

# API - model fitting

```@docs
sem_fit
fit
```
12 changes: 6 additions & 6 deletions docs/src/tutorials/inspection/inspection.md
@@ -42,13 +42,13 @@ model = Sem(
data = data
)

model_fit = sem_fit(model)
model_fit = fit(model)
```

After you have fitted a model,

```julia
model_fit = sem_fit(model)
model_fit = fit(model)
```

you end up with an object of type [`SemFit`](@ref).
@@ -87,8 +87,8 @@ We can also update the `ParameterTable` object with other information via [`upda
se_bs = se_bootstrap(model_fit; n_boot = 20)
se_he = se_hessian(model_fit)

update_partable!(partable, :se_hessian, params(model_fit), se_he)
update_partable!(partable, :se_bootstrap, params(model_fit), se_bs)
update_partable!(partable, :se_hessian, param_labels(model_fit), se_he)
update_partable!(partable, :se_bootstrap, param_labels(model_fit), se_bs)

details(partable)
```
@@ -126,11 +126,11 @@ fit_measures
AIC
BIC
χ²
df
dof
minus2ll
nobserved_vars
nsamples
params
param_labels
nparams
p_value
RMSEA
4 changes: 2 additions & 2 deletions docs/src/tutorials/meanstructure.md
@@ -96,7 +96,7 @@ model = Sem(
meanstructure = true
)

sem_fit(model)
fit(model)
```

If we build the model by parts, we have to pass the `meanstructure = true` argument to every part that requires it (when in doubt, simply consult the documentation for the respective part).
@@ -112,5 +112,5 @@ ml = SemML(observed = observed, meanstructure = true)

model = Sem(observed, implied_ram, SemLoss(ml))

sem_fit(model)
fit(model)
```
14 changes: 7 additions & 7 deletions docs/src/tutorials/regularization/regularization.md
@@ -120,25 +120,25 @@ Let's fit the regularized model

```@example reg

fit_lasso = sem_fit(optimizer_lasso, model_lasso)
fit_lasso = fit(optimizer_lasso, model_lasso)
```

and compare the solution to unregularized estimates:

```@example reg
fit = sem_fit(model)
model_fit = fit(model)

update_estimate!(partable, model_fit)

update_partable!(partable, :estimate_lasso, params(fit_lasso), solution(fit_lasso))
update_partable!(partable, :estimate_lasso, param_labels(fit_lasso), solution(fit_lasso))

details(partable)
```

Instead of explicitly defining a `SemOptimizerProximal` object, you can also pass `engine = :Proximal` and additional keyword arguments to `fit`:
Instead of explicitely defining a `SemOptimizerProximal` object, you can also pass `engine = :Proximal` and additional keyword arguments to `fit`:

```@example reg
fit = sem_fit(model; engine = :Proximal, operator_g = NormL1(λ))
model_fit = fit(model; engine = :Proximal, operator_g = NormL1(λ))
```

## Second example - mixed l1 and l0 regularization
@@ -162,13 +162,13 @@ model_mixed = Sem(
data = data,
)

fit_mixed = sem_fit(model_mixed; engine = :Proximal, operator_g = prox_operator)
fit_mixed = fit(model_mixed; engine = :Proximal, operator_g = prox_operator)
```

Let's again compare the different results:

```@example reg
update_partable!(partable, :estimate_mixed, params(fit_mixed), solution(fit_mixed))
update_partable!(partable, :estimate_mixed, param_labels(fit_mixed), solution(fit_mixed))

details(partable)
```
4 changes: 2 additions & 2 deletions docs/src/tutorials/specification/ram_matrices.md
@@ -59,7 +59,7 @@ spec = RAMMatrices(;
A = A,
S = S,
F = F,
params = θ,
param_labels = θ,
vars = [:x1, :x2, :x3, :y1, :y2, :y3, :y4, :y5, :y6, :y7, :y8, :ind60, :dem60, :dem65]
)

@@ -90,7 +90,7 @@ spec = RAMMatrices(;
A = A,
S = S,
F = F,
params = θ,
param_labels = θ,
vars = [:x1, :x2, :x3, :y1, :y2, :y3, :y4, :y5, :y6, :y7, :y8, :ind60, :dem60, :dem65]
)
```
4 changes: 2 additions & 2 deletions ext/SEMNLOptExt/NLopt.jl
@@ -71,8 +71,8 @@ function SemFit_NLopt(optimization_result, model::AbstractSem, start_val, opt)
)
end

# sem_fit method
function SEM.sem_fit(
# fit method
function SEM.fit(
optim::SemOptimizerNLopt,
model::AbstractSem,
start_params::AbstractVector;
2 changes: 1 addition & 1 deletion ext/SEMProximalOptExt/ProximalAlgorithms.jl
@@ -40,7 +40,7 @@ mutable struct ProximalResult
result::Any
end

function SEM.sem_fit(
function SEM.fit(
optim::SemOptimizerProximal,
model::AbstractSem,
start_params::AbstractVector;