
Commit 1a643f0

Adding some docs.
1 parent 650b38d commit 1a643f0

File tree: 8 files changed, +146 -2 lines changed

docs/make.jl

Lines changed: 8 additions & 0 deletions

@@ -11,6 +11,14 @@ with_logger(logger) do
     sitename="DistributedResourceOptimization.jl Documentation",
     pages=Any["Home"=>"index.md",
         "Getting Started"=>"getting_started.md",
+        "Algorithms" => [
+            "ADMM"=>"algorithms/admm.md",
+            "COHDA"=>"algorithms/cohda.md",
+        ],
+        "Carrier" => [
+            "Simple"=>"carrier/simple.md",
+            "Mango"=>"carrier/mango.md",
+        ],
         "API"=>"api.md"],
     repo="https://github.com/Digitalized-Energy-Systems/DistributedResourceOptimization.jl",
 )

docs/src/algorithms/admm.md

Lines changed: 91 additions & 0 deletions

In DRO.jl every ADMM optimization consists of two components: the ADMM problem form itself, and the local model, which determines the local constraints and objectives.

# Problem Form

## Consensus

The single-global-variable consensus form can be written as

```math
\begin{equation}
\begin{split}
\min_{\{x_i\},\,z}\;\; \sum_{i=1}^N f_i(x_i) \\
\quad\text{s.t.}\quad x_i = z,\;\; i=1,\dots,N,
\end{split}
\end{equation}
```

where ``f_i`` is the local objective of agent ``i`` and ``x_i`` the decision variable of this agent.

With the dual variables ``u_i`` and the penalty ``\rho``, the update iteration reads:

```math
\begin{align}
x_i^{k+1}
&= \arg\min_{x_i} \;
f_i(x_i)
+ \frac{\rho}{2}\big\| x_i - \big(z^k - u_i^k \big) \big\|_2^2
\\
z^{k+1}
&= \arg\min_{z} \;
g(z) + \frac{N \rho}{2}\left\|
z - \Big( \bar{x}^{k+1} + \bar{u}^k \Big)
\right\|_2^2 \\
u_i^{k+1}
&= u_i^k + x_i^{k+1} - z^{k+1}
\end{align}
```

To instantiate a coordinator for the consensus form, use [`create_consensus_target_reach_admm_coordinator`](@ref). To start the negotiation, use [`create_admm_start_consensus`](@ref).
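The consensus iteration above can be sketched numerically. The following is an illustrative example, not the DRO.jl API: each agent has a quadratic local objective ``f_i(x) = \tfrac{1}{2}(x - a_i)^2`` (the values `a` are assumed example data) and the global term ``g`` is zero, so both proximal steps have closed forms and the consensus value converges to the mean of the ``a_i``.

```python
import numpy as np

# Illustrative consensus ADMM sketch (not the DRO.jl API):
# agent i minimizes f_i(x) = 0.5*(x - a_i)^2 subject to x_i = z.
# The x-update has the closed form x_i = (a_i + rho*(z - u_i)) / (1 + rho).
a = np.array([1.0, 4.0, 7.0])   # assumed local targets
rho = 1.0
x = np.zeros_like(a)
u = np.zeros_like(a)
z = 0.0

for _ in range(200):
    x = (a + rho * (z - u)) / (1.0 + rho)   # local x-updates
    z = np.mean(x + u)                      # z-update (g == 0)
    u = u + x - z                           # scaled dual updates

# z approaches the minimizer of sum_i f_i, i.e. mean(a) = 4.0 here.
print(round(z, 6))
```

At the fixed point all ``x_i`` agree with ``z``, and the residuals ``x_i - z`` drive the dual variables that price the disagreement.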
## Sharing

Take the sharing problem:

```math
\begin{equation}
\begin{split}
\min_{\{x_i\},\,z}\;\; \sum_{i=1}^N f_i(x_i) \;+\; g(z)\\
\quad\text{s.t.}\quad \sum_{i=1}^N x_i = z,
\end{split}
\end{equation}
```

where ``f_i`` is the local objective of agent ``i``, ``x_i`` the decision variable of this agent, and ``g`` the global objective.

With the dual variable ``u`` and the penalty ``\rho``, the generic update iteration reads (here ``z`` is rescaled to the per-agent average of the shared sum, hence ``g(N\cdot z)``):

```math
\begin{align}
x_i^{k+1}
&= \arg\min_{x_i}\;
f_i(x_i) + \tfrac{\rho}{2}\,\big\lVert x_i - (z^k - u^k) \big\rVert_2^2,
\quad i=1,\dots,N,
\\[6pt]
z^{k+1}
&= \arg\min_{z}\;
g(N\cdot z) + \tfrac{N\rho}{2}\,\big\lVert z - \bar{x}^{\,k+1} - u^k \big\rVert_2^2,
\\
\bar{x}^{\,k+1}
&= \tfrac{1}{N}\sum_{i=1}^N x_i^{k+1},
\\[6pt]
u^{k+1}
&= u^k + \bar{x}^{\,k+1} - z^{k+1}.
\end{align}
```

To instantiate a coordinator for the sharing form, use [`create_sharing_admm_coordinator`](@ref). To start the negotiation, you can use [`create_admm_start`](@ref).
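The sharing iteration above can likewise be sketched with closed-form proximal steps. This is an illustrative example, not the DRO.jl API: quadratic local objectives ``f_i(x) = \tfrac{1}{2}(x - a_i)^2`` and a target-distance global objective ``g(s) = \tfrac{1}{2}(s - T)^2`` evaluated at ``s = N z`` (the data `a` and target `T` are assumed).

```python
import numpy as np

# Illustrative sharing ADMM sketch (not the DRO.jl API):
# f_i(x) = 0.5*(x - a_i)^2, g(s) = 0.5*(s - T)^2 applied to s = N*z.
a = np.array([1.0, 2.0, 3.0])   # assumed local preferences
T = 12.0                        # assumed target for the shared sum
N = len(a)
rho = 1.0
z, u = 0.0, 0.0

for _ in range(100):
    x = (a + rho * (z - u)) / (1.0 + rho)       # x-updates (closed form)
    x_bar = x.mean()
    z = (T + rho * (x_bar + u)) / (N + rho)     # z-update for g(N*z)
    u = u + x_bar - z                           # dual update on averages

# N*z approaches the optimal shared sum (sum(a) + N*T)/(N + 1) = 10.5 here.
print(round(N * z, 6))
```

The z-update's closed form follows from setting the derivative of ``\tfrac{1}{2}(Nz - T)^2 + \tfrac{N\rho}{2}(z - \bar{x} - u)^2`` to zero, giving ``z(N+\rho) = T + \rho(\bar{x} + u)``.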
# Local Models

## Flexibility Actor

Each local actor ``i`` has some flexibility across ``m`` resources and a decision on the provided flexibility ``x_i``. The decision is constrained by

* lower and upper bounds ``l_i \leq x_i \leq u_i``
* coupling constraints ``C_i x_i \leq d_i``
* linear penalties ``S_i`` for prioritization

To instantiate a flexibility actor, use [`create_admm_flex_actor_one_to_many`](@ref).
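The constraint set above can be made concrete with a minimal feasibility check. This is an illustrative sketch, not the DRO.jl data structures: `l`, `u`, `C`, and `d` are assumed example values for ``m = 3`` resources, with one coupling row capping the total provided flexibility.

```python
import numpy as np

# Illustrative feasibility check for the flexibility-actor constraint set
# (not the DRO.jl data structures): box bounds l <= x <= u and one
# coupling constraint C x <= d capping the total provision.
l = np.array([0.0, 0.0, 0.0])
u = np.array([2.0, 3.0, 1.0])
C = np.array([[1.0, 1.0, 1.0]])   # assumed coupling: sum of provisions
d = np.array([4.0])

def feasible(x):
    """True iff x satisfies the box bounds and the coupling constraints."""
    return bool(np.all(l <= x) and np.all(x <= u) and np.all(C @ x <= d))

print(feasible(np.array([1.0, 2.0, 0.5])))   # within bounds, sum = 3.5 <= 4
print(feasible(np.array([2.0, 3.0, 1.0])))   # bounds ok, but sum = 6 > 4
```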

docs/src/algorithms/cohda.md

Lines changed: 15 additions & 0 deletions

COHDA is a distributed optimization heuristic that solves the MC-COP (Multiple-Choice Combinatorial Optimization Problem).

COHDA minimizes the distance between the sum of a set of (distributedly chosen) schedules and a target vector:

```math
\begin{equation}
\begin{split}
&\underset{x_{\rm i,j}}{\text{max}}~\left(-\lVert T - \sum_{i=1}^{N}\sum_{j=1}^{M} (U_{\rm i,j}\cdot x_{\rm i,j})\rVert_1\right)\\
&\text{with } \sum_{j=1}^{M}x_{{\rm i,j}} = 1\\
&x_{{\rm i,j}}\in\left\{0,\,1\right\},~i=1,\,\dots,\,N,~j=1,\,\dots,\,M.
\end{split}
\end{equation}
```

To use COHDA, [`create_cohda_participant`](@ref) and [`create_cohda_start_message`](@ref) can be used.
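The MC-COP objective above can be illustrated by brute-force enumeration on a tiny instance. This sketch shows the problem COHDA solves, not the COHDA heuristic itself; the target `T` and candidate schedules `U` are assumed example data, with each of ``N = 2`` agents picking exactly one of its ``M = 2`` schedules.

```python
import itertools
import numpy as np

# Brute-force illustration of the MC-COP (not the COHDA heuristic itself):
# each agent i picks exactly one candidate schedule U[i][j]; minimize the
# 1-norm distance of the summed choice to the target T.
T = np.array([5.0, 5.0])
U = [  # assumed data: N = 2 agents, M = 2 candidate schedules each
    [np.array([1.0, 2.0]), np.array([3.0, 1.0])],
    [np.array([2.0, 3.0]), np.array([4.0, 4.0])],
]

best_choice, best_dist = None, np.inf
for choice in itertools.product(range(2), repeat=len(U)):
    total = sum(U[i][j] for i, j in enumerate(choice))
    dist = np.abs(T - total).sum()      # 1-norm distance to the target
    if dist < best_dist:
        best_choice, best_dist = choice, dist

print(best_choice, best_dist)
```

Enumeration is exponential in ``N``, which is why a distributed heuristic such as COHDA is used on realistic instances.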

docs/src/api.md

Lines changed: 1 addition & 1 deletion

@@ -46,5 +46,5 @@ Pages = ["algorithm/admm/consensus_admm.jl"]
 ```@autodocs
 Modules = [DistributedResourceOptimization]
 Private = false
-Pages = ["algorithm/admm/conflex_actor.jl"]
+Pages = ["algorithm/admm/flex_actor.jl"]
 ```

docs/src/carrier/mango.md

Lines changed: 3 additions & 0 deletions

To use the distributed algorithms, the role [`DistributedOptimizationRole`](@ref) can be used to integrate an algorithm.

Some distributed algorithms require a coordinator; in this case, the additional role [`CoordinatorRole`](@ref) can be used.

docs/src/carrier/simple.md

Lines changed: 1 addition & 0 deletions

TBD

src/algorithm/admm/flex_actor.jl

Lines changed: 16 additions & 0 deletions

@@ -44,6 +44,22 @@ function _create_C_and_d(u::Vector{<:Real})
     return C, d
 end

+"""
+    create_admm_flex_actor_one_to_many(in_capacity::Real, η::Vector{Float64}, P::Union{Nothing,Vector{<:Real}}=nothing)
+
+Creates an ADMM flex actor for a one-to-many resource allocation scenario.
+
+# Arguments
+- `in_capacity::Real`: The input capacity of the resource.
+- `η::Vector{Float64}`: Vector of efficiency parameters of the resources.
+- `P::Union{Nothing,Vector{<:Real}}`: Optional vector of priorities. If not provided, defaults to `nothing`.
+
+# Returns
+A flex actor object configured for one-to-many ADMM optimization.
+
+# Notes
+This function is typically used in distributed resource optimization problems where a single resource is allocated to multiple consumers using the ADMM algorithm.
+"""
 function create_admm_flex_actor_one_to_many(in_capacity::Real, η::Vector{Float64}, P::Union{Nothing,Vector{<:Real}}=nothing)
     tech_capacity = in_capacity .* η
src/algorithm/admm/sharing_admm.jl

Lines changed: 11 additions & 1 deletion

@@ -1,4 +1,4 @@
-export create_sharing_target_distance_admm_coordinator, ADMMSharingGlobalActor, ADMMTargetDistanceObjective, create_admm_sharing_data
+export create_sharing_target_distance_admm_coordinator, ADMMSharingGlobalActor, ADMMTargetDistanceObjective, create_admm_sharing_data, create_sharing_admm_coordinator

 using JuMP
 using OSQP

@@ -85,3 +85,13 @@ end
 function create_sharing_target_distance_admm_coordinator()
     return ADMMGenericCoordinator(global_actor=ADMMSharingGlobalActor(ADMMTargetDistanceObjective()))
 end
+
+"""
+    create_sharing_admm_coordinator(objective::ADMMGlobalObjective)
+
+Creates a generic sharing ADMM coordinator with the given global objective.
+
+# Arguments
+- `objective::ADMMGlobalObjective`: The global objective function to be used in the ADMM sharing coordinator.
+"""
+function create_sharing_admm_coordinator(objective::ADMMGlobalObjective)
+    return ADMMGenericCoordinator(global_actor=ADMMSharingGlobalActor(objective))
+end
