Commit 6a74968

Improving and updating docs regarding carrier usage.
1 parent a3b73e7 commit 6a74968

File tree

5 files changed: +71 -119 lines changed

README.md

Lines changed: 30 additions & 56 deletions
@@ -10,83 +10,57 @@ Currently there are three tested algorithms:
 * ADMM sharing variant on flexibility providing resources
 * COHDA, Combinatorial Optimization Heuristic for Distributed Agents, which minimizes the distance of schedule sums to a given target schedule

-There is one carrier implemented:
+There are two carriers implemented:
+* A lightweight built-in carrier
 * Mango.jl, agent framework for the simulation of distributed systems, DO provides roles to which the specific algorithms can be assigned to

-Note that the package is highly work in progress.
+Note that the package is still work in progress.

-### Using the sharing ADMM with flex actors (e.g. for resource optimization) with Mango.jl
+## Examples

-```julia
-using Mango
-using DistributedResourceOptimization
+### Using the sharing ADMM with flex actors (e.g. for energy resource optimization)

-@role struct HandleOptimizationResultRole
-    got_it::Bool = false
-end
+You can use DRO in two different ways. The express style just executes the distributed optimization routine without embedding it into a larger system; for this, two variants are available: distributed and coordinated optimization.

-function Mango.handle_message(role::HandleOptimizationResultRole, message::OptimizationFinishedMessage, meta::Any)
-    role.got_it = true
-end
+#### Coordinated (ADMM Sharing with resource actors)

-container = create_tcp_container("127.0.0.1", 5555)
+```julia
+using DistributedResourceOptimization

-# create participant models
 flex_actor = create_admm_flex_actor_one_to_many(10, [0.1, 0.5, -1])
 flex_actor2 = create_admm_flex_actor_one_to_many(15, [0.1, 0.5, -1])
-flex_actor3 = create_admm_flex_actor_one_to_many(10, [0.1, 0.5, -1])
+flex_actor3 = create_admm_flex_actor_one_to_many(10, [-1.0, 0.0, 1.0])

-# create coordinator with objective
 coordinator = create_sharing_target_distance_admm_coordinator()

-# create roles to integrate admm in Mango.jl
-dor = DistributedOptimizationRole(flex_actor, tid=:custom)
-dor2 = DistributedOptimizationRole(flex_actor2, tid=:custom)
-dor3 = DistributedOptimizationRole(flex_actor3, tid=:custom)
-coord_role = CoordinatorRole(coordinator, tid=:custom, include_self=true)
-
-# role to handle a result
-handle = HandleOptimizationResultRole()
-handle2 = HandleOptimizationResultRole()
-handle3 = HandleOptimizationResultRole()
-
-# create agents
-add_agent_composed_of(container, dor, handle)
-c = add_agent_composed_of(container, dor2, handle2)
-ca = add_agent_composed_of(container, coord_role, dor3, handle3)
-
-# create a topology of the agents
-auto_assign!(complete_topology(3, tid=:custom), container)
-
-# run the simulation with start message and wait for result
-activate(container) do
-    wait(send_message(c, StartCoordinatedDistributedOptimization(create_admm_start(create_admm_sharing_data([0.2, 1, -2]))), address(ca)))
-    wait(coord_role.task)
-end
+admm_start = create_admm_start(create_admm_sharing_data([-4, 0, 6], [5,1,1]))
+
+start_coordinated_optimization([flex_actor, flex_actor2, flex_actor3], coordinator, admm_start)
 ```

-### Using COHDA with Mango.jl
+#### Distributed (COHDA)

 ```julia
-using Mango
 using DistributedResourceOptimization

-container = create_tcp_container("127.0.0.1", 5555)
-
-# create agents with local model wrapped in the general distributed optimization role
-agent_one = add_agent_composed_of(container, DistributedOptimizationRole(
-    create_cohda_participant(1, [[0.0, 1, 2], [1, 2, 3]])))
-agent_two = add_agent_composed_of(container, DistributedOptimizationRole(
-    create_cohda_participant(2, [[0.0, 1, 2], [1, 2, 3]])))
+actor_one = create_cohda_participant(1, [[0.0, 1, 2], [1, 2, 3]])
+actor_two = create_cohda_participant(2, [[0.0, 1, 2], [1, 2, 3]])

-# create start message
 initial_message = create_cohda_start_message([1.2, 2, 3])

-# create topology
-auto_assign!(complete_topology(2), container)
+wait(start_distributed_optimization([actor_one, actor_two], initial_message))
+```
+
+If you need more control, e.g. when integrating the optimization into a larger system, we recommend using the carrier system directly, e.g. with the built-in carrier:
+
+```julia
+using DistributedResourceOptimization
+
+container = ActorContainer()
+actor_one = SimpleCarrier(container, create_cohda_participant(1, [[0.0, 1, 2], [1, 2, 3]]))
+actor_two = SimpleCarrier(container, create_cohda_participant(2, [[0.0, 1, 2], [1, 2, 3]]))
+
+initial_message = create_cohda_start_message([1.2, 2, 3])

-# run simulation
-activate(container) do
-    send_message(agent_one, initial_message, address(agent_two))
-end
+wait(send_to_other(actor_one, initial_message, cid(actor_two)))
 ```

docs/src/api.md

Lines changed: 6 additions & 5 deletions
@@ -16,15 +16,15 @@ Private = false
 Pages = ["carrier/core.jl"]
 ```

-## Simple
+### Simple

 ```@autodocs
 Modules = [DistributedResourceOptimization]
 Private = false
 Pages = ["carrier/simple.jl"]
 ```

-## Mango
+### Mango

 ```@autodocs
 Modules = [DistributedResourceOptimization]
@@ -33,23 +33,24 @@ Pages = ["carrier/mango.jl"]
 ```

 # ADMM
-## ADMM Sharing
+
+### Sharing

 ```@autodocs
 Modules = [DistributedResourceOptimization]
 Private = false
 Pages = ["algorithm/admm/sharing_admm.jl"]
 ```

-## ADMM Consensus
+### Consensus

 ```@autodocs
 Modules = [DistributedResourceOptimization]
 Private = false
 Pages = ["algorithm/admm/consensus_admm.jl"]
 ```

-## ADMM Flex
+### Actors

 ```@autodocs
 Modules = [DistributedResourceOptimization]

docs/src/getting_started.md

Lines changed: 27 additions & 54 deletions
@@ -1,75 +1,48 @@
-### Using the sharing ADMM with flex actors (e.g. for resource optimization) with Mango.jl
+## Getting Started

-```julia
-using Mango
-using DistributedResourceOptimization
+### Using the sharing ADMM with flex actors (e.g. for energy resource optimization)

-@role struct HandleOptimizationResultRole
-    got_it::Bool = false
-end
+You can use DRO in two different ways. The express style just executes the distributed optimization routine without embedding it into a larger system; for this, two variants are available: distributed and coordinated optimization.

-function Mango.handle_message(role::HandleOptimizationResultRole, message::OptimizationFinishedMessage, meta::Any)
-    role.got_it = true
-end
+#### Coordinated (ADMM Sharing with resource actors)

-container = create_tcp_container("127.0.0.1", 5555)
+```julia
+using DistributedResourceOptimization

-# create participant models
 flex_actor = create_admm_flex_actor_one_to_many(10, [0.1, 0.5, -1])
 flex_actor2 = create_admm_flex_actor_one_to_many(15, [0.1, 0.5, -1])
-flex_actor3 = create_admm_flex_actor_one_to_many(10, [0.1, 0.5, -1])
+flex_actor3 = create_admm_flex_actor_one_to_many(10, [-1.0, 0.0, 1.0])

-# create coordinator with objective
 coordinator = create_sharing_target_distance_admm_coordinator()

-# create roles to integrate admm in Mango.jl
-dor = DistributedOptimizationRole(flex_actor, tid=:custom)
-dor2 = DistributedOptimizationRole(flex_actor2, tid=:custom)
-dor3 = DistributedOptimizationRole(flex_actor3, tid=:custom)
-coord_role = CoordinatorRole(coordinator, tid=:custom, include_self=true)
-
-# role to handle a result
-handle = HandleOptimizationResultRole()
-handle2 = HandleOptimizationResultRole()
-handle3 = HandleOptimizationResultRole()
-
-# create agents
-add_agent_composed_of(container, dor, handle)
-c = add_agent_composed_of(container, dor2, handle2)
-ca = add_agent_composed_of(container, coord_role, dor3, handle3)
-
-# create a topology of the agents
-auto_assign!(complete_topology(3, tid=:custom), container)
-
-# run the simulation with start message and wait for result
-activate(container) do
-    wait(send_message(c, StartCoordinatedDistributedOptimization(create_admm_start(create_admm_sharing_data([0.2, 1, -2]))), address(ca)))
-    wait(coord_role.task)
-end
+admm_start = create_admm_start(create_admm_sharing_data([-4, 0, 6], [5,1,1]))
+
+start_coordinated_optimization([flex_actor, flex_actor2, flex_actor3], coordinator, admm_start)
 ```

-### Using COHDA with Mango.jl
+#### Distributed (COHDA)

 ```julia
-using Mango
 using DistributedResourceOptimization

-container = create_tcp_container("127.0.0.1", 5555)
-
-# create agents with local model wrapped in the general distributed optimization role
-agent_one = add_agent_composed_of(container, DistributedOptimizationRole(
-    create_cohda_participant(1, [[0.0, 1, 2], [1, 2, 3]])))
-agent_two = add_agent_composed_of(container, DistributedOptimizationRole(
-    create_cohda_participant(2, [[0.0, 1, 2], [1, 2, 3]])))
+actor_one = create_cohda_participant(1, [[0.0, 1, 2], [1, 2, 3]])
+actor_two = create_cohda_participant(2, [[0.0, 1, 2], [1, 2, 3]])

-# create start message
 initial_message = create_cohda_start_message([1.2, 2, 3])

-# create topology
-auto_assign!(complete_topology(2), container)
+wait(start_distributed_optimization([actor_one, actor_two], initial_message))
+```
+
+If you need more control, e.g. when integrating the optimization into a larger system, we recommend using the carrier system directly, e.g. with the built-in carrier:
+
+```julia
+using DistributedResourceOptimization
+
+container = ActorContainer()
+actor_one = SimpleCarrier(container, create_cohda_participant(1, [[0.0, 1, 2], [1, 2, 3]]))
+actor_two = SimpleCarrier(container, create_cohda_participant(2, [[0.0, 1, 2], [1, 2, 3]]))
+
+initial_message = create_cohda_start_message([1.2, 2, 3])

-# run simulation
-activate(container) do
-    send_message(agent_one, initial_message, address(agent_two))
-end
+wait(send_to_other(actor_one, initial_message, cid(actor_two)))
 ```

docs/src/index.md

Lines changed: 3 additions & 4 deletions
@@ -7,9 +7,8 @@ Currently there are three tested algorithms:
 * ADMM sharing variant on flexibility providing resources
 * COHDA, Combinatorial Optimization Heuristic for Distributed Agents, which minimizes the distance of schedule sums to a given target schedule

-There is one carrier implemented:
+There are two carriers implemented:
+* A lightweight built-in carrier
 * Mango.jl, agent framework for the simulation of distributed systems, DO provides roles to which the specific algorithms can be assigned to

-Note that the package is highly work in progress.
-
-However, DRO is available on the general Julia registry, and can therfore be installed calling `]add DistributedResourceOptimization`.
+Note that the package is still work in progress.

src/algorithm/admm/consensus_admm.jl

Lines changed: 5 additions & 0 deletions
@@ -42,6 +42,11 @@ function create_consensus_target_reach_admm_coordinator()
     return ADMMGenericCoordinator(global_actor=ADMMConsensusGlobalActor())
 end

+"""
+    create_admm_start_consensus(target::Vector{<:Real})
+
+Create an `ADMMStart` message for consensus ADMM with the specified target vector.
+"""
 function create_admm_start_consensus(target::Vector{<:Real})
     return ADMMStart(target, length(target))
 end
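For orientation, a minimal sketch of how the newly documented helper could be called, assuming `create_admm_start_consensus` and `create_consensus_target_reach_admm_coordinator` (both shown in this file) are exported by the package; the target vector is illustrative only, and running the optimization itself would follow the express-style examples from the updated README:

```julia
using DistributedResourceOptimization

# Coordinator for consensus ADMM, built by the factory defined earlier in this file.
coordinator = create_consensus_target_reach_admm_coordinator()

# Start message carrying the target vector; per the function body above, this
# wraps the vector as ADMMStart(target, length(target)).
admm_start = create_admm_start_consensus([1.0, 2.0, 3.0])
```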
