4 changes: 4 additions & 0 deletions doc/modules/ROOT/nav.adoc
@@ -3,13 +3,17 @@
* Coroutines
** xref:coroutines/tasks.adoc[Tasks]
** xref:coroutines/launching.adoc[Launching Tasks]
** xref:coroutines/when-all.adoc[Concurrent Composition]
** xref:coroutines/affinity.adoc[Executor Affinity]
** xref:coroutines/cancellation.adoc[Cancellation]
* Execution
** xref:execution/thread-pool.adoc[Thread Pool]
** xref:execution/contexts.adoc[Execution Contexts]
** xref:execution/executors.adoc[Executors]
** xref:execution/strand.adoc[Strands]
** xref:execution/frame-allocation.adoc[Frame Allocation]
* Synchronization
** xref:synchronization/async-mutex.adoc[Async Mutex]
* Buffers
** xref:buffers/index.adoc[Buffer Types]
** xref:buffers/sequences.adoc[Buffer Sequences]
3 changes: 2 additions & 1 deletion doc/modules/ROOT/pages/coroutines/affinity.adoc
@@ -230,5 +230,6 @@ Do NOT use `run_on` when:

== Next Steps

* xref:when-all.adoc[Concurrent Composition] — Running multiple tasks concurrently
* xref:cancellation.adoc[Cancellation] — Stop token propagation
* xref:../execution/executors.adoc[Executors] — The execution model in depth
* xref:../execution/strand.adoc[Strands] — Serializing coroutine execution
2 changes: 1 addition & 1 deletion doc/modules/ROOT/pages/coroutines/cancellation.adoc
@@ -264,5 +264,5 @@ Do NOT use cancellation when:

== Next Steps

* xref:when-all.adoc[Concurrent Composition] — Cancellation with `when_all`
* xref:../execution/executors.adoc[Executors] — Understand the execution model
* xref:reference:boost/capy.adoc[API Reference] — Full reference documentation
19 changes: 11 additions & 8 deletions doc/modules/ROOT/pages/coroutines/launching.adoc
@@ -124,18 +124,20 @@ runner(make_task()); // Won't compile (deleted move)
This design ensures the frame allocator is active when your task is created,
enabling the frame-recycling optimization.
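
A minimal sketch of the supported pattern, using the same `ex` and `make_task`
placeholders as above: create the task inside the launch expression itself.

[source,cpp]
----
// The task is created inside the full launch expression, while run_async's
// frame allocator is still active.
run_async(ex)(make_task());
----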

== Custom Frame Allocators
== Stop Token Support

By default, `run_async` uses a recycling allocator that caches deallocated
frames. For custom allocation strategies:
Pass a stop token for cooperative cancellation:

[source,cpp]
----
my_pool_allocator alloc{pool};
run_async(ex, alloc)(my_task());
std::stop_source source;
run_async(ex, source.get_token())(cancellable_task());

// Later: request cancellation
source.request_stop();
----

The allocator is used for all coroutine frames in the launched call tree.
See xref:cancellation.adoc[Cancellation] for details on stop token propagation.

== When NOT to Use run_async

@@ -165,11 +167,12 @@ Do NOT use `run_async` when:
| Success + error handlers
| `run_async(ex)(task, on_success, on_error)`

| Custom allocator
| `run_async(ex, alloc)(task)`
| With stop token
| `run_async(ex, stop_token)(task)`
|===

== Next Steps

* xref:when-all.adoc[Concurrent Composition] — Run multiple tasks concurrently
* xref:affinity.adoc[Executor Affinity] — Control where coroutines execute
* xref:../execution/frame-allocation.adoc[Frame Allocation] — Optimize memory usage
3 changes: 2 additions & 1 deletion doc/modules/ROOT/pages/coroutines/tasks.adoc
@@ -170,7 +170,7 @@ Tasks are appropriate when:
Tasks are NOT appropriate when:

* The operation is purely synchronous — just use a regular function
* You need parallel execution — tasks are sequential; use parallel composition
* You need parallel execution — tasks are sequential; use `when_all` for concurrency
* You need to detach and forget — tasks must be awaited or explicitly launched

== Summary
@@ -197,4 +197,5 @@ Tasks are NOT appropriate when:
Now that you understand tasks, learn how to run them:

* xref:launching.adoc[Launching Tasks] — Start tasks with `run_async`
* xref:when-all.adoc[Concurrent Composition] — Run tasks concurrently with `when_all`
* xref:affinity.adoc[Executor Affinity] — Control where tasks execute
274 changes: 274 additions & 0 deletions doc/modules/ROOT/pages/coroutines/when-all.adoc
@@ -0,0 +1,274 @@
//
// Copyright (c) 2025 Vinnie Falco (vinnie.falco@gmail.com)
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
//
// Official repository: https://github.com/cppalliance/capy
//

= Concurrent Composition

This page explains how to run multiple tasks concurrently using `when_all`.

NOTE: Code snippets assume `using namespace boost::capy;` is in effect.

== The Problem

Tasks are sequential by default. When you await multiple tasks:

[source,cpp]
----
task<void> sequential()
{
    int a = co_await fetch_a(); // Wait for A
    int b = co_await fetch_b(); // Then wait for B
    int c = co_await fetch_c(); // Then wait for C
    // Total time: A + B + C
}
----

Each task waits for the previous one to complete. For independent operations,
this wastes time.

== when_all

The `when_all` function launches multiple tasks concurrently and waits for
all of them to complete:

[source,cpp]
----
#include <boost/capy/when_all.hpp>

task<void> concurrent()
{
    auto [a, b, c] = co_await when_all(
        fetch_a(),
        fetch_b(),
        fetch_c()
    );
    // Total time: max(A, B, C)
}
----

All three fetches run concurrently. The `co_await` completes when the slowest
one finishes.

== Return Value

`when_all` returns a tuple of results, with void types filtered out:

[source,cpp]
----
// All non-void: get a tuple of all results
auto [x, y] = co_await when_all(
    task_returning_int(), // task<int>
    task_returning_string() // task<std::string>
);
// x is int, y is std::string

// Mixed with void: void tasks don't contribute
auto [value] = co_await when_all(
    task_returning_int(), // task<int>
    task_void(), // task<void> - no contribution
    task_void() // task<void> - no contribution
);
// value is int (only non-void result)

// All void: returns void
co_await when_all(
    task_void(),
    task_void()
);
// No tuple, no return value
----

Results appear in the same order as the input tasks.

== Error Handling

Exceptions propagate from child tasks to the parent. When a task throws:

1. The exception is captured
2. Stop is requested for sibling tasks
3. All tasks are allowed to complete (or respond to stop)
4. The first exception is rethrown

[source,cpp]
----
task<void> handle_errors()
{
    try {
        co_await when_all(
            might_fail(),
            another_task(),
            third_task()
        );
    } catch (std::exception const& e) {
        // First exception from any child
        std::cerr << "Error: " << e.what() << "\n";
    }
}
----

=== First-Error Semantics

Only the first exception is captured; subsequent exceptions are discarded.
This matches the behavior of most concurrent frameworks.
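
As a sketch with hypothetical tasks (reusing the assumed `do_chunk` awaitable
from the stop-propagation example below), both children throw, but only the
exception that occurs first reaches the caller:

[source,cpp]
----
task<void> fails_immediately()
{
    throw std::runtime_error("first failure");
    co_return; // unreachable, but makes this function a coroutine
}

task<void> fails_later()
{
    co_await do_chunk(0); // assumed to finish after the first task has thrown
    throw std::runtime_error("second failure"); // captured, then discarded
}

task<void> observe()
{
    try {
        co_await when_all(fails_immediately(), fails_later());
    } catch (std::exception const& e) {
        std::cerr << e.what() << "\n"; // prints "first failure"
    }
}
----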

=== Stop Propagation

When an error occurs, `when_all` requests stop for all sibling tasks. Tasks
that support cancellation can respond by exiting early:

[source,cpp]
----
task<void> cancellable_work()
{
    auto token = co_await get_stop_token();
    for (int i = 0; i < 1000; ++i)
    {
        if (token.stop_requested())
            co_return; // Exit early
        co_await do_chunk(i);
    }
}

task<void> example()
{
    // If failing_task throws, cancellable_work sees stop_requested
    co_await when_all(
        failing_task(),
        cancellable_work()
    );
}
----

== Parent Stop Token

`when_all` forwards the parent's stop token to children. If the parent is
cancelled, all children see the request:

[source,cpp]
----
task<void> parent()
{
    // Parent has a stop token from run_async
    co_await when_all(
        child_a(), // Sees parent's stop token
        child_b() // Sees parent's stop token
    );
}

std::stop_source source;
run_async(ex, source.get_token())(parent());

// Later: cancel everything
source.request_stop();
----

== Execution Model

All child tasks inherit the parent's executor affinity:

[source,cpp]
----
task<void> parent() // Running on executor ex
{
    co_await when_all(
        child_a(), // Runs on ex
        child_b() // Runs on ex
    );
}
----

Children are launched via `dispatch()` on the executor, which may run them
inline or queue them depending on the executor implementation.

=== No Parallelism by Default

With a single-threaded executor, tasks interleave but don't run truly in
parallel:

[source,cpp]
----
thread_pool pool(1); // Single thread
run_async(pool.get_executor())(parent());

// Tasks interleave at suspension points, but only one runs at a time
----

For true parallelism, use a multi-threaded pool:

[source,cpp]
----
thread_pool pool(4); // Four threads
run_async(pool.get_executor())(parent());

// Tasks may run on different threads
----

== Example: Parallel HTTP Fetches

[source,cpp]
----
task<std::string> fetch(http_client& client, std::string url)
{
    co_return co_await client.get(url);
}

task<void> fetch_all(http_client& client)
{
    auto [home, about, contact] = co_await when_all(
        fetch(client, "https://example.com/"),
        fetch(client, "https://example.com/about"),
        fetch(client, "https://example.com/contact")
    );

    std::cout << "Home: " << home.size() << " bytes\n";
    std::cout << "About: " << about.size() << " bytes\n";
    std::cout << "Contact: " << contact.size() << " bytes\n";
}
----

== When NOT to Use when_all

Use `when_all` when:

* Operations are independent
* You want to reduce total wait time
* You need all results before proceeding

Do NOT use `when_all` when:

* Operations depend on each other — use sequential `co_await`
* You need results as they complete — consider `when_any` (not yet available)
* Memory is constrained — concurrent tasks consume more memory

== Summary

[cols="1,3"]
|===
| Feature | Description

| `when_all(tasks...)`
| Launch tasks concurrently, wait for all

| Return type
| Tuple of non-void results in input order

| Error handling
| First exception propagated, siblings get stop

| Affinity
| Children inherit parent's executor

| Stop propagation
| Parent and sibling stop tokens forwarded
|===

== Next Steps

* xref:cancellation.adoc[Cancellation] — Stop token propagation
* xref:../execution/thread-pool.adoc[Thread Pool] — Multi-threaded execution
* xref:affinity.adoc[Executor Affinity] — Control where tasks run
2 changes: 1 addition & 1 deletion doc/modules/ROOT/pages/execution/contexts.adoc
@@ -315,5 +315,5 @@ Do NOT use `execution_context` directly when:
== Next Steps

* xref:thread-pool.adoc[Thread Pool] — Using the thread pool execution context
* xref:strand.adoc[Strands] — Serializing coroutine execution
* xref:frame-allocation.adoc[Frame Allocation] — Optimize coroutine memory
* xref:reference:boost/capy.adoc[API Reference] — Full reference documentation
2 changes: 1 addition & 1 deletion doc/modules/ROOT/pages/execution/executors.adoc
@@ -225,4 +225,4 @@ Do NOT use executors directly when:
== Next Steps

* xref:contexts.adoc[Execution Contexts] — Service management and thread pools
* xref:reference:boost/capy.adoc[API Reference] — Full reference documentation
* xref:strand.adoc[Strands] — Serializing coroutine execution
2 changes: 1 addition & 1 deletion doc/modules/ROOT/pages/execution/frame-allocation.adoc
@@ -208,4 +208,4 @@ Do NOT use custom allocators when:
== Next Steps

* xref:../utilities/containers.adoc[Containers] — Type-erased storage
* xref:reference:boost/capy.adoc[API Reference] — Full reference documentation
* xref:../performance-tuning/high-performance-allocators.adoc[High-Performance Allocators] — System-wide memory optimization