diff --git a/Evolution/NNNN-retry-backoff.md b/Evolution/NNNN-retry-backoff.md
new file mode 100644
index 00000000..ae82e50e
--- /dev/null
+++ b/Evolution/NNNN-retry-backoff.md
@@ -0,0 +1,213 @@
+# Retry & Backoff
+
+* Proposal: [NNNN](NNNN-retry-backoff.md)
+* Authors: [Philipp Gabriel](https://github.com/ph1ps)
+* Review Manager: TBD
+* Status: **Implemented**
+
+## Introduction
+
+This proposal introduces a `retry` function and a suite of backoff strategies for Swift Async Algorithms, enabling robust retries of failed asynchronous operations with customizable delays and error-driven retry decisions.
+
+Swift forums thread: [Discussion thread topic for that proposal](https://forums.swift.org/t/pitch-retry-backoff/82483)
+
+## Motivation
+
+Retry logic with backoff is a common requirement in asynchronous programming, especially for operations subject to transient failures such as network requests. Today, developers must reimplement retry loops manually, leading to fragmented and error-prone solutions across the ecosystem.
+
+Providing a standard `retry` function and reusable backoff strategies in Swift Async Algorithms ensures consistent, safe, and well-tested patterns for handling transient failures.
+
+## Proposed solution
+
+This proposal introduces a retry function that executes an asynchronous operation up to a specified number of attempts, with customizable delays and error-based retry decisions between attempts.
+
+```swift
+@available(AsyncAlgorithms 1.1, *)
+nonisolated(nonsending) public func retry<Result, ClockType, ErrorType>(
+  maxAttempts: Int,
+  tolerance: ClockType.Instant.Duration? = nil,
+  clock: ClockType = ContinuousClock(),
+  operation: () async throws(ErrorType) -> Result,
+  strategy: (ErrorType) -> RetryAction<ClockType.Instant.Duration> = { _ in .backoff(.zero) }
+) async throws -> Result where ClockType: Clock, ErrorType: Error
+```
+
+```swift
+@available(AsyncAlgorithms 1.1, *)
+public struct RetryAction<Duration: DurationProtocol> {
+  public static var stop: Self
+  public static func backoff(_ duration: Duration) -> Self
+}
+```
+
+Additionally, this proposal includes a suite of backoff strategies that can be used to generate delays between retry attempts. The core strategies provide different patterns for calculating delays: constant intervals, linear growth, and exponential growth.
+
+```swift
+@available(AsyncAlgorithms 1.1, *)
+public enum Backoff {
+  public static func constant(_ constant: Duration) -> some BackoffStrategy
+  public static func constant<Duration: DurationProtocol>(_ constant: Duration) -> some BackoffStrategy
+  public static func linear(increment: Duration, initial: Duration) -> some BackoffStrategy
+  public static func exponential(factor: Int128, initial: Duration) -> some BackoffStrategy
+}
+```
+
+These strategies can be modified to enforce minimum or maximum delays, or to add jitter to prevent the thundering herd problem.
+
+```swift
+@available(AsyncAlgorithms 1.1, *)
+extension BackoffStrategy {
+  public func minimum(_ minimum: Duration) -> some BackoffStrategy
+  public func maximum(_ maximum: Duration) -> some BackoffStrategy
+}
+@available(AsyncAlgorithms 1.1, *)
+extension BackoffStrategy where Duration == Swift.Duration {
+  public func fullJitter<RNG: RandomNumberGenerator>(using generator: RNG = SystemRandomNumberGenerator()) -> some BackoffStrategy
+  public func equalJitter<RNG: RandomNumberGenerator>(using generator: RNG = SystemRandomNumberGenerator()) -> some BackoffStrategy
+}
+```
+
+`BackoffStrategy` is a protocol with an associated type `Duration`, which is required to conform to `DurationProtocol`:
+
+```swift
+@available(AsyncAlgorithms 1.1, *)
+public protocol BackoffStrategy {
+  associatedtype Duration: DurationProtocol
+  mutating func nextDuration() -> Duration
+}
+```
+
+The linear, exponential, and jitter strategies require `Swift.Duration` rather than an arbitrary type conforming to `DurationProtocol`, because `DurationProtocol` does not support the more involved operations they need, such as overflow-reporting addition and multiplication or random value generation. The constant, minimum, and maximum strategies can work with any `DurationProtocol`-conforming type.
+
+## Detailed design
+
+### Retry
+
+The retry algorithm follows this sequence:
+1. Execute the operation
+2. If successful, return the result
+3. If failed and this was not the final attempt:
+   - Call the `strategy` closure with the error
+   - If the strategy returns `.stop`, rethrow the error immediately
+   - If the strategy returns `.backoff`, suspend for the given duration
+   - Return to step 1
+4. If failed on the final attempt, rethrow the error without consulting the strategy
+
+Given this sequence, there are four termination conditions under which retrying stops:
+- The operation completes without throwing an error
+- The operation has been attempted `maxAttempts` times
+- The strategy closure returns `.stop`
+- The clock throws
+
+#### Cancellation
+
+`retry` does not introduce special cancellation handling. If your code cooperatively cancels by throwing, ensure your strategy returns `.stop` for that error. Otherwise, retries continue unless the clock throws on cancellation (which, at the time of writing, both `ContinuousClock` and `SuspendingClock` do).
+
+### Backoff
+
+All proposed strategies conform to `BackoffStrategy`, which enables a builder-like syntax:
+```swift
+var backoff = Backoff
+  .exponential(factor: 2, initial: .milliseconds(100))
+  .maximum(.seconds(5))
+  .fullJitter()
+```
+
+#### Custom backoff
+
+Adopters may choose to create their own strategies. There is no requirement to conform to `BackoffStrategy`, since retry and backoff are decoupled; however, to use the provided modifiers (`minimum`, `maximum`, `fullJitter`, `equalJitter`), a strategy must conform.
+
+Each call to `nextDuration()` returns the delay for the next retry attempt. Strategies are naturally stateful. For instance, they may track the number of invocations or the previously returned duration to calculate the next delay.
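+
+As an illustration, the following is a minimal sketch of such a custom strategy. The `FibonacciBackoffStrategy` type and its scaling scheme are hypothetical and not part of this proposal; it conforms to `BackoffStrategy`, so the proposed modifiers can be applied to it:
+
+```swift
+/// A custom strategy whose delays follow the Fibonacci sequence,
+/// scaled by a base duration: 1x, 1x, 2x, 3x, 5x, ...
+struct FibonacciBackoffStrategy: BackoffStrategy {
+  let base: Swift.Duration
+  var previous = 0
+  var current = 1
+
+  mutating func nextDuration() -> Swift.Duration {
+    // Stateful: each call advances the Fibonacci progression.
+    let delay = base * current
+    (previous, current) = (current, previous + current)
+    return delay
+  }
+}
+
+// Because the strategy conforms to `BackoffStrategy`, the proposed modifiers compose with it.
+// (Unlike the proposed strategies, this sketch omits overflow handling for brevity.)
+var backoff = FibonacciBackoffStrategy(base: .milliseconds(100))
+  .maximum(.seconds(5))
+```
+
+As in the case studies below, such a strategy can then be consulted from the `strategy` closure via `backoff.nextDuration()`.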
+ +#### Standard backoff + +As previously mentioned this proposal introduces several common backoff strategies which include: + +- **Constant**: $f(n) = constant$ +- **Linear**: $f(n) = initial + increment * n$ +- **Exponential**: $f(n) = initial * factor ^ n$ +- **Minimum**: $f(n) = max(minimum, g(n))$ where $g(n)$ is the base strategy +- **Maximum**: $f(n) = min(maximum, g(n))$ where $g(n)$ is the base strategy +- **Full Jitter**: $f(n) = random(0, g(n))$ where $g(n)$ is the base strategy +- **Equal Jitter**: $f(n) = random(g(n) / 2, g(n))$ where $g(n)$ is the base strategy + +##### Sendability + +The proposed backoff strategies are not marked `Sendable`. +They are not meant to be shared across isolation domains, because their state evolves with each call to `nextDuration()`. +Re-creating the strategies when they are used in different domains is usually the correct approach. + +### Case studies + +The most common use cases encountered for recovering from transient failures are either: +- a system requiring its user to come up with a reasonable duration to let the system cool off +- a system providing its own duration which the user is supposed to honor to let the system cool off + +Both of these use cases can be implemented using the proposed algorithm, respectively: + +```swift +let rng = SystemRandomNumberGenerator() // or a seeded RNG for unit tests +var backoff = Backoff + .exponential(factor: 2, initial: .milliseconds(100)) + .maximum(.seconds(10)) + .fullJitter(using: rng) + +let response = try await retry(maxAttempts: 5) { + try await URLSession.shared.data(from: url) +} strategy: { error in + return .backoff(backoff.nextDuration()) +} +``` + +```swift +let response = try await retry(maxAttempts: 5) { + let (data, response) = try await URLSession.shared.data(from: url) + if + let response = response as? HTTPURLResponse, + response.statusCode == 429, + let retryAfter = response.value(forHTTPHeaderField: "Retry-After"), + let seconds = Double(retryAfter) + { + throw TooManyRequestsError(retryAfter: seconds) + } + return (data, response) +} strategy: { error in + if let error = error as? TooManyRequestsError { + return .backoff(.seconds(error.retryAfter)) + } else { + return .stop + } +} +``` +(For demonstration purposes only, a network server is used as the remote system.) + +## Effect on API resilience + +This proposal introduces a purely additive API with no impact on existing functionality or API resilience. + +## Future directions + +The jitter variants introduced by this proposal support custom `RandomNumberGenerator` by **copying** it in order to perform the necessary mutations. +This is not optimal and does not match the standard library's signatures of e.g. `shuffle()` or `randomElement()` which take an **`inout`** random number generator. +Due to the composability of backoff algorithms proposed here, this is not possible to adopt in current Swift. +If Swift gains the capability to "store" `inout` variables, the jitter variants should adopt this by adding new `inout` overloads and deprecating the copying overloads. + +## Alternatives considered + +### Passing attempt number to `BackoffStrategy ` + +Another option considered was to pass the current attempt number into the `BackoffStrategy`. + +Although this initially seems useful, it conflicts with the idea of strategies being stateful. A strategy is supposed to track its own progression (e.g. by counting invocations or storing the last duration). 
If the attempt number were provided externally, strategies would become "semi-stateful": mutating because of internal components such as a `RandomNumberGenerator`, but at the same time relying on an external counter instead of their own stored history. This dual model is harder to reason about and less consistent, so it was deliberately avoided. + +If adopters require access to the attempt number, they are free to implement this themselves, since the strategy is invoked each time a failure occurs, making it straightforward to maintain an external attempt counter. + +### Retry on `AsyncSequence` + +An alternative considered was adding retry functionality directly to `AsyncSequence` types, similar to how Combine provides retry on `Publisher`. However, after careful consideration, this was not included in the current proposal due to the lack of compelling real-world use cases. + +If specific use cases emerge in the future that demonstrate clear value for async sequence retry functionality, this could be considered in a separate proposal or amended to this proposal. + +## Acknowledgments + +Thanks to [Philippe Hausler](https://github.com/phausler), [Franz Busch](https://github.com/FranzBusch) and [Honza Dvorsky](https://github.com/czechboy0) for their thoughtful feedback and suggestions that helped refine the API design and improve its clarity and usability. diff --git a/Sources/AsyncAlgorithms/Retry/Backoff.swift b/Sources/AsyncAlgorithms/Retry/Backoff.swift new file mode 100644 index 00000000..9d9deb82 --- /dev/null +++ b/Sources/AsyncAlgorithms/Retry/Backoff.swift @@ -0,0 +1,293 @@ +#if compiler(>=6.2) +/// A protocol for defining backoff strategies that generate delays between retry attempts. +/// +/// Each call to `nextDuration()` returns the delay for the next retry attempt. Strategies are +/// naturally stateful. For instance, they may track the number of invocations or the previously +/// returned duration to calculate the next delay. 
+/// +/// ## Example +/// +/// ```swift +/// var strategy = Backoff.exponential(factor: 2, initial: .milliseconds(100)) +/// strategy.nextDuration() // 100ms +/// strategy.nextDuration() // 200ms +/// strategy.nextDuration() // 400ms +/// ``` +@available(AsyncAlgorithms 1.1, *) +public protocol BackoffStrategy { + associatedtype Duration: DurationProtocol + mutating func nextDuration() -> Duration +} + +@available(AsyncAlgorithms 1.1, *) +@usableFromInline struct ConstantBackoffStrategy: BackoffStrategy { + @usableFromInline let constant: Duration + @usableFromInline init(constant: Duration) { + precondition(constant >= .zero, "Constant must be greater than or equal to 0") + self.constant = constant + } + @inlinable func nextDuration() -> Duration { + return constant + } +} + +@available(AsyncAlgorithms 1.1, *) +@usableFromInline struct LinearBackoffStrategy: BackoffStrategy { + @usableFromInline var current: Duration + @usableFromInline let increment: Duration + @usableFromInline var hasOverflown = false + @usableFromInline init(increment: Duration, initial: Duration) { + precondition(initial >= .zero, "Initial must be greater than or equal to 0") + precondition(increment >= .zero, "Increment must be greater than or equal to 0") + self.current = initial + self.increment = increment + } + @inlinable mutating func nextDuration() -> Duration { + if hasOverflown { + return Duration(attoseconds: .max) + } else { + let (next, hasOverflown) = current.attoseconds.addingReportingOverflow(increment.attoseconds) + if hasOverflown { + self.hasOverflown = true + return nextDuration() + } else { + defer { current = Duration(attoseconds: next) } + return current + } + } + } +} + +@available(AsyncAlgorithms 1.1, *) +@usableFromInline struct ExponentialBackoffStrategy: BackoffStrategy { + @usableFromInline var current: Duration + @usableFromInline let factor: Int128 + @usableFromInline var hasOverflown = false + @usableFromInline init(factor: Int128, initial: Duration) { + precondition(initial >= .zero, "Initial must be greater than or equal to 0") + self.current = initial + self.factor = factor + } + @inlinable mutating func nextDuration() -> Duration { + if hasOverflown { + return Duration(attoseconds: .max) + } else { + let (next, hasOverflown) = current.attoseconds.multipliedReportingOverflow(by: factor) + if hasOverflown { + self.hasOverflown = true + return nextDuration() + } else { + defer { current = Duration(attoseconds: next) } + return current + } + } + } +} + +@available(AsyncAlgorithms 1.1, *) +@usableFromInline struct MinimumBackoffStrategy: BackoffStrategy { + @usableFromInline var base: Base + @usableFromInline let minimum: Base.Duration + @usableFromInline init(base: Base, minimum: Base.Duration) { + self.base = base + self.minimum = minimum + } + @inlinable mutating func nextDuration() -> Base.Duration { + return max(minimum, base.nextDuration()) + } +} + +@available(AsyncAlgorithms 1.1, *) +@usableFromInline struct MaximumBackoffStrategy: BackoffStrategy { + @usableFromInline var base: Base + @usableFromInline let maximum: Base.Duration + @usableFromInline init(base: Base, maximum: Base.Duration) { + self.base = base + self.maximum = maximum + } + @inlinable mutating func nextDuration() -> Base.Duration { + return min(maximum, base.nextDuration()) + } +} + +@available(AsyncAlgorithms 1.1, *) +@usableFromInline struct FullJitterBackoffStrategy: BackoffStrategy where Base.Duration == Swift.Duration { + @usableFromInline var base: Base + @usableFromInline var generator: RNG + 
@usableFromInline init(base: Base, generator: RNG) { + self.base = base + self.generator = generator + } + @inlinable mutating func nextDuration() -> Base.Duration { + return .init(attoseconds: Int128.random(in: 0...base.nextDuration().attoseconds, using: &generator)) + } +} + +@available(AsyncAlgorithms 1.1, *) +@usableFromInline struct EqualJitterBackoffStrategy: BackoffStrategy where Base.Duration == Swift.Duration { + @usableFromInline var base: Base + @usableFromInline var generator: RNG + @usableFromInline init(base: Base, generator: RNG) { + self.base = base + self.generator = generator + } + @inlinable mutating func nextDuration() -> Base.Duration { + let base = base.nextDuration() + return .init(attoseconds: Int128.random(in: (base / 2).attoseconds...base.attoseconds, using: &generator)) + } +} + +@available(AsyncAlgorithms 1.1, *) +public enum Backoff { + /// Creates a constant backoff strategy that always returns the same delay. + /// + /// Formula: `f(n) = constant` + /// + /// - Precondition: `constant` must be greater than or equal to zero. + /// + /// - Parameter constant: The fixed duration to wait between retry attempts. + /// - Returns: A backoff strategy that always returns the constant duration. + @inlinable public static func constant(_ constant: Duration) -> some BackoffStrategy { + return ConstantBackoffStrategy(constant: constant) + } + + /// Creates a constant backoff strategy that always returns the same delay. + /// + /// Formula: `f(n) = constant` + /// + /// - Precondition: `constant` must be greater than or equal to zero. + /// + /// - Parameter constant: The fixed duration to wait between retry attempts. + /// - Returns: A backoff strategy that always returns the constant duration. + /// + /// ## Example + /// + /// ```swift + /// var backoff = Backoff.constant(.milliseconds(100)) + /// backoff.nextDuration() // 100ms + /// backoff.nextDuration() // 100ms + /// ``` + @inlinable public static func constant(_ constant: Duration) -> some BackoffStrategy { + return ConstantBackoffStrategy(constant: constant) + } + + /// Creates a linear backoff strategy where delays increase by a fixed increment. + /// + /// Formula: `f(n) = initial + increment * n` + /// + /// - Precondition: `initial` and `increment` must be greater than or equal to zero. + /// + /// - Parameters: + /// - increment: The amount to increase the delay by on each attempt. + /// - initial: The initial delay for the first retry attempt. + /// - Returns: A backoff strategy with linearly increasing delays. + /// + /// ## Example + /// + /// ```swift + /// var backoff = Backoff.linear(increment: .milliseconds(100), initial: .milliseconds(100)) + /// backoff.nextDuration() // 100ms + /// backoff.nextDuration() // 200ms + /// backoff.nextDuration() // 300ms + /// ``` + @inlinable public static func linear(increment: Duration, initial: Duration) -> some BackoffStrategy { + return LinearBackoffStrategy(increment: increment, initial: initial) + } + + /// Creates an exponential backoff strategy where delays grow exponentially. + /// + /// Formula: `f(n) = initial * factor^n` + /// + /// - Precondition: `initial` must be greater than or equal to zero. + /// + /// - Parameters: + /// - factor: The multiplication factor for each retry attempt. + /// - initial: The initial delay for the first retry attempt. + /// - Returns: A backoff strategy with exponentially increasing delays. 
+ /// + /// ## Example + /// + /// ```swift + /// var backoff = Backoff.exponential(factor: 2, initial: .milliseconds(100)) + /// backoff.nextDuration() // 100ms + /// backoff.nextDuration() // 200ms + /// backoff.nextDuration() // 400ms + /// ``` + @inlinable public static func exponential(factor: Int128, initial: Duration) -> some BackoffStrategy { + return ExponentialBackoffStrategy(factor: factor, initial: initial) + } +} + +@available(AsyncAlgorithms 1.1, *) +extension BackoffStrategy { + /// Applies a minimum duration constraint to this backoff strategy. + /// + /// Formula: `f(n) = max(minimum, g(n))` where `g(n)` is the base strategy + /// + /// This modifier ensures that no delay returned by the strategy is less than + /// the specified minimum duration. + /// + /// - Parameter minimum: The minimum duration to enforce. + /// - Returns: A backoff strategy that never returns delays shorter than the minimum. + /// + /// ## Example + /// + /// ```swift + /// var backoff = Backoff + /// .exponential(factor: 2, initial: .milliseconds(100)) + /// .minimum(.milliseconds(200)) + /// backoff.nextDuration() // 200ms (enforced minimum) + /// ``` + @inlinable public func minimum(_ minimum: Duration) -> some BackoffStrategy { + return MinimumBackoffStrategy(base: self, minimum: minimum) + } + + /// Applies a maximum duration constraint to this backoff strategy. + /// + /// Formula: `f(n) = min(maximum, g(n))` where `g(n)` is the base strategy + /// + /// This modifier ensures that no delay returned by the strategy exceeds + /// the specified maximum duration, effectively capping exponential growth. + /// + /// - Parameter maximum: The maximum duration to enforce. + /// - Returns: A backoff strategy that never returns delays longer than the maximum. + /// + /// ## Example + /// + /// ```swift + /// var backoff = Backoff + /// .exponential(factor: 2, initial: .milliseconds(100)) + /// .maximum(.seconds(5)) + /// // Delays will cap at 5 seconds instead of growing indefinitely + /// ``` + @inlinable public func maximum(_ maximum: Duration) -> some BackoffStrategy { + return MaximumBackoffStrategy(base: self, maximum: maximum) + } + + /// Applies full jitter to this backoff strategy. + /// + /// Formula: `f(n) = random(0, g(n))` where `g(n)` is the base strategy + /// + /// Jitter prevents the thundering herd problem where multiple clients retry + /// simultaneously, reducing server load spikes and improving system stability. + /// + /// - Parameter generator: The random number generator to use. Defaults to `SystemRandomNumberGenerator()`. + /// - Returns: A backoff strategy with full jitter applied. + @inlinable public func fullJitter(using generator: RNG = SystemRandomNumberGenerator()) -> some BackoffStrategy where Duration == Swift.Duration { + return FullJitterBackoffStrategy(base: self, generator: generator) + } + + /// Applies equal jitter to this backoff strategy. + /// + /// Formula: `f(n) = random(g(n) / 2, g(n))` where `g(n)` is the base strategy + /// + /// Jitter prevents the thundering herd problem where multiple clients retry + /// simultaneously, reducing server load spikes and improving system stability. + /// + /// - Parameter generator: The random number generator to use. Defaults to `SystemRandomNumberGenerator()`. + /// - Returns: A backoff strategy with equal jitter applied. 
+ @inlinable public func equalJitter(using generator: RNG = SystemRandomNumberGenerator()) -> some BackoffStrategy where Duration == Swift.Duration { + return EqualJitterBackoffStrategy(base: self, generator: generator) + } +} +#endif diff --git a/Sources/AsyncAlgorithms/Retry/Retry.swift b/Sources/AsyncAlgorithms/Retry/Retry.swift new file mode 100644 index 00000000..94af2f67 --- /dev/null +++ b/Sources/AsyncAlgorithms/Retry/Retry.swift @@ -0,0 +1,167 @@ +#if compiler(>=6.2) +@available(AsyncAlgorithms 1.1, *) +public struct RetryAction { + @usableFromInline enum Action { + case backoff(Duration) + case stop + } + @usableFromInline let action: Action + @usableFromInline init(action: Action) { + self.action = action + } + + /// Indicates that retrying should stop immediately and the error should be rethrown. + @inlinable public static var stop: Self { + return .init(action: .stop) + } + + /// Indicates that retrying should continue after waiting for the specified duration. + /// + /// - Parameter duration: The duration to wait before the next retry attempt. + @inlinable public static func backoff(_ duration: Duration) -> Self { + return .init(action: .backoff(duration)) + } +} + +/// Executes an asynchronous operation with retry logic and customizable backoff strategies. +/// +/// This function executes an asynchronous operation up to a specified number of attempts, +/// with customizable delays and error-based retry decisions between attempts. +/// +/// The retry logic follows this sequence: +/// 1. Execute the operation +/// 2. If successful, return the result +/// 3. If failed and this was not the final attempt: +/// - Call the strategy closure with the error +/// - If the strategy returns `.stop`, rethrow the error immediately +/// - If the strategy returns `.backoff`, suspend for the given duration +/// - Return to step 1 +/// 4. If failed on the final attempt, rethrow the error without consulting the strategy +/// +/// Given this sequence, there are four termination conditions (when retrying will be stopped): +/// - The operation completes without throwing an error +/// - The operation has been attempted `maxAttempts` times +/// - The strategy closure returns `.stop` +/// - The clock throws +/// +/// ## Cancellation +/// +/// `retry` does not introduce special cancellation handling. If your code cooperatively +/// cancels by throwing, ensure your strategy returns `.stop` for that error. Otherwise, +/// retries continue unless the clock throws on cancellation (which, at the time of writing, +/// both `ContinuousClock` and `SuspendingClock` do). +/// +/// - Precondition: `maxAttempts` must be greater than 0. +/// +/// - Parameters: +/// - maxAttempts: The maximum number of attempts to make. +/// - tolerance: The tolerance for the sleep operation between retries. +/// - clock: The clock to use for timing delays between retries. +/// - isolation: The actor isolation to maintain during execution. +/// - operation: The asynchronous operation to retry. +/// - strategy: A closure that determines the retry action based on the error. +/// Defaults to immediate retry with no delay. +/// - Returns: The result of the successful operation. +/// - Throws: The error from the operation if all retry attempts fail or if the strategy returns `.stop`. 
+///
+/// ## Example
+///
+/// ```swift
+/// var backoff = Backoff.exponential(factor: 2, initial: .milliseconds(100))
+/// let result = try await retry(maxAttempts: 3, clock: ContinuousClock()) {
+///   try await someNetworkOperation()
+/// } strategy: { error in
+///   return .backoff(backoff.nextDuration())
+/// }
+/// ```
+@available(AsyncAlgorithms 1.1, *)
+@inlinable nonisolated(nonsending) public func retry<Result, ClockType, ErrorType>(
+  maxAttempts: Int,
+  tolerance: ClockType.Instant.Duration? = nil,
+  clock: ClockType,
+  operation: () async throws(ErrorType) -> Result,
+  strategy: (ErrorType) -> RetryAction<ClockType.Instant.Duration> = { _ in .backoff(.zero) }
+) async throws -> Result where ClockType: Clock, ErrorType: Error {
+  precondition(maxAttempts > 0, "Must have at least one attempt")
+  for _ in 0..<(maxAttempts - 1) {
+    do {
+      return try await operation()
+    } catch {
+      switch strategy(error).action {
+      case .stop:
+        throw error
+      case .backoff(let duration):
+        try await clock.sleep(for: duration, tolerance: tolerance)
+      }
+    }
+  }
+  return try await operation()
+}
+
+/// Executes an asynchronous operation with retry logic, using a `ContinuousClock` to time delays between attempts.
+///
+/// This overload behaves like `retry(maxAttempts:tolerance:clock:operation:strategy:)` with a `ContinuousClock`.
+@available(AsyncAlgorithms 1.1, *)
+@inlinable nonisolated(nonsending) public func retry<Result, ErrorType>(
+  maxAttempts: Int,
+  tolerance: ContinuousClock.Instant.Duration? = nil,
+  operation: () async throws(ErrorType) -> Result,
+  strategy: (ErrorType) -> RetryAction<ContinuousClock.Instant.Duration> = { _ in .backoff(.zero) }
+) async throws -> Result where ErrorType: Error {
+  return try await retry(
+    maxAttempts: maxAttempts,
+    tolerance: tolerance,
+    clock: ContinuousClock(),
+    operation: operation,
+    strategy: strategy
+  )
+}
+#endif
diff --git a/Tests/AsyncAlgorithmsTests/Support/SplitMix64.swift b/Tests/AsyncAlgorithmsTests/Support/SplitMix64.swift
new file mode 100644
index 00000000..4b675500
--- /dev/null
+++ b/Tests/AsyncAlgorithmsTests/Support/SplitMix64.swift
@@ -0,0 +1,16 @@
+// Taken from: https://github.com/swiftlang/swift/blob/main/benchmark/utils/TestsUtils.swift#L257-L271
+public struct SplitMix64: RandomNumberGenerator {
+  private var state: UInt64
+
+  public init(seed: UInt64) {
+    self.state = seed
+  }
+
+  public mutating func next() -> UInt64 {
+    self.state &+= 0x9e37_79b9_7f4a_7c15
+    var z: UInt64 = self.state
+    z = (z ^ (z &>> 30)) &* 0xbf58_476d_1ce4_e5b9
+    z = (z ^ (z &>> 27)) &* 0x94d0_49bb_1331_11eb
+    return z ^ (z &>> 31)
+  }
+}
diff --git a/Tests/AsyncAlgorithmsTests/TestBackoff.swift b/Tests/AsyncAlgorithmsTests/TestBackoff.swift
new file mode 100644
index 00000000..5e6b1d09
--- /dev/null
+++ b/Tests/AsyncAlgorithmsTests/TestBackoff.swift
@@ -0,0 +1,118 @@
+import AsyncAlgorithms
+import Testing
+
+@Suite struct BackoffTests {
+
+  @available(AsyncAlgorithms 1.1, *)
+  @Test func overflowSafety() {
+    var strategy = Backoff.exponential(factor: 2, initial: .seconds(5)).maximum(.seconds(120))
+    for _ in 0..<100 {
+      _ = strategy.nextDuration()
+    }
+  }
+
+  @available(AsyncAlgorithms 1.1, *)
+  @Test func constantBackoff() {
+    var strategy = Backoff.constant(.milliseconds(5))
+    #expect(strategy.nextDuration() == .milliseconds(5))
+    #expect(strategy.nextDuration() == .milliseconds(5))
+  }
+
+  @available(AsyncAlgorithms 1.1, *)
+  @Test func linearBackoff() {
+    var strategy = Backoff.linear(increment: .milliseconds(2), initial: .milliseconds(1))
+    #expect(strategy.nextDuration() == .milliseconds(1))
+    #expect(strategy.nextDuration() == .milliseconds(3))
+    #expect(strategy.nextDuration() == .milliseconds(5))
+    #expect(strategy.nextDuration() == .milliseconds(7))
+  }
+
+  @available(AsyncAlgorithms 1.1, *)
+  @Test func exponentialBackoff() {
+    var strategy = Backoff.exponential(factor: 2, initial: .milliseconds(1))
+    #expect(strategy.nextDuration() == .milliseconds(1))
+    #expect(strategy.nextDuration() == .milliseconds(2))
+    #expect(strategy.nextDuration() == .milliseconds(4))
+    #expect(strategy.nextDuration() == .milliseconds(8))
+  }
+
+  @available(AsyncAlgorithms 1.1, *)
+  @Test func fullJitter() {
+    var strategy =
Backoff.constant(.milliseconds(100)).fullJitter(using: SplitMix64(seed: 42)) + #expect(strategy.nextDuration() == Duration(attoseconds: 15991039287692012)) // 15.99 ms + #expect(strategy.nextDuration() == Duration(attoseconds: 34419071652363758)) // 34.41 ms + #expect(strategy.nextDuration() == Duration(attoseconds: 86822807654653238)) // 86.82 ms + #expect(strategy.nextDuration() == Duration(attoseconds: 80063187671350344)) // 80.06 ms + } + + @available(AsyncAlgorithms 1.1, *) + @Test func equalJitter() { + var strategy = Backoff.constant(.milliseconds(100)).equalJitter(using: SplitMix64(seed: 42)) + #expect(strategy.nextDuration() == Duration(attoseconds: 57995519643846006)) // 57.99 ms + #expect(strategy.nextDuration() == Duration(attoseconds: 67209535826181879)) // 67.20 ms + #expect(strategy.nextDuration() == Duration(attoseconds: 93411403827326619)) // 93.41 ms + #expect(strategy.nextDuration() == Duration(attoseconds: 90031593835675172)) // 90.03 ms + } + + @available(AsyncAlgorithms 1.1, *) + @Test func minimum() { + var strategy = Backoff.exponential(factor: 2, initial: .milliseconds(1)).minimum(.milliseconds(2)) + #expect(strategy.nextDuration() == .milliseconds(2)) // 1 clamped to min 2 + #expect(strategy.nextDuration() == .milliseconds(2)) // 2 unchanged + #expect(strategy.nextDuration() == .milliseconds(4)) // 4 unchanged + #expect(strategy.nextDuration() == .milliseconds(8)) // 8 unchanged + } + + @available(AsyncAlgorithms 1.1, *) + @Test func maximum() { + var strategy = Backoff.exponential(factor: 2, initial: .milliseconds(1)).maximum(.milliseconds(5)) + #expect(strategy.nextDuration() == .milliseconds(1)) // 1 unchanged + #expect(strategy.nextDuration() == .milliseconds(2)) // 2 unchanged + #expect(strategy.nextDuration() == .milliseconds(4)) // 4 unchanged + #expect(strategy.nextDuration() == .milliseconds(5)) // 8 unchanged clamped to max 5 + } + + #if os(macOS) || (os(iOS) && targetEnvironment(macCatalyst)) || os(Linux) || os(FreeBSD) || os(OpenBSD) || os(Windows) + @available(AsyncAlgorithms 1.1, *) + @Test func constantPrecondition() async { + await #expect(processExitsWith: .success) { + _ = Backoff.constant(.milliseconds(1)) + } + await #expect(processExitsWith: .failure) { + _ = Backoff.constant(.milliseconds(-1)) + } + } + + @available(AsyncAlgorithms 1.1, *) + @Test func linearPrecondition() async { + await #expect(processExitsWith: .success) { + _ = Backoff.linear(increment: .milliseconds(1), initial: .milliseconds(1)) + } + await #expect(processExitsWith: .failure) { + _ = Backoff.linear(increment: .milliseconds(1), initial: .milliseconds(-1)) + } + await #expect(processExitsWith: .failure) { + _ = Backoff.linear(increment: .milliseconds(-1), initial: .milliseconds(1)) + } + await #expect(processExitsWith: .failure) { + _ = Backoff.linear(increment: .milliseconds(-1), initial: .milliseconds(-1)) + } + } + + @available(AsyncAlgorithms 1.1, *) + @Test func exponentialPrecondition() async { + await #expect(processExitsWith: .success) { + _ = Backoff.exponential(factor: 1, initial: .milliseconds(1)) + } + await #expect(processExitsWith: .failure) { + _ = Backoff.exponential(factor: 1, initial: .milliseconds(-1)) + } + await #expect(processExitsWith: .success) { + _ = Backoff.exponential(factor: -1, initial: .milliseconds(1)) + } + await #expect(processExitsWith: .failure) { + _ = Backoff.exponential(factor: -1, initial: .milliseconds(-1)) + } + } + #endif +} diff --git a/Tests/AsyncAlgorithmsTests/TestRetry.swift b/Tests/AsyncAlgorithmsTests/TestRetry.swift new 
file mode 100644 index 00000000..6a4a6e2c --- /dev/null +++ b/Tests/AsyncAlgorithmsTests/TestRetry.swift @@ -0,0 +1,170 @@ +@testable import AsyncAlgorithms +import Testing + +@Suite struct RetryTests { + + @available(AsyncAlgorithms 1.1, *) + @Test func singleAttempt() async throws { + var operationAttempts = 0 + var strategyAttempts = 0 + await #expect(throws: Failure.self) { + try await retry(maxAttempts: 1) { + operationAttempts += 1 + throw Failure() + } strategy: { _ in + strategyAttempts += 1 + return .backoff(.zero) + } + } + #expect(operationAttempts == 1) + #expect(strategyAttempts == 0) + } + + @available(AsyncAlgorithms 1.1, *) + @Test func customCancellation() async throws { + struct CustomCancellationError: Error {} + let task = Task { + try await retry(maxAttempts: 3) { + if Task.isCancelled { + throw CustomCancellationError() + } + throw Failure() + } strategy: { error in + if error is CustomCancellationError { + return .stop + } else { + return .backoff(.zero) + } + } + } + task.cancel() + await #expect(throws: CustomCancellationError.self) { + try await task.value + } + } + + @available(AsyncAlgorithms 1.1, *) + @Test func defaultCancellation() async throws { + let task = Task { + try await retry(maxAttempts: 3) { + throw Failure() + } + } + task.cancel() + await #expect(throws: CancellationError.self) { + try await task.value + } + } + + @available(AsyncAlgorithms 1.1, *) + @Test func successOnFirstAttempt() async throws { + func doesNotActuallyThrow() throws { } + var operationAttempts = 0 + var strategyAttempts = 0 + try await retry(maxAttempts: 3) { + operationAttempts += 1 + try doesNotActuallyThrow() + } strategy: { _ in + strategyAttempts += 1 + return .backoff(.zero) + } + #expect(operationAttempts == 1) + #expect(strategyAttempts == 0) + } + + @available(AsyncAlgorithms 1.1, *) + @Test func successOnSecondAttempt() async throws { + var operationAttempts = 0 + var strategyAttempts = 0 + try await retry(maxAttempts: 3) { + operationAttempts += 1 + if operationAttempts == 1 { + throw Failure() + } + } strategy: { _ in + strategyAttempts += 1 + return .backoff(.zero) + } + #expect(operationAttempts == 2) + #expect(strategyAttempts == 1) + } + + @available(AsyncAlgorithms 1.1, *) + @Test func maxAttemptsExceeded() async throws { + var operationAttempts = 0 + var strategyAttempts = 0 + await #expect(throws: Failure.self) { + try await retry(maxAttempts: 3) { + operationAttempts += 1 + throw Failure() + } strategy: { _ in + strategyAttempts += 1 + return .backoff(.zero) + } + } + #expect(operationAttempts == 3) + #expect(strategyAttempts == 2) + } + + @available(AsyncAlgorithms 1.1, *) + @Test func nonRetryableError() async throws { + struct RetryableError: Error {} + struct NonRetryableError: Error {} + var operationAttempts = 0 + var strategyAttempts = 0 + await #expect(throws: NonRetryableError.self) { + try await retry(maxAttempts: 5) { + operationAttempts += 1 + if operationAttempts == 2 { + throw NonRetryableError() + } + throw RetryableError() + } strategy: { error in + strategyAttempts += 1 + if error is NonRetryableError { + return .stop + } + return .backoff(.zero) + } + } + #expect(operationAttempts == 2) + #expect(strategyAttempts == 2) + } + + @available(AsyncAlgorithms 1.1, *) + @MainActor @Test func customClock() async throws { + let clock = ManualClock() + let (stream, continuation) = AsyncStream.makeStream() + let operationAttempts = ManagedCriticalState(0) + let task = Task { @MainActor in + try await retry(maxAttempts: 3, clock: clock) { + 
operationAttempts.withCriticalRegion { $0 += 1 } + continuation.yield() + throw Failure() + } strategy: { _ in + return .backoff(.steps(1)) + } + } + var iterator = stream.makeAsyncIterator() + _ = await iterator.next()! + #expect(operationAttempts.withCriticalRegion { $0 } == 1) + clock.advance() + _ = await iterator.next()! + #expect(operationAttempts.withCriticalRegion { $0 } == 2) + clock.advance() + _ = await iterator.next()! + #expect(operationAttempts.withCriticalRegion { $0 } == 3) + await #expect(throws: Failure.self) { + try await task.value + } + } + + #if os(macOS) || (os(iOS) && targetEnvironment(macCatalyst)) || os(Linux) || os(FreeBSD) || os(OpenBSD) || os(Windows) + @available(AsyncAlgorithms 1.1, *) + @Test func zeroAttempts() async { + await #expect(processExitsWith: .failure) { + try await retry(maxAttempts: 0) { } + } + } + #endif +}