[FEATURE] Prompt caching support for all models #1140

@dbschmigelski

Description

Problem Statement

This is a tracking ticket, similar to "Prompt caching support for LiteLLM", but covering all models.

As a user, I would like to leverage the SystemContent mechanism to apply prompt caching across all models.

Proposed Solution

No response

Use Case

To improve latency and reduce cost, Strands should support prompt caching. As an abstraction layer, Strands should expose this in a provider-agnostic way via the SystemContentBlock, so that the same system prompt definition works regardless of the underlying model provider.
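As an illustration of the idea, a provider-agnostic system prompt could be expressed as a list of content blocks with an explicit cache-point marker, modeled on the Bedrock Converse API's `cachePoint` blocks. This is a hypothetical sketch using plain dicts; the block shapes and the `build_system_content` helper are assumptions for illustration, not the Strands API.

```python
def build_system_content(static_prompt: str, dynamic_prompt: str) -> list[dict]:
    """Place a cache point after the large static prefix so that providers
    supporting prompt caching can reuse it across requests.

    Hypothetical block shapes, modeled on Bedrock Converse cachePoint blocks.
    """
    return [
        # Large, stable instructions: everything up to the cache point
        # is eligible for caching by the provider.
        {"text": static_prompt},
        # Marker block: "cache everything above this point".
        {"cachePoint": {"type": "default"}},
        # Per-request context placed after the marker, so it is not cached.
        {"text": dynamic_prompt},
    ]

blocks = build_system_content(
    "You are a helpful assistant. <long, stable instructions...>",
    "The current user is Alice.",
)
```

A model-provider adapter could then translate the `cachePoint` marker into each provider's native caching mechanism, or silently drop it for providers without caching support, keeping the user-facing prompt definition unchanged.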

Alternative Solutions

No response

Additional Context

No response

Labels

enhancement (New feature or request)