
[pull] main from danny-avila:main#55

Merged
pull[bot] merged 9 commits into innFactory:main from danny-avila:main
Mar 20, 2026

Conversation


@pull pull bot commented Mar 20, 2026

See Commits and Changes for more details.


Created by pull[bot] (v2.0.0-alpha.4)

Can you help keep this open source service alive? 💖 Please sponsor : )

feat: summarization node

This commit introduces several enhancements related to summarization within the system. Key changes include the addition of new events for summarization processes, such as ON_SUMMARIZE_START, ON_SUMMARIZE_DELTA, and ON_SUMMARIZE_COMPLETE. The HandlerRegistry and ModelEndHandler classes have been updated to handle these events appropriately, allowing for better management of summarization tasks.

Additionally, the AgentContext class has been modified to support summarization settings, including enabling summarization and configuring related parameters. The Graph class has been updated to integrate summarization nodes and manage the flow of messages during summarization.

New tests have been added to ensure the correct functionality of summarization features, including preserving summarization settings across resets and verifying the behavior of summary-related events. Overall, these changes aim to improve the system's ability to generate and manage conversation summaries effectively.
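The event flow described above can be sketched as follows. The event names come from the commit message; the handler and registry shapes are illustrative assumptions, not the actual LibreChat implementation:

```typescript
// Event names from the commit message; handler shapes are assumptions.
const GraphEvents = {
  ON_SUMMARIZE_START: 'on_summarize_start',
  ON_SUMMARIZE_DELTA: 'on_summarize_delta',
  ON_SUMMARIZE_COMPLETE: 'on_summarize_complete',
} as const;

type SummarizeEvent = { event: string; data?: { delta?: string; error?: string } };

// A minimal registry mapping event names to handlers.
class HandlerRegistry {
  private handlers = new Map<string, (e: SummarizeEvent) => void>();
  register(event: string, handler: (e: SummarizeEvent) => void): void {
    this.handlers.set(event, handler);
  }
  dispatch(e: SummarizeEvent): void {
    this.handlers.get(e.event)?.(e);
  }
}

// Usage: accumulate streamed summary deltas into a single summary string.
let summary = '';
const registry = new HandlerRegistry();
registry.register(GraphEvents.ON_SUMMARIZE_DELTA, (e) => {
  summary += e.data?.delta ?? '';
});
registry.dispatch({ event: GraphEvents.ON_SUMMARIZE_DELTA, data: { delta: 'Hello ' } });
registry.dispatch({ event: GraphEvents.ON_SUMMARIZE_DELTA, data: { delta: 'world' } });
```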

feat: enhance summarization error handling

This commit improves error handling in the summarization process by adding specific error messages for failed summarization attempts and empty outputs. The SummarizeCompleteEvent interface is updated to include an optional error field, allowing for better tracking of issues during summarization. These changes aim to provide clearer feedback and improve the robustness of the summarization functionality.

feat: enhance summarization logic and message formatting

This commit improves the summarization logic by refining the conditions under which summarization is triggered, ensuring that it only fires when appropriate runtime data is available. Additionally, a new function, formatMessageForSummary, is introduced to format messages for summarization input, enhancing the readability of structured content. The changes aim to provide clearer and more accurate summarization outputs while preserving important context from previous summaries.

feat: enhance summarization cycle management and deduplication

This commit introduces several improvements to the summarization process within the AgentContext. New properties are added to track the number of summarization cycles, the last message count at which summarization was triggered, and the count of summarizations during the current run. Additionally, a deduplication mechanism is implemented through a hash function to prevent redundant summarizations of the same message set. These changes aim to optimize the summarization logic, enhance performance, and prevent infinite loops during summarization cycles.
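The deduplication mechanism might look like the following sketch: hash the message set and skip summarization when the hash matches the previous cycle. The hash algorithm (FNV-1a) and function name are assumptions; the commit only says a hash is used:

```typescript
// Illustrative FNV-1a hash over joined message contents, used to detect
// whether the same message set was already summarized.
function hashMessageSet(messages: string[]): string {
  let hash = 0x811c9dc5;
  for (const ch of messages.join('\u0000')) {
    hash ^= ch.charCodeAt(0);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash.toString(16);
}

const a = hashMessageSet(['hi', 'there']);
const b = hashMessageSet(['hi', 'there']);
const c = hashMessageSet(['hi', 'world']);
// Identical message sets hash equally; different sets (almost always) differ,
// so a repeated hash signals a redundant summarization cycle.
```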

feat: integrate cross-run summary handling in AgentContext and message formatting

This commit enhances the summarization process by introducing an initialSummary property in AgentContext, allowing for the restoration of cross-run summaries before system message initialization. The formatAgentMessages function is updated to return summary metadata instead of creating a SystemMessage, ensuring that the agent's own system message can include the summary. Additionally, tests are updated to validate the new summary handling, improving the overall management of conversation summaries across runs.

feat: summarization message formatting and character limits

This commit introduces several constants and functions to improve the summarization process. New character limits are defined for tool arguments, tool results, and message content to ensure concise representation. The formatMessageForSummary function is updated to truncate messages appropriately, and a new function, formatMessagesForSummarization, is added to manage character budgets across multiple messages. These changes aim to enhance the clarity and efficiency of the summarization output while preserving essential information.
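A minimal sketch of per-message character budgeting, assuming an even split of the total budget capped by a per-message limit (the constant value and exact division strategy here are assumptions, not the commit's actual numbers):

```typescript
// Assumed per-message cap; the real constants differ per content type
// (tool arguments, tool results, message content).
const MAX_CONTENT_CHARS = 500;

function formatMessageForSummary(text: string, limit: number): string {
  return text.length <= limit ? text : text.slice(0, limit - 1) + '…';
}

function formatMessagesForSummarization(messages: string[], totalBudget: number): string[] {
  if (messages.length === 0) return [];
  // Split the budget evenly, capped by the per-message limit.
  const perMessage = Math.min(MAX_CONTENT_CHARS, Math.floor(totalBudget / messages.length));
  return messages.map((m) => formatMessageForSummary(m, perMessage));
}

const out = formatMessagesForSummarization(['a'.repeat(1000), 'short'], 400);
// Each formatted message stays within its share of the 400-char budget.
```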

feat: enhance summarization handling and introduce streaming support

This commit improves the summarization process by implementing a new streaming mechanism for summarization nodes, allowing real-time updates during summarization. The `handleSummarizeStream` function is added to manage streaming events and dispatch updates to clients. Additionally, the summarization node configuration is enriched with metadata for better tracing. The default summarization prompt is refined for clarity, and tests are introduced to validate the new streaming functionality and ensure proper accumulation of summary content. These changes aim to enhance the user experience and performance of the summarization feature.

feat: implement smart truncation and metadata summary for improved summarization

This commit introduces a new `smartTruncate` function that enhances text truncation by preserving both the beginning and end of the text, ensuring meaningful context is retained. Additionally, a `generateMetadataStub` function is added to provide a fallback summary when LLM attempts fail, preserving essential metadata about the messages. The summarization parameters are updated to include `maxSummaryTokens`, allowing for better control over output size. Tests are also updated to validate these new features and ensure robust error handling during summarization processes.
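The head+tail idea can be sketched as below. The 60/40 split and marker string are assumptions; the commit only states that both the beginning and end of the text are preserved:

```typescript
// Keep the head and tail of oversized text, dropping the middle,
// so both the setup and the conclusion survive truncation.
function smartTruncate(text: string, maxChars: number, marker = '\n…[truncated]…\n'): string {
  if (text.length <= maxChars) return text;
  const keep = maxChars - marker.length;
  const head = Math.ceil(keep * 0.6); // assumed split: slightly favor the beginning
  const tail = keep - head;
  return text.slice(0, head) + marker + text.slice(text.length - tail);
}

const long = 'START ' + 'x'.repeat(500) + ' END';
const short = smartTruncate(long, 100);
// Both the opening and closing context are retained within the limit.
```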

feat: implement context pruning and overflow recovery mechanisms

This commit introduces position-based context pruning for tool results, enhancing the management of message content by soft-trimming and hard-clearing based on message age. A new configuration for context pruning is added, allowing customization of pruning behavior. Additionally, overflow recovery mechanisms are implemented to handle context overflow errors during tool calls, enabling retries with truncated results. The AgentContext and related classes are updated to support new properties for minimum reserved tokens and maximum tool result characters, improving overall summarization and tool result handling. Tests are included to validate these new features and ensure robust error handling.

feat: implement fallback provider mechanism and enhance summarization prompts

This commit introduces a new method, `tryFallbackProviders`, to streamline the handling of fallback providers during invocation attempts, improving error recovery. Additionally, the default summarization prompt is updated to a structured format that ensures comprehensive context capture, while a new prompt for updating existing summaries is added. Tests are included to validate the new prompts and their functionality, enhancing the overall robustness of the summarization process.

feat: enhance token management and pruning logic in AgentContext

This commit introduces several improvements to the token management system within the AgentContext. Key changes include the addition of a new property for tracking tool schema tokens, which allows for more precise budgeting of token usage. The update to the token map now directly reflects the real token counts without inflating the first index with instruction tokens, enhancing accuracy in token calculations. Additionally, new methods for rebuilding the token map after summarization and providing a detailed breakdown of token usage are implemented, improving diagnostics and overall management of token budgets. Tests are updated to validate these changes and ensure robust functionality.

feat: implement orphan tool block sanitization and enhance message pruning

This commit introduces the `sanitizeOrphanToolBlocks` function to clean up orphan tool_use blocks and ToolMessages in the message context, preventing structural validation errors during model invocation. The function ensures that only valid tool calls and results are retained, enhancing the integrity of the message flow. Additionally, updates to the `repairOrphanedToolMessages` function now include tracking of dropped messages, improving the overall message management during pruning. Tests are added to validate the new sanitization logic and ensure robust handling of message contexts.
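The core pairing rule can be sketched like this: a tool result is kept only if a matching tool call exists, and vice versa. The data shapes are simplified assumptions; the real function operates on LangChain message objects:

```typescript
// Simplified shapes; the actual code works on BaseMessage content blocks.
type ToolCall = { id: string; name: string };
type ToolResult = { tool_call_id: string; content: string };

function sanitizeOrphanToolBlocks(calls: ToolCall[], results: ToolResult[]) {
  const callIds = new Set(calls.map((c) => c.id));
  // Drop results whose originating tool call is missing.
  const keptResults = results.filter((r) => callIds.has(r.tool_call_id));
  const resultIds = new Set(keptResults.map((r) => r.tool_call_id));
  // Drop calls that never received a result.
  const keptCalls = calls.filter((c) => resultIds.has(c.id));
  return { keptCalls, keptResults };
}

const { keptCalls, keptResults } = sanitizeOrphanToolBlocks(
  [{ id: 'a', name: 'search' }, { id: 'b', name: 'fetch' }],
  [{ tool_call_id: 'a', content: 'ok' }, { tool_call_id: 'z', content: 'orphan' }],
);
// Only the matched call/result pair ('a') survives.
```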

feat: enhance event handling and logging for summarization and run steps

This commit introduces several improvements to the event handling and logging mechanisms within the summarization and run processes. Key changes include the addition of the `ON_SUMMARIZE_COMPLETE` event to track the completion of summarization, and the implementation of an `emitAgentLog` function for diagnostic logging across various scopes. The `dispatchRunStep` method is updated to streamline event dispatching for run steps, ensuring consistent handling of events. Additionally, the summarization prompts are refined for clarity, and tests are added to validate the new logging and event handling functionalities, enhancing observability and robustness in the system.

feat: add reserve ratio configuration for improved token management in pruning

This commit introduces a new `reserveRatio` parameter to the pruning logic, allowing for a fraction of the effective token budget to be reserved as headroom. This enhancement ensures that pruning triggers before the context fills to 100%, providing better management of token usage and compensating for approximate token counting. The default reserve ratio is set to 5% when not configured. Updates to related functions and tests are included to validate the new behavior and ensure robust functionality.
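The budget arithmetic is straightforward; a sketch using the stated 5% default (function names are assumptions):

```typescript
// Reserve a fraction of the token budget as headroom so pruning fires
// before the context is actually full. Default per the commit: 5%.
const DEFAULT_RESERVE_RATIO = 0.05;

function effectiveBudget(maxContextTokens: number, reserveRatio = DEFAULT_RESERVE_RATIO): number {
  return Math.floor(maxContextTokens * (1 - reserveRatio));
}

function shouldPrune(usedTokens: number, maxContextTokens: number): boolean {
  return usedTokens >= effectiveBudget(maxContextTokens);
}

// With a 10,000-token window, pruning triggers at 9,500 used tokens,
// compensating for approximate token counting.
```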

feat: enhance emergency truncation logic for tool results and inputs

This commit improves the emergency truncation mechanism within the StandardGraph class by implementing a more comprehensive approach to handle both ToolMessage content and AI message tool-call inputs. The new logic ensures that oversized inputs are truncated effectively using a head+tail strategy, preserving essential information while adhering to character limits. Additionally, the truncateToolInput function is introduced to facilitate this process. Updates to related functions and tests are included to validate the new behavior and ensure robust functionality.

feat: enhance ensureThinkingBlockInMessages to handle Bedrock whitespace artifacts

This commit adds a test case to ensure that AI messages are not modified when the reasoning_content is not the first block due to whitespace artifacts emitted by Bedrock. The logic in the ensureThinkingBlockInMessages function is updated to check all content blocks for thinking or reasoning types, ensuring proper handling of messages with tool calls. This enhancement improves the robustness of message processing in scenarios where unexpected whitespace may occur.

fix: ensureThinkingBlockInMessages to handle Bedrock reasoning and tool call scenarios

This commit adds multiple test cases to ensure that the `ensureThinkingBlockInMessages` function correctly handles various scenarios involving AI messages and tool calls, particularly in the context of Bedrock's reasoning models. The logic is updated to detect reasoning content in both message arrays and additional_kwargs, ensuring that follow-up tool calls in a thinking-enabled chain are not incorrectly converted. This enhancement improves the robustness of message processing and maintains the integrity of AI interactions.

fix: update ensureThinkingBlockInMessages to include RunnableConfig for enhanced logging

This commit modifies the `ensureThinkingBlockInMessages` function to accept an optional `RunnableConfig` parameter, allowing for structured agent logging. Additionally, the `StandardGraph` class is updated to pass the configuration to the function, improving the logging capabilities during message processing. These changes enhance observability and facilitate better debugging of message handling scenarios.

chore: remove unused `createSystemRunnable` in Graph

feat: implement durable summary management in AgentContext

This commit introduces a durable summary mechanism within the AgentContext class, allowing summaries to persist across reset calls. The new properties `_durableSummaryText` and `_durableSummaryTokenCount` are added to store the latest summary, ensuring that conversation context is maintained even after resets. Additionally, tests are updated to verify that the summary survives resets and token counts remain consistent. A new diagnostic script is also added to trace the lifecycle of the initial summary through the agent pipeline.

chore: remove refs from JSDoc comments

refactor: clean up event handling logic in ModelEndHandler

This commit removes unused variables and redundant checks in the ModelEndHandler class, streamlining the event handling process. The logic for tool call handling and node summarization has been simplified, enhancing code readability and maintainability.

refactor: streamline summarization event handling and enhance streaming support

This commit removes the unused `handleSummarizeStream` function from the `ChatModelStreamHandler`, simplifying the event handling logic. Additionally, it introduces a `stepId` parameter in the summarization functions to facilitate real-time dispatching of `ON_SUMMARIZE_DELTA` events during streaming. The streaming mechanism is updated to collect chunks of text and dispatch events accordingly, improving the responsiveness of the summarization output. These changes enhance code clarity and maintainability while improving the user experience during summarization.

feat: add synthetic spacing delta dispatch in summarization stages

This commit introduces a synthetic spacing delta dispatch between summarization stages to enhance visual separation of chunk summaries. The implementation checks for a valid `stepId` and configuration before dispatching a custom event, improving the user experience during the summarization process. This change aims to provide clearer visual feedback in the client interface.

chore: imports

refactor: update pruning logic to use raw maxContextTokens for better tool result handling

This commit modifies the pre-flight truncation logic in the createPruneMessages function to utilize raw maxContextTokens instead of effectiveMaxTokens. This change ensures that the truncation thresholds accurately reflect the model's capacity, preventing unnecessary truncation of small tool results that could impact post-LLM enrichment. Additionally, a new test file for summarization utilities is added, covering various truncation and summarization scenarios to enhance code coverage and reliability.

feat: enhance summarization logic with context budget checks

This commit updates the summarization logic in the StandardGraph class to include additional checks for context length and remaining context tokens. It introduces a warning log when summarization is skipped due to exceeding the context budget, improving observability and debugging capabilities. These changes aim to ensure that the summarization process is more robust and informative, enhancing the overall user experience.

feat: enhance summarization node with reasoning handling and content extraction

This commit introduces improvements to the summarization node by adding support for reasoning content extraction and refining the handling of chunk content. It includes a new `provider` parameter for chunk content extraction, ensuring that reasoning tokens are appropriately identified and excluded from the final summary. Additionally, error logging has been enhanced to capture specific error messages during summarization failures, improving debugging capabilities. These changes aim to provide a more robust and informative summarization process.

feat: update summarization structure to use content arrays and improve text handling

This commit refines the summarization logic by replacing the `text` field with a `content` array in the `SummaryContentBlock`, allowing for more flexible content management. It enhances the extraction of text from message content, ensuring that reasoning blocks are appropriately handled. Additionally, it introduces a new function to strip broken surrogates from strings, improving the robustness of text processing. These changes aim to provide a more comprehensive and reliable summarization experience.
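Stripping lone surrogates matters because an unpaired UTF-16 surrogate (which can appear when a stream is cut mid-character) breaks JSON serialization. A plausible implementation, assuming the goal is simply to drop unpaired halves:

```typescript
// Drop lone UTF-16 surrogate halves while keeping valid surrogate pairs
// (e.g. emoji) intact. Exact behavior of the real helper is assumed.
function stripBrokenSurrogates(input: string): string {
  let out = '';
  for (let i = 0; i < input.length; i++) {
    const code = input.charCodeAt(i);
    if (code >= 0xd800 && code <= 0xdbff) {
      // High surrogate: keep only when followed by a low surrogate.
      const next = input.charCodeAt(i + 1);
      if (next >= 0xdc00 && next <= 0xdfff) {
        out += input[i] + input[i + 1];
        i++;
      }
    } else if (code >= 0xdc00 && code <= 0xdfff) {
      // Lone low surrogate: drop it.
    } else {
      out += input[i];
    }
  }
  return out;
}

const clean = stripBrokenSurrogates('ok\ud800 done 😀');
// The lone high surrogate is removed; the valid emoji pair survives.
```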

refactor: remove minReserveTokens from AgentContext and enhance summarization logic

This commit removes the `minReserveTokens` property from the `AgentContext` class and its associated logic, streamlining the summarization configuration. It also updates the summarization handling to utilize content arrays for better text management and improves the overflow recovery method to return the new attempt count. Additionally, new tests are added to verify the behavior of the `shouldSkipSummarization` method, ensuring robust summarization logic. These changes aim to enhance the overall efficiency and clarity of the summarization process.

feat: enhance AgentContext and summarization event structure

This commit introduces a new `DEFAULT_RESERVE_RATIO` in the `AgentContext` to improve context token management during summarization. It updates the calculation of available tokens by incorporating the reserve ratio, ensuring more accurate context handling. Additionally, it adds an `id` field to the `SummarizeCompleteEvent` interface and modifies the event dispatching in the summarization node to include this identifier, enhancing event tracking and clarity in the summarization process.

fix: improve null safety in summarization handling

This commit enhances the null safety of the summarization logic by using optional chaining and non-null assertions. It updates the `getSummaryText` function to handle undefined summaries gracefully and ensures that summary properties are accessed safely throughout the tests. These changes aim to prevent runtime errors and improve the robustness of the summarization process.

refactor: update jest configuration and improve null safety in run processing

This commit modifies the Jest configuration to set `maxWorkers` to '50%' for better resource management during tests. Additionally, it enhances the `processStream` method in the `Run` class by renaming the `config` parameter to `callerConfig` for clarity and ensuring that the configuration is shallow-copied to prevent unintended mutations. The changes also include improvements in null safety across various scripts, ensuring more robust handling of optional properties and preventing runtime errors.
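The shallow-copy rationale can be shown in a few lines. The `callerConfig` name follows the commit; the config shape and internal mutation are illustrative assumptions:

```typescript
// Shallow-copying the caller's config means internal mutations during the
// run cannot leak back into the object the caller still holds.
type RunnableConfig = { runId?: string; tags?: string[] };

function processStream(callerConfig: RunnableConfig): RunnableConfig {
  const config = { ...callerConfig }; // shallow copy prevents unintended mutation
  config.runId = 'internal-run'; // illustrative internal mutation
  return config;
}

const caller: RunnableConfig = { runId: 'original' };
const used = processStream(caller);
// caller.runId is untouched even though the run rewrote its own copy.
```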

refactor: enhance sanitizeOrphanToolBlocks to preserve prototypes and improve tool call filtering

This commit updates the `sanitizeOrphanToolBlocks` function to exclude tool calls with IDs starting with 'srvtoolu_' from being added to the tool call IDs set. It also modifies the way messages are patched to ensure that the original prototype is preserved, enhancing the integrity of message instances. Additionally, new tests are added to verify that the function correctly retains the prototypes of `BaseMessage`, `AIMessage`, and `AIMessageChunk` instances after processing, ensuring robust handling of message types.
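The prototype-preservation part of this change can be sketched as follows; the message class here is a stand-in for LangChain's `AIMessage`, and the patch helper is an assumption about how the real code copies messages:

```typescript
// A plain object spread would lose the class prototype; copying onto
// Object.create(prototype) keeps instanceof checks and methods working.
class AIMessage {
  constructor(public content: string) {}
  isAI(): boolean { return true; }
}

function patchMessage<T extends object>(original: T, patch: Partial<T>): T {
  return Object.assign(Object.create(Object.getPrototypeOf(original)), original, patch);
}

const msg = new AIMessage('hello');
const patched = patchMessage(msg, { content: 'hello (sanitized)' });
// patched is still an AIMessage, and the original is untouched.
```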

chore: update model names in tests and scripts to reflect new versioning

This commit updates various test and script files to replace outdated model names with the latest version 'claude-sonnet-4-5-20250929' and 'claude-haiku-4-5'. Changes are made across multiple files, ensuring consistency in model references for both Anthropic and Bedrock integrations. These updates aim to maintain alignment with the current model specifications and improve the accuracy of test scenarios.

feat: add breakdown to agent context in StandardGraph

This commit enhances the StandardGraph class by adding a breakdown property to the agent context, which formats the token budget breakdown for improved clarity in context management. This change aims to provide better insights into token usage during graph operations.

refactor: improve context budget handling in summarization logic

This commit refines the summarization logic in both the StandardGraph and summarization node by enhancing context budget checks. It introduces clearer guidance when instructions exceed the context budget, ensuring that users receive actionable feedback. Additionally, it removes redundant checks and improves logging for skipped summarization scenarios, contributing to better observability and clarity in the summarization process.

test: add budget check tests for summarization node

This commit introduces new tests for the summarization node to validate behavior when instruction tokens exceed the maximum context tokens. It ensures that summarization is skipped in such cases and verifies that the process proceeds normally when instruction tokens are within limits. These tests enhance coverage and reliability of the summarization logic, contributing to better handling of context budgets.

chore: update image content check in context pruning logic

This commit modifies the `hasImageContent` function to clarify its purpose regarding messages with image content blocks. It emphasizes that messages containing images are not subject to position-based content degradation, as images cannot be meaningfully trimmed or replaced. Additionally, the import statement for `TokenCounter` has been repositioned for better organization. These changes aim to enhance code clarity and maintainability in the context pruning logic.

test: update string inclusion checks for clarity in splitStream tests

This commit modifies the string inclusion checks in the `splitStream.test.ts` file to explicitly compare the results with `true`. This change enhances code clarity and ensures that the tests accurately reflect the intended logic for checking message content. The updates contribute to improved readability and maintainability of the test suite.

fix: reset message count baseline in summarization logic

This commit resets the message count baseline in the `AgentContext` class after summarization. This change ensures that the growth check in the `shouldSkipSummarization` method compares against the post-summarization state, improving the accuracy of summarization checks and enhancing the overall reliability of the summarization process.

test: add tests for re-triggering summarization after baseline reset

This commit introduces new tests in the `AgentContext` to validate the behavior of the `shouldSkipSummarization` method after the message count baseline is reset by `setSummary`. The tests ensure that re-summarization is allowed under the correct conditions and that the per-run cap is still respected, enhancing the coverage and reliability of the summarization logic.

test: add multi-agent summarization tests for independent context handling

This commit introduces a new test suite for multi-agent summarization, validating that agents can summarize independently while sharing conversation history. It ensures that summarization triggers correctly based on context budgets and that each agent maintains its own summarization state. The tests enhance coverage and reliability of the summarization logic in multi-agent scenarios.

refactor: update message count baseline handling in summarization logic

This commit removes the reset of the message count baseline in the `setSummary` method and instead sets it in the `rebuildTokenMapAfterSummarization` method. This change ensures that the message count baseline reflects the surviving context size, preventing immediate re-summarization of the same context messages and improving the accuracy of the summarization process.

refactor: enhance summarization logic with improved message handling

This commit refines the summarization process in the StandardGraph class by introducing a clearer structure for checking message pruning and summarization triggers. It consolidates conditions for triggering summarization and adds detailed logging for both triggered and skipped summarization scenarios. Additionally, it updates the summarization node to include more context in the logging output, improving observability and clarity in the summarization workflow.

refactor: adjust summarization thresholds for tool-enabled and tool-free agents

This commit updates the `shouldSkipSummarization` method in the `AgentContext` class to implement distinct message growth thresholds based on the presence of tools. Tool-enabled agents now require a minimum growth of 8 messages before re-triggering summarization, while tool-free agents maintain a threshold of 4 messages. Additionally, the summarization logic is enhanced with new tests to validate the behavior of these thresholds, ensuring accurate summarization triggers based on agent capabilities.
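The dual thresholds reduce to a small predicate; a sketch using the stated values (8 for tool-enabled, 4 for tool-free agents), with the standalone function shape being an assumption about the real method:

```typescript
// Tool-enabled agents accumulate messages faster (call + result pairs),
// so they require more growth before re-triggering summarization.
function shouldSkipSummarization(
  messageCount: number,
  lastSummarizedCount: number,
  hasTools: boolean,
): boolean {
  const minGrowth = hasTools ? 8 : 4;
  return messageCount - lastSummarizedCount < minGrowth;
}
```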

feat: enhance summarization context handling with structured event support

This commit introduces a new `_summaryEvents` property in the `AgentContext` class to store structured events extracted from summarization output, persisting across cycles. It adds getter and setter methods for managing these events and implements an XML formatting function for output. Additionally, the summarization logic is refined to compute adaptive per-field character limits based on available budget and message count, improving the handling of tool results and message content. Tests are updated to reflect these changes, ensuring robust functionality in summarization processes.

refactor: simplify summarization logic and improve message handling

This commit refines the `shouldSkipSummarization` method in the `AgentContext` class to check for identical message counts since the last summary, allowing for clearer conditions under which summarization is triggered. It removes the previous per-run cap and adjusts the logic to focus on message count changes. Additionally, the summarization prompts in the `DEFAULT_SUMMARIZATION_PROMPT` and `DEFAULT_UPDATE_SUMMARIZATION_PROMPT` are updated for clarity. Tests are also modified to reflect these changes, ensuring accurate behavior in summarization scenarios.

feat: enhance agent context summarization with initial summary tracking

This commit introduces a new flag `_isInitialSummary` in the `AgentContext` class to differentiate between initial and mid-run summaries. It updates the summarization logic to conditionally include the summary in the system prompt based on this flag, ensuring that mid-run summaries are injected as `HumanMessage` instead. Additionally, the `createSummarizeNode` function is refined to replace existing events entirely during full compaction, preventing stale data accumulation. The changes improve the clarity and effectiveness of the summarization process across agent runs.

refactor: remove structured event handling from AgentContext and summarization logic

This commit eliminates the `_summaryEvents` property and associated methods from the `AgentContext` class, streamlining the summarization process. The `formatEventsXml` function is removed, and related event parsing logic is also deleted from the summarization workflow. Additionally, tests are updated to reflect these changes, ensuring that the summarization functionality remains robust without the structured event support.

feat: add comprehensive documentation for summarization and context management behavior

This commit introduces a new markdown file detailing the summarization and context management behavior in LibreChat. It outlines the processes for both summarization-enabled and disabled agents, including observation masking, token budget anatomy, and the full compaction process. Additionally, it clarifies the differences in behavior based on summarization settings and provides insights into the summarization pipeline, including tiered summarization strategies and fallback mechanisms. This documentation aims to enhance understanding and facilitate future development in the summarization logic.

refactor: update summarization tests and logic for clarity and accuracy

This commit refines the `shouldSkipSummarization` tests in the `AgentContext` to improve clarity in the conditions under which summarization is triggered. It modifies existing test cases to reflect changes in message count handling and introduces new tests to validate the behavior of summarization after full compaction. Additionally, it removes redundant checks and updates the summarization logic to ensure consistent behavior across different scenarios. A new test suite for masking consumed tool results is also added, enhancing coverage for message handling in summarization processes.

feat: introduce model initialization function for chat models

This commit adds a new `initializeModel` function in `src/llm/init.ts`, which serves as a centralized entry point for creating chat model instances. The function handles provider-specific configurations and tool bindings, streamlining the model initialization process across the codebase. Additionally, it updates the `Graph` and `createSummarizeNode` implementations to utilize this new function, enhancing code clarity and maintainability.

feat: enhance LLM invocation and request handling with new utilities

This commit introduces new functions in the `src/llm/invoke.ts` and `src/llm/request.ts` files to streamline the invocation of chat models and manage request configurations. The `attemptInvoke` function handles both streaming and non-streaming model invocations, while `tryFallbackProviders` attempts to use fallback providers in case of invocation failures. Additionally, the `isThinkingEnabled` and `getMaxOutputTokensKey` functions provide utility for managing provider-specific options. The changes improve the clarity and maintainability of the LLM interaction logic across the codebase.

refactor: add custom chunk processing support to attemptInvoke function

This commit enhances the `attemptInvoke` function in `src/llm/invoke.ts` by introducing an optional `onChunk` callback for custom stream processing. When provided, this callback allows users to handle stream chunks independently, replacing the default `ChatModelStreamHandler`. The changes improve flexibility in processing streaming data, enabling use cases such as summarization delta events while maintaining existing functionality for standard stream handling.

refactor: streamline summarization logic and remove unused parameters

This commit refactors the summarization logic in `src/summarization/node.ts` by removing unused functions and constants related to adaptive character limits and message formatting. It also updates the `SummarizationConfig` type in `src/types/summarize.ts` to simplify the configuration options by eliminating deprecated properties. These changes enhance code clarity and maintainability while ensuring that the summarization process remains efficient and focused.

refactor: remove overflow recovery configuration and streamline summarization logic

This commit refactors the `AgentContext` class by removing the overflow recovery configuration and related properties, simplifying the summarization logic. It introduces a new `_summaryLocation` property to track where the summary should be injected, enhancing clarity in the summarization process. Additionally, updates are made to the `StandardGraph` class to utilize the new summary handling logic, ensuring consistent behavior across agent runs.

refactor: update AgentContext token management and enhance summarization logic

This commit refactors the `AgentContext` class by renaming the `instructionTokens` property to `systemMessageTokens` for clarity and adjusting the logic to calculate total instruction overhead. It also simplifies the reset behavior and updates related tests to reflect these changes. Additionally, the summarization logic in `src/summarization/node.ts` is enhanced to improve error handling during fallback scenarios, ensuring more robust summarization processes. The `tryFallbackProviders` function is updated to support custom chunk processing, improving flexibility in handling streaming data.

refactor: update summarization tests to improve error handling and clarity

This commit modifies the `createSummarizeNode` tests to enhance error handling by changing the fallback behavior when the primary LLM call fails. The test now verifies that a metadata stub is used instead of a recovered summary. Additionally, the `AgentContext` class is updated to consistently use `systemMessageTokens` instead of `instructionTokens`, improving clarity in token management across various test scenarios.

refactor: update summarization behavior documentation for clarity and accuracy

refactor: enhance token management and summarization logic in AgentContext

This commit updates the `AgentContext` class to improve token management by introducing a new `_toolTokensCalibrated` property and refining the calculation of `instructionTokens` to include pending summary overhead. It also adjusts the logic for tool token calibration based on provider usage data, ensuring more accurate token accounting. Additionally, the `StandardGraph` class is updated to log detailed usage metrics during LLM calls, and the `createPruneMessages` function is enhanced to apply a running calibration ratio for better message token estimation. These changes aim to streamline the summarization process and improve overall clarity in token handling.

test: update AgentContext tests and token management logic

This commit modifies the `AgentContext` tests to clarify the behavior of `setSummary` and `clearSummary` methods regarding `instructionTokens`. The test descriptions are updated for better understanding, ensuring that the expected token counts reflect the summary overhead accurately. Additionally, the logic in related tests is refined to enhance clarity in token management during summarization processes, aligning with recent changes in the token accounting system.

refactor: improve caching and summarization logic in AgentContext and related components

This commit enhances the caching mechanism in the `AgentContext` class by introducing conditional cache control for summary messages based on prompt caching settings. It refines the logic for handling message arrays during summarization, ensuring that cache markers are applied correctly. Additionally, the `StandardGraph` class is updated to apply cache control only when no system runnable is present, and the `createSummarizeNode` function is modified to track existing cache markers, improving overall clarity and efficiency in the summarization process.

feat: introduce calibration ratio for improved token management across components

This commit adds a new `calibrationRatio` property to the `Run`, `AgentContext`, and `StandardGraph` classes, enhancing the token management system. The calibration ratio is now passed through various components, allowing new messages to be scaled immediately based on previous runs. Additionally, the `createPruneMessages` function is updated to utilize this ratio, ensuring more accurate message token estimation.

refactor: update calibration ratio handling and improve summarization logic

This commit refines the handling of the `calibrationRatio` in the `Run` class, ensuring it defaults to 1 and is only updated based on valid configuration input. The `createPruneMessages` function is adjusted to apply the calibration ratio conditionally, enhancing token management accuracy. Additionally, the `createSummarizeNode` function is modified to improve error logging during summarization failures, contributing to better clarity and robustness in the overall summarization process.

refactor: enhance summary message formatting in AgentContext

This commit updates the summary message construction in the `AgentContext` class to include a wrapped summary format. The new format adds context about the checkpoint and improves clarity for users by indicating that the context window has been compacted. This change ensures that the summary is presented more effectively, enhancing the overall user experience during message summarization.

refactor: run step completion handling and summary event integration

This commit updates the handling of run step completion events in the `StandardGraph` and `createSummarizeNode` functions. It introduces a new `dispatchRunStepCompleted` method to facilitate the dispatching of both tool call and summary completion events, improving the clarity and structure of event handling. Additionally, the `StepCompleted` type is extended to accommodate summary events, ensuring that the system can effectively manage different types of completion results. These changes aim to streamline the event processing workflow and enhance the overall robustness of the summarization logic.

refactor: simplify logging and remove unused cache marker tracking

This commit refines the logging in the `StandardGraph` class by simplifying the emitted log message and removing the construction of message types. In the `createSummarizeNode` function, the tracking of existing cache markers has been eliminated, streamlining the summarization process. These changes aim to enhance clarity and reduce unnecessary complexity in the codebase.

refactor: increase test timeout and adjust token limits for summarization tests

This commit updates the Jest configuration to extend the test timeout to 60 seconds, accommodating end-to-end summarization tests that interact with real APIs. It also raises the `maxTokens` parameter in several test cases so the summarization logic is exercised against larger inputs, improving the robustness and reliability of the summarization test suite.

refactor: improve event handling and cleanup logic in Run class

This commit enhances the event handling mechanism within the `Run` class by introducing a try-finally block to ensure proper cleanup of heavy data references after processing events. Additionally, a new private property `_streamResult` is added to store the result of the stream processing, improving the clarity and efficiency of the return value. These changes aim to streamline the event processing workflow and ensure better memory management during execution.

docs: update calibrationRatio documentation for persistence across runs

This commit enhances the documentation for the `calibrationRatio` property in the `RunConfig` type. It clarifies the importance of persisting the value returned by `Run.getCalibrationRatio()` after each run and passing it back for subsequent runs within the same conversation. This addition aims to improve understanding of the calibration ratio's role in maintaining the EMA across different run instances.
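The host-side persistence loop that this documentation describes looks roughly like the following. `getCalibrationRatio()` is named in the commit; the `Map`-backed store, `createRunSketch`, and the simulated observation are hypothetical stand-ins for a real database and a real `Run`.

```typescript
// Per-conversation store for the EMA; a real host would persist this.
const store = new Map<string, number>();

interface RunLike {
  getCalibrationRatio(): number;
}

function createRunSketch(seedRatio: number): RunLike {
  // A real Run updates the EMA from provider usage data; here we just
  // simulate one observation nudging the seeded ratio upward by 10%.
  const observed = seedRatio * 1.1;
  return { getCalibrationRatio: () => observed };
}

function executeTurn(conversationId: string): number {
  // Seed from the previous run in the same conversation, defaulting to 1.
  const seed = store.get(conversationId) ?? 1;
  const run = createRunSketch(seed);
  // Persist after the run so the EMA carries into the next turn.
  const ratio = run.getCalibrationRatio();
  store.set(conversationId, ratio);
  return ratio;
}
```

Skipping the persist step resets the EMA to its default every turn, which is exactly the failure mode the documentation warns about.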

refactor: update summarization test descriptions and improve deduplication logic

This commit refines the test suite for summarization by updating the descriptions for clarity and accuracy. The focus shifts from "multi-pass summarization correctness" to "summarization deduplication correctness," emphasizing the goal of preventing duplicate section headers. Additionally, unnecessary parameters are removed from the summarization configuration, and comments are adjusted to better reflect the intent of the tests. These changes aim to enhance the readability and effectiveness of the test cases.

docs: jsdoc for getCalibrationRatio

refactor: enhance token management and context handling in AgentContext and Graph

This commit introduces several improvements to the token management system within the `AgentContext` and `StandardGraph` classes. Key changes include the addition of `lastContextStartIndex` and `lastTotalMessageCount` properties to better track message indices during pruning, and the implementation of specific token multipliers for different providers. Additionally, the `createPruneMessages` function is updated to utilize new constants for calibration ratios and pressure thresholds, enhancing the overall efficiency and accuracy of token calculations. These modifications aim to streamline context handling and improve the robustness of the summarization process.

refactor: remove deprecated single-pass summarization test

This commit eliminates the outdated test for single-pass summarization default behavior in the `summarization-unit.test.ts` file. The test was deemed unnecessary as its functionality is already verified through existing end-to-end tests. This change aims to streamline the test suite and improve maintainability by removing redundant code.

test: add calibration tests for updateLastCallUsage in AgentContext

This commit introduces new tests for the `updateLastCallUsage` method in the `AgentContext` class, focusing on the calibration process after pruning. The tests ensure that only active context entries are summed, validate the calibration logic, and confirm that context tracking resets correctly after summarization. These additions enhance the test coverage and reliability of the token management system.

fix: adjust prePruneTotalTokens calculation in StandardGraph

This commit modifies the calculation of `prePruneTotalTokens` in the `StandardGraph` class to include `instructionTokens` when `prePruneTotalTokens` is not null. This change aims to ensure more accurate token management during the summarization process, enhancing the overall efficiency of context handling.

refactor: token multipliers for different providers in AgentContext

This commit introduces a new token multiplier for the Bedrock provider and updates the logic for determining the appropriate token multiplier based on the provider type. The changes clarify the encoding differences between Anthropic, Bedrock, and OpenAI, improving the accuracy of token calculations during context management.
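A minimal sketch of the provider-keyed selection. The 2.6 Anthropic-family value is the one these commits eventually settle on; treating Bedrock identically to Anthropic and the 1.0 default for other providers are assumptions for illustration, not the exact production constants.

```typescript
type Provider = 'anthropic' | 'bedrock' | 'openai' | string;

function toolTokenMultiplier(provider: Provider): number {
  switch (provider) {
    case 'anthropic':
    case 'bedrock':
      // Bedrock serves the same Anthropic model family, so it shares the
      // higher multiplier that accounts for internal schema encoding.
      return 2.6;
    default:
      // Assumed: OpenAI-style providers track the local tokenizer closely.
      return 1.0;
  }
}
```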

feat: summarization usage metadata tracking

This commit introduces the tracking of usage metadata during the summarization process. Key changes include the addition of a new constant for token overhead, updates to the `createSummarizeNode` function to handle summary usage, and modifications to the `summarizeWithCacheHit` function to return usage details. These enhancements improve the accuracy of token calculations and provide better insights into the summarization performance.

refactor: improve truncation indicator messages in utils

This commit updates the truncation indicator messages in the `truncateToolInput` and `truncateToolResultContent` functions to clarify that the character limit has been exceeded. The new messages enhance readability and provide clearer context for users regarding the truncation process.

test: update truncation indicator assertions for clarity

This commit modifies assertions in the `prune.test.ts` and `summarization-unit.test.ts` files to enhance the clarity of truncation indicator messages. The changes include updating the expected format of the truncation message and refining the comments for better understanding of the tests. These adjustments aim to improve the readability and accuracy of the test cases related to truncation functionality.

refactor: usage metadata extraction in summarization

This commit refines the usage metadata extraction logic in the `summarizeWithCacheHit` function. The changes introduce a new structure for handling `responseMsg`, allowing for more flexible retrieval of usage information from both `usage_metadata` and `response_metadata`. This improvement aims to ensure accurate tracking of usage details during the summarization process.

* fix: refine ensureThinkingBlockInMessages to improve message handling

This commit refines the `ensureThinkingBlockInMessages` function to better manage trailing sequences after the last HumanMessage. The logic now preserves historical AI and ToolMessages while ensuring that only the necessary trailing sequences are processed. Additionally, the test cases have been updated to reflect these changes, ensuring accurate handling of message types and lengths. Overall, these improvements aim to enhance the clarity and efficiency of message processing in the system.

* feat: implement usage metadata extraction from LLM responses

This commit introduces a new function, `extractUsageFromMessage`, to handle the extraction of usage metadata from LLM response messages. It accommodates different provider formats, ensuring accurate retrieval of input and output token counts. The `summarizeWithCacheHit` function is updated to utilize this new extraction method, enhancing the clarity and reliability of usage tracking during summarization processes. These changes aim to improve the overall efficiency and observability of token management in the system.

* refactor: tool token calibration logic in AgentContext

This commit refines the tool token calibration process within the AgentContext class. It updates the logic to apply a full gap adjustment when no messages are present, ensuring immediate convergence after summarization. For subsequent calibrations, a dampened adjustment of 30% of the gap is applied to mitigate oscillation caused by message token errors. These changes aim to improve the accuracy and stability of token management during the summarization process.

* refactor: streamline usage metadata extraction in summarization

This commit removes the `extractUsageFromMessage` function and integrates its logic directly into the `summarizeWithCacheHit` function. The new implementation enhances the clarity and efficiency of usage metadata retrieval from LLM responses, accommodating different provider formats. Additionally, logging for usage details has been improved to provide better insights during the summarization process. These changes aim to simplify the codebase while maintaining accurate tracking of usage metrics.

* refactor: update calibration ratio maximum for improved token management

This commit modifies the maximum calibration ratio constant in the pruning logic from 2.5 to 3, enhancing the safety margin for token management. Additionally, it introduces a new property, `runningEMA`, in the `createPruneMessages` function to provide a rounded value of the last calibration ratio, improving the clarity of calibration metrics during message processing. These changes aim to optimize token handling and ensure more accurate calibration during summarization processes.

* refactor: enhance token management and calibration logic in AgentContext

This commit refines the token management system within the AgentContext class by removing unused properties related to tool token calibration and simplifying the logic for calculating instruction overhead. It introduces a new `resolvedInstructionOverhead` property to improve accuracy in token calculations during summarization. Additionally, the pruning logic is updated to ensure that only active context entries are considered, enhancing the overall efficiency and clarity of the token management process. Tests are also updated to reflect these changes and ensure robust functionality.

* refactor: update Anthropic tool token multiplier description in AgentContext

This commit modifies the documentation comment for the Anthropic tool token multiplier, changing the description from a structured format to an XML-like structure and updating the multiplier value from 2.0 to 2.6. This change aims to enhance clarity regarding the token count estimation for tool schemas in the AgentContext class.

* refactor: calibration ratios and enhance pruning logic in message processing

This commit modifies the maximum calibration ratio constants in the pruning logic, increasing both the `CALIBRATION_RATIO_MAX` and `POST_CALIBRATION_SANITY_MAX` from 3 to 3.5. It also introduces new variables to track truncation states and applied calibration within the `createPruneMessages` function, improving the accuracy of token management during message processing. Additionally, the logic for determining safe calibration ratios is refined to account for whether calibration has been applied, enhancing overall robustness in token handling.

* fix: adjust expected token count in summarization tests

This commit updates the expected token count in the summarization test to account for an additional 33 tokens, so the assertion matches the actual output of the summarization process.

* refactor: adjust EMA weight for calibration ratio and refine ratio safety checks

This commit updates the EMA weight for the new calibration ratio from 0.7 to 0.3, enhancing the responsiveness to real drift while maintaining stability against noise. Additionally, the logic for determining safe calibration ratios is refined to ensure that the ratio is always greater than zero and to apply looser bounds during the first calibration. These changes aim to improve the robustness of the pruning logic in message processing.
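The EMA update and safety bounds described here can be sketched as follows. The 0.3 weight is from this commit and the 3.5 cap from the earlier calibration-ratio commit; the function name and the use of `Number.EPSILON` as the positive floor are illustrative assumptions.

```typescript
// New observations get weight 0.3: responsive to real drift, damped
// against single-call noise (values per these commits).
const EMA_WEIGHT = 0.3;
const CALIBRATION_RATIO_MAX = 3.5;

function updateCalibrationRatio(previous: number, observed: number): number {
  const ema = EMA_WEIGHT * observed + (1 - EMA_WEIGHT) * previous;
  // Keep the ratio strictly positive and under the sanity cap.
  return Math.min(Math.max(ema, Number.EPSILON), CALIBRATION_RATIO_MAX);
}
```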

* refactor: update calibration logic and token management in prune.ts

This commit refines the calibration logic in the `prune.ts` file by replacing the EMA weight with a minimum cumulative calibration ratio to prevent divide-by-zero errors. It also enhances the handling of token counts by ensuring that the indexTokenCountMap remains in raw tiktoken space, with the calibration ratio capturing the necessary multiplier. Additionally, related tests are updated to reflect these changes, improving the accuracy and reliability of token management during message processing.

* refactor: enhance message sanitization and cache control handling

This commit improves the sanitization process for orphan tool_use blocks in the `StandardGraph` class, ensuring that new orphans introduced by post-pruning transformations are properly handled. Additionally, it refines the cache control logic in the `addCacheControl` and `addBedrockCacheControl` functions, consolidating the stripping of cache markers and optimizing the insertion of cache points. These changes aim to enhance the reliability and efficiency of message processing while maintaining clarity in the codebase.

* refactor: enhance token management and masking logic in prune.ts and AgentContext

This commit updates the token management system in the `AgentContext` class by adjusting the Anthropic tool token multiplier from 2.6 to 1.2 based on empirical testing. It also refines the masking logic in `prune.ts` to incorporate a budget-aware approach for consumed tool results, allowing for more flexible handling of message budgets. Additionally, related tests are updated to ensure accuracy in token accounting and masking behavior, improving overall efficiency and reliability in message processing.

* chore: comment cleanup

* chore: comment cleanup

* refactor: adjust token counting for Claude API to improve accuracy

This commit introduces a correction factor for token counts when using Anthropic's Claude API, addressing discrepancies observed between the API and local tokenizer. The adjustment ensures that token counts are more accurately reflected by applying a multiplier of 1.1 for Claude, enhancing the reliability of token management in message processing. Additionally, the token counter function is updated to conditionally apply this correction based on the encoding type.
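In one line, the conditional correction looks like this. The 1.1 multiplier is from the commit; the 4-characters-per-token stand-in tokenizer and the `'claude'` encoding label are hypothetical, used only to make the sketch self-contained.

```typescript
const CLAUDE_CORRECTION = 1.1;

function countTokens(text: string, encoding: string): number {
  // Stand-in for a real tokenizer: roughly 4 characters per token.
  const raw = Math.ceil(text.length / 4);
  // Anthropic's API reports more tokens than the local tokenizer,
  // so scale the local count up when targeting Claude.
  return encoding === 'claude' ? Math.ceil(raw * CLAUDE_CORRECTION) : raw;
}
```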

* refactor: update Anthropic tool token multiplier to improve accuracy

This commit adjusts the Anthropic tool token multiplier from 1.2 back to 2.6, based on empirical calibration against real MCP tool sets. The updated comment clarifies the rationale behind the multiplier, accounting for internal encoding and a hidden preamble, ensuring more accurate token management in message processing.

* refactor: message pruning logic to retain original tool content

* test: update summarization tests and token calibration logic

This commit refines the summarization tests by replacing hardcoded values with a loop to progressively squeeze token limits, improving test coverage and flexibility. Additionally, it updates the token calibration logic in the token distribution edge case tests to ensure that the calibrated totals approximate the expected input tokens more accurately, enhancing the reliability of token management during message processing.

* refactor: summarization logging with cache details and image/document token accounting

This commit updates the logging in the summarization function to include cache read and creation details when available. This enhancement provides better visibility into the usage of cached tokens, improving the overall monitoring and debugging capabilities of the summarization process.

* refactor: message formatting for Anthropic tool integration

This commit updates the message handling logic in the StandardGraph class to properly format Anthropic artifact content before message pruning. It ensures that the latest ToolMessage is processed correctly based on the provider, improving the reliability of message content management and preventing dangling references.

* test: increase timeout and adjust token variance logic in summarization tests

This commit increases the timeout for summarization tests to accommodate improved token accounting, allowing for more iterations before summarization triggers. Additionally, it updates the token comparison logic to allow for a 25% variance due to encoding differences, enhancing the robustness of token management in the tests.

* refactor: streamline message formatting logic for ToolMessage handling

This commit refines the message formatting logic in the StandardGraph class by consolidating the handling of ToolMessages based on the provider. It ensures that the appropriate formatting functions are called for both Anthropic and OpenAI/Google providers, enhancing the clarity and reliability of message content management while preventing dangling references.

* test: update summarization test instructions for clarity and simplicity

This commit modifies the instructions in the summarization tests to simplify the language and enhance clarity. It also introduces a recursion limit in the stream configuration, ensuring better control over the processing depth during tests. The changes aim to improve the readability and effectiveness of the test cases.

* test: adjust maxTokens in summarization tests for improved performance

This commit lowers the maxTokens parameter in the summarization tests from 4000 to 3000, with the subsequent createRun calls adjusted to 2500 and 2000. Tightening the token limits speeds up the tests while keeping the summarization paths exercised.

* fix: use custom messagesStateReducer for REMOVE_ALL compaction

Graph.ts imported messagesStateReducer from @langchain/langgraph, which
cannot handle the REMOVE_ALL_MESSAGES sentinel used by the summarize
node's full compaction. Swapped to the custom reducer from
@/messages/reducer and exported it from messages/index.ts.

Also adds a 15% variance threshold to the toolSchemaTokens calibration
feedback loop to prevent oscillation between turns.
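The reducer behavior at issue can be sketched as below. This is a minimal reconstruction, not the reducer in `@/messages/reducer`: the `Msg` shape and the `REMOVE_ALL_MESSAGES` string here are illustrative stand-ins for LangChain message classes and LangGraph's actual sentinel constant.

```typescript
const REMOVE_ALL_MESSAGES = '__remove_all__';

interface Msg {
  id: string;
  type: string; // 'remove' marks a deletion instruction
  content?: string;
}

function messagesStateReducer(left: Msg[], right: Msg[]): Msg[] {
  let merged = [...left];
  for (const msg of right) {
    if (msg.type === 'remove' && msg.id === REMOVE_ALL_MESSAGES) {
      // Full compaction: drop everything accumulated so far.
      merged = [];
    } else if (msg.type === 'remove') {
      // Targeted removal by id.
      merged = merged.filter((m) => m.id !== msg.id);
    } else {
      merged.push(msg);
    }
  }
  return merged;
}
```

A reducer without the sentinel branch would leave the old history in place after compaction, which is the bug this commit fixes.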

* chore: delete dead summarization/stream.ts

handleSummarizeStream was never imported anywhere. Streaming deltas are
already correctly dispatched via createSummarizationChunkHandler in
node.ts through the onChunk callback.

* fix: whitespace masking bug, DRY helpers, and constant extraction in prune pipeline

- Add .trim() to hasText check in maskConsumedToolResults so Bedrock's
  whitespace artifact ("\n\n") doesn't incorrectly mark tool results as
  consumed before the model has processed them.
- Extract sumTokenCounts() helper replacing verbatim recount loops.
- Extract ANTHROPIC_SERVER_TOOL_PREFIX constant for 'srvtoolu_' (3 sites).
- Pass pre-serialized string to truncateToolInput, avoiding double
  JSON.stringify on large tool inputs.
- Resolve context pruning settings once in the factory closure instead
  of allocating a new object every pruner invocation.
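The whitespace fix is a one-line predicate change; `hasText` is the check named above, though this standalone version is a simplified sketch of it.

```typescript
function hasText(content: unknown): boolean {
  // .trim() is the fix: whitespace-only strings like Bedrock's "\n\n"
  // artifact no longer count as the model having consumed the result.
  return typeof content === 'string' && content.trim().length > 0;
}
```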

* test: make multi-agent summarization assertions unconditional

All three tests had critical assertions inside if-guards that silently
passed when summarization never fired. Replace with unconditional
expect().toBeGreaterThan(0) so tests fail fast if the feature regresses.
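The difference between the guarded and unconditional forms is easy to see in isolation. These helper names are hypothetical; they just model a test's collected summary events.

```typescript
// Before: assertions inside an if-guard vacuously pass on regression.
function guardedCheck(summaryEvents: unknown[]): boolean {
  if (summaryEvents.length > 0) {
    return summaryEvents.length > 0; // only checked when it already holds
  }
  return true; // silently "passes" when summarization never fired
}

// After: the unconditional form fails fast when the feature regresses.
function unconditionalCheck(summaryEvents: unknown[]): boolean {
  return summaryEvents.length > 0;
}
```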

* refactor: add setInitialSummary for atomic cross-run summary init

setSummary() unconditionally sets _summaryLocation to 'user_message',
so fromConfig() had to immediately overwrite it to 'system_prompt' for
cross-run summaries. Add setInitialSummary() that sets the location
atomically, removing the fragile external field override.

* fix: log fallback exhaustion and fix inline import type in summarization node

Replace empty catch block with emitAgentLog('warn') when fallback
providers are exhausted, matching the primary failure path's logging.
Move ToolMessage to a top-level import type instead of inline
import() cast.

* fix: pass compileOptions in single-agent workflow, extend Bedrock artifact formatting

createWorkflow() now passes this.compileOptions to .compile(), matching
the agent subgraph and MultiAgentGraph. Compile options like
checkpointers were silently dropped for the outer single-agent workflow.

Also extends formatAnthropicArtifactContent to run for Bedrock provider
(same underlying Anthropic model family).

* feat: default recursionLimit to 50 in processStream

LangGraph's built-in default of 25 is too low for agent runs with
summarization cycles. Place the default before ...callerConfig so
callers can still override it.
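The override-friendly ordering is just spread precedence; `buildConfig` and the loose `StreamConfig` shape here are illustrative, not the actual `processStream` types.

```typescript
interface StreamConfig {
  recursionLimit?: number;
  [key: string]: unknown;
}

function buildConfig(callerConfig: StreamConfig = {}): StreamConfig {
  // Default first, caller config second: later spread keys win, so a
  // caller-supplied recursionLimit overrides the 50 default.
  return { recursionLimit: 50, ...callerConfig };
}
```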

* refactor: promote ANTHROPIC_SERVER_TOOL_PREFIX to Constants, name calibration threshold

Move 'srvtoolu_' prefix to Constants.ANTHROPIC_SERVER_TOOL_PREFIX in
src/common/enum.ts and update all production call sites (prune.ts,
message_inputs.ts, ToolNode.ts). Extract bare 0.15 calibration
variance threshold to CALIBRATION_VARIANCE_THRESHOLD constant.

* fix: ensure ToolMessage type safety in StandardGraph message handling

Updated the message retrieval logic in StandardGraph to explicitly cast messages to `ToolMessage | undefined`, improving compile-time type safety in the message handling path.

* refactor: rename prePruneTotalTokens, fix double serialization, clarify orphan sanitize

- Rename prePruneTotalTokens -> prePruneContextTokens across the
  trigger, pruner, graph, and test files to reflect that the value
  includes instruction overhead (total context pressure, not just
  message tokens).
- Fix 3 remaining truncateToolInput call sites that passed raw objects
  after pre-serializing for length checks (preFlight tc.args, emergency
  content block, emergency tc.args).
- Add clarifying comment on the intentionally broad needsOrphanSanitize
  condition in Graph.ts.

* test: add emergency truncation test with summarizationEnabled=true

Exercises the full fallback chain: main pruning produces empty context,
fallback fading (summarization path) also fails to fit, emergency
truncation recovers by aggressively truncating tool results.

* refactor: streamline tool token multiplier logic in AgentContext

Consolidated the logic for determining the tool token multiplier by removing the separate handling for Bedrock and simplifying the condition for setting the multiplier based on the provider. Updated comments to clarify the default multiplier's applicability across various providers.

* fix: improve compaction continuation prompt to prevent leaky resumption language

The HumanMessage injected after context compaction previously used phrasing
that leaked internal mechanics ("context window was compacted", "checkpoint",
"resume") which the model would echo back to the user. Replaced with a direct,
assertive instruction that prevents repetition and self-acknowledgment.

* feat: calibrate instruction overhead in pruning budget and persist across runs

Use the provider-observed instruction overhead (bestInstructionOverhead) in
the budget computation instead of the local tokenizer estimate, which
consistently overestimates tool schema tokens by ~2500 for large tool sets.
This stops the calibration ratio from collapsing to compensate for the
overestimate, keeping it stable for accurate message token scaling.

- Seed bestInstructionOverhead from contextMeta on new runs so the first
  call's budget uses a calibrated value instead of starting cold
- Track the local estimate at observation time and invalidate the cached
  overhead when instructions change mid-run (tool discovery, >10% drift)
- Expose getResolvedInstructionOverhead() and getToolCount() on Run/Graph
  so hosts can persist and seed across runs with tool-count validation

* test: update multi-agent summarization tests for context handling

Modified test-case padding to use a more descriptive string and increased the maxContextTokens for agents to 800. Added additional conversation history to better simulate agent interactions and ensure summarization events are correctly triggered. Updated assertions to verify that at least one agent summarizes and that the correct agent IDs are logged.

* refactor: Anthropic and Bedrock providers in message handling

Added the isAnthropicLike utility function to identify native Anthropic or Bedrock models. Updated StandardGraph to utilize this function for improved message formatting and sanitization, ensuring proper handling of messages from these providers. Enhanced formatAnthropicArtifactContent to concatenate message content correctly, accommodating various content types.

* fix: trim whitespace from last message content in StandardGraph

Updated the message handling in StandardGraph to trim whitespace from the last message's content before assigning it to the finalMessages array. This ensures that only meaningful text is retained, improving message formatting and clarity.

* fix: seed calibrated instruction overhead into post-summarization pruner

When setSummary() clears pruneMessages after compaction, the recreated
pruner was using the initial Run-level seededInstructionOverhead instead
of the calibrated resolvedInstructionOverhead observed during the run.
This caused budget estimation to regress after each summarization cycle.

* refactor: remove vestigial context field and dead summarization code

- Remove `context` from SummarizationNodeInput — full compaction always
  sends all messages to messagesToRefine, so context was permanently [].
- Replace misleading contextLength/survivingMessages log with accurate
  messagesCompacted count.
- Remove dead SummarizeResult.targetMessageId/targetContentIndex fields.
- Remove write-only _summarizationCountThisRun counter.

* chore: import style, Buffer.from portability, compile cast comments

- Move UsageMetadata to import type and consolidate two same-module
  imports from @langchain/core/messages in summarization node.
- Replace atob() with Buffer.from() for Node.js <16 portability.
- Fix eslint no-unnecessary-condition on mime_type cast.
- Add explanatory comments on as unknown as never compile() casts.
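The `Buffer.from()` swap is a standard Node.js base64 decode; this helper name is illustrative, but the API calls are the real Node `Buffer` methods.

```typescript
// Buffer.from replaces atob(): available in all Node.js versions and
// handles UTF-8 correctly, unlike atob's latin1-only decoding.
function decodeBase64(data: string): string {
  return Buffer.from(data, 'base64').toString('utf8');
}
```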

* refactor: remove cross-run instruction overhead seeding

Instruction overhead changes too frequently between runs (prompt edits,
tool changes) for a persisted seed to be reliable. Keep the intra-run
bestInstructionOverhead self-calibration in the pruner — it still
corrects within a single run via provider observations. The getters on
AgentContext/Graph/Run remain for future stateful-agent use cases.

* v3.1.60

* docs: update summarization behavior documentation with calibration details

- Revised contextPressure calculation to use calibratedTotalTokens.
- Added section on calibration, explaining the cumulative calibration ratio and its persistence across runs.
- Clarified summarization trigger logic and updated trigger types for improved clarity and accuracy.

* refactor: enhance logging and performance metrics in StandardGraph

- Updated emitAgentLog to include invoke timing and provider information for better debugging.
- Simplified token counting logic in createPruneMessages by removing unnecessary variables and conditions.
- Improved context adjustment logging to capture relevant metrics after pruning adjustments.
- Streamlined emergency message handling to reduce redundant logging and improve clarity.

* fix: update variance log message in token accounting tests

Changed the log message filter in the token accounting pipeline tests from 'Token estimate variance' to 'Calibration observed', matching the logs actually emitted during calibration verification.

* docs: comments

* fix: type createCallModel against AgentSubgraphState, remove double-cast

The inner closure returned by createCallModel operates on the agent
subgraph state (messages + summarizationRequest), not BaseGraphState.
Promote AgentSubgraphState to shared types and use it in the abstract
declaration and concrete implementation, eliminating the
`as unknown as Partial<BaseGraphState>` bypass on the summarization
return path.
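The shape of the fix can be illustrated with minimal stand-in types (the real `BaseMessage` and state definitions live in the repo and `@langchain/core`):

```typescript
// Hypothetical, minimal stand-ins for the types named above.
type Message = { role: string; content: string };

interface AgentSubgraphState {
  messages: Message[];
  summarizationRequest?: { reason: string };
}

// Typing the inner closure against AgentSubgraphState means the
// summarization return path needs no `as unknown as Partial<...>`
// bypass back to a broader graph-state type.
function summarizeReturn(state: AgentSubgraphState): Partial<AgentSubgraphState> {
  return { summarizationRequest: undefined, messages: state.messages };
}
```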

* fix: remove compileOptions from inner agent subgraph

The inner subgraph (agent → tools → summarize) runs entirely within a
single processStream call. Passing compileOptions (which may carry a
checkpointer) causes unnecessary intermediate checkpoints and risks
stale-resumption of the summarizationRequest channel on crash recovery.
The outer workflow retains compileOptions for thread persistence.
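A minimal sketch of the split, assuming compile options are a plain object with an optional `checkpointer` (the helper name is invented for illustration):

```typescript
interface CompileOptions {
  checkpointer?: unknown; // persistence backend, if any
}

// Hypothetical helper: the inner agent subgraph compiles without a
// checkpointer so no intermediate checkpoints are written mid-stream
// and a crash cannot resume a stale summarizationRequest channel.
// The outer workflow compiles with the original options unchanged.
function innerCompileOptions(outer: CompileOptions): CompileOptions {
  const inner = { ...outer };
  delete inner.checkpointer; // drop persistence for the inner subgraph only
  return inner;
}
```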

* refactor: extract helpers from createSummarizeNode, fix empty-output recovery

Break the 590-line closure into five focused helpers:
- buildSummarizationClientConfig: assembles provider/model/clientOptions
- computeSummaryTokenCount: provider output tokens vs local tokenizer
- buildSummaryBloc…
…overage

Adds test cases to the summarization suite for edge scenarios: validating the handling of multiple summarization requests and confirming that summarization events are triggered correctly from the conversation history.

Updates DEFAULT_SUMMARIZATION_PROMPT and DEFAULT_UPDATE_SUMMARIZATION_PROMPT for clarity: the prompts now spell out how truncated tool results are handled and emphasize that the tool itself executed fully, giving clearer guidance when generating checkpoints and updates.
@pull pull bot locked and limited conversation to collaborators Mar 20, 2026
@pull pull bot added the ⤵️ pull label Mar 20, 2026
@pull pull bot merged commit 6b9f6fc into innFactory:main Mar 20, 2026
2 checks passed