
strands.experimental

Experimental features.

This module implements experimental features that are subject to change in future revisions without notice.

strands.experimental.agent_config

Experimental agent configuration utilities.

This module provides utilities for creating agents from configuration files or dictionaries.

Note: Configuration-based agent setup only works for tools that don't require code-based instantiation. For tools that need constructor arguments or complex setup, use the programmatic approach after creating the agent:

agent = config_to_agent("config.json")
# Add tools that need code-based instantiation
agent.tool_registry.process_tools([ToolWithConfigArg(HttpsConnection("localhost"))])

config_to_agent(config, **kwargs)

Create an Agent from a configuration file or dictionary.

This function supports tools that can be loaded declaratively (file paths, module names, or @tool annotated functions). For tools requiring code-based instantiation with constructor arguments, add them programmatically after creating the agent:

agent = config_to_agent("config.json")
agent.process_tools([ToolWithConfigArg(HttpsConnection("localhost"))])

Parameters:

Name Type Description Default
config str | dict[str, Any]

Either a file path (with optional file:// prefix) or a configuration dictionary

required
**kwargs dict[str, Any]

Additional keyword arguments to pass to the Agent constructor

{}

Returns:

Name Type Description
Agent Any

A configured Agent instance

Raises:

Type Description
FileNotFoundError

If the configuration file doesn't exist

JSONDecodeError

If the configuration file contains invalid JSON

ValueError

If the configuration is invalid or tools cannot be loaded

Examples:

Create agent from file:

>>> agent = config_to_agent("/path/to/config.json")

Create agent from file with file:// prefix:

>>> agent = config_to_agent("file:///path/to/config.json")

Create agent from dictionary:

>>> config = {"model": "anthropic.claude-3-5-sonnet-20241022-v2:0", "tools": ["calculator"]}
>>> agent = config_to_agent(config)
Source code in strands/experimental/agent_config.py
def config_to_agent(config: str | dict[str, Any], **kwargs: dict[str, Any]) -> Any:
    """Create an Agent from a configuration file or dictionary.

    This function supports tools that can be loaded declaratively (file paths, module names,
    or @tool annotated functions). For tools requiring code-based instantiation with constructor
    arguments, add them programmatically after creating the agent:

        agent = config_to_agent("config.json")
        agent.process_tools([ToolWithConfigArg(HttpsConnection("localhost"))])

    Args:
        config: Either a file path (with optional file:// prefix) or a configuration dictionary
        **kwargs: Additional keyword arguments to pass to the Agent constructor

    Returns:
        Agent: A configured Agent instance

    Raises:
        FileNotFoundError: If the configuration file doesn't exist
        json.JSONDecodeError: If the configuration file contains invalid JSON
        ValueError: If the configuration is invalid or tools cannot be loaded

    Examples:
        Create agent from file:
        >>> agent = config_to_agent("/path/to/config.json")

        Create agent from file with file:// prefix:
        >>> agent = config_to_agent("file:///path/to/config.json")

        Create agent from dictionary:
        >>> config = {"model": "anthropic.claude-3-5-sonnet-20241022-v2:0", "tools": ["calculator"]}
        >>> agent = config_to_agent(config)
    """
    # Parse configuration
    if isinstance(config, str):
        # Handle file path
        file_path = config

        # Remove file:// prefix if present
        if file_path.startswith("file://"):
            file_path = file_path[7:]

        # Load JSON from file
        config_path = Path(file_path)
        if not config_path.exists():
            raise FileNotFoundError(f"Configuration file not found: {file_path}")

        with open(config_path, "r") as f:
            config_dict = json.load(f)
    elif isinstance(config, dict):
        config_dict = config.copy()
    else:
        raise ValueError("Config must be a file path string or dictionary")

    # Validate configuration against schema
    try:
        _VALIDATOR.validate(config_dict)
    except ValidationError as e:
        # Provide more detailed error message
        error_path = " -> ".join(str(p) for p in e.absolute_path) if e.absolute_path else "root"
        raise ValueError(f"Configuration validation error at {error_path}: {e.message}") from e

    # Prepare Agent constructor arguments
    agent_kwargs = {}

    # Map configuration keys to Agent constructor parameters
    config_mapping = {
        "model": "model",
        "prompt": "system_prompt",
        "tools": "tools",
        "name": "name",
    }

    # Only include non-None values from config
    for config_key, agent_param in config_mapping.items():
        if config_key in config_dict and config_dict[config_key] is not None:
            agent_kwargs[agent_param] = config_dict[config_key]

    # Override with any additional kwargs provided
    agent_kwargs.update(kwargs)

    # Import Agent at runtime to avoid circular imports
    from ..agent import Agent

    # Create and return Agent
    return Agent(**agent_kwargs)
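The key-mapping and override behavior above can be sketched in isolation. A plain dict stands in for the `Agent` constructor so the sketch runs standalone; the `config_mapping` mirrors the one in the source:

```python
# Standalone sketch of config_to_agent's key mapping and kwargs override.
config_dict = {
    "model": "anthropic.claude-3-5-sonnet-20241022-v2:0",
    "prompt": "Be concise.",
    "tools": None,  # None values are skipped, not passed through
    "name": "demo",
}
kwargs = {"system_prompt": "Be verbose."}  # caller-supplied override

config_mapping = {
    "model": "model",
    "prompt": "system_prompt",
    "tools": "tools",
    "name": "name",
}

agent_kwargs = {}
for config_key, agent_param in config_mapping.items():
    if config_key in config_dict and config_dict[config_key] is not None:
        agent_kwargs[agent_param] = config_dict[config_key]

agent_kwargs.update(kwargs)  # explicit kwargs win over config values

print(agent_kwargs)
```

Note that `prompt` in the config becomes `system_prompt` on the agent, and any keyword argument passed to `config_to_agent` takes precedence over the file's value.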

strands.experimental.bidi

Bidirectional streaming package.

strands.experimental.bidi.models.model

Bidirectional streaming model interface.

Defines the abstract interface for models that support real-time bidirectional communication with persistent connections. Unlike traditional request-response models, bidirectional models maintain an open connection for streaming audio, text, and tool interactions.

Features:

  • Persistent connection management with connect/close lifecycle
  • Real-time bidirectional communication (send and receive simultaneously)
  • Provider-agnostic event normalization
  • Support for audio, text, image, and tool result streaming

BidiModel

Bases: Protocol

Protocol for bidirectional streaming models.

This interface defines the contract for models that support persistent streaming connections with real-time audio and text communication. Implementations handle provider-specific protocols while exposing a standardized event-based API.

Attributes:

Name Type Description
config dict[str, Any]

Configuration dictionary with provider-specific settings.

Source code in strands/experimental/bidi/models/model.py
class BidiModel(Protocol):
    """Protocol for bidirectional streaming models.

    This interface defines the contract for models that support persistent streaming
    connections with real-time audio and text communication. Implementations handle
    provider-specific protocols while exposing a standardized event-based API.

    Attributes:
        config: Configuration dictionary with provider-specific settings.
    """

    config: dict[str, Any]

    async def start(
        self,
        system_prompt: str | None = None,
        tools: list[ToolSpec] | None = None,
        messages: Messages | None = None,
        **kwargs: Any,
    ) -> None:
        """Establish a persistent streaming connection with the model.

        Opens a bidirectional connection that remains active for real-time communication.
        The connection supports concurrent sending and receiving of events until explicitly
        closed. Must be called before any send() or receive() operations.

        Args:
            system_prompt: System instructions to configure model behavior.
            tools: Tool specifications that the model can invoke during the conversation.
            messages: Initial conversation history to provide context.
            **kwargs: Provider-specific configuration options.
        """
        ...

    async def stop(self) -> None:
        """Close the streaming connection and release resources.

        Terminates the active bidirectional connection and cleans up any associated
        resources such as network connections, buffers, or background tasks. After
        calling stop(), the model instance cannot be used until start() is called again.
        """
        ...

    def receive(self) -> AsyncIterable[BidiOutputEvent]:
        """Receive streaming events from the model.

        Continuously yields events from the model as they arrive over the connection.
        Events are normalized to a provider-agnostic format for uniform processing.
        This method should be called in a loop or async task to process model responses.

        The stream continues until the connection is closed or an error occurs.

        Yields:
            BidiOutputEvent: Standardized event objects containing audio output,
                transcripts, tool calls, or control signals.
        """
        ...

    async def send(
        self,
        content: BidiInputEvent | ToolResultEvent,
    ) -> None:
        """Send content to the model over the active connection.

        Transmits user input or tool results to the model during an active streaming
        session. Supports multiple content types including text, audio, images, and
        tool execution results. Can be called multiple times during a conversation.

        Args:
            content: The content to send. Must be one of:

                - BidiTextInputEvent: Text message from the user
                - BidiAudioInputEvent: Audio data for speech input
                - BidiImageInputEvent: Image data for visual understanding
                - ToolResultEvent: Result from a tool execution

        Example:
            ```
            await model.send(BidiTextInputEvent(text="Hello", role="user"))
            await model.send(BidiAudioInputEvent(audio=bytes, format="pcm", sample_rate=16000, channels=1))
            await model.send(BidiImageInputEvent(image=bytes, mime_type="image/jpeg", encoding="raw"))
            await model.send(ToolResultEvent(tool_result))
            ```
        """
        ...
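To illustrate the start/send/receive/stop lifecycle, here is a hedged sketch with a toy in-memory model that satisfies the protocol shape. The `EchoModel` and its queue-based "connection" are stand-ins invented for this example, not part of the library:

```python
import asyncio


class EchoModel:
    """Toy stand-in for a BidiModel implementation: echoes sent text back."""

    def __init__(self):
        self.config = {}
        self._queue = None

    async def start(self, system_prompt=None, tools=None, messages=None, **kwargs):
        # The "connection" is just a queue in this sketch.
        self._queue = asyncio.Queue()

    async def stop(self):
        # A sentinel ends the receive stream.
        await self._queue.put(None)

    async def send(self, content):
        await self._queue.put({"type": "bidi_transcript_stream", "text": content})

    async def receive(self):
        # An async generator satisfies the AsyncIterable return type.
        while (event := await self._queue.get()) is not None:
            yield event


async def main():
    model = EchoModel()
    await model.start(system_prompt="demo")
    await model.send("hello")
    await model.stop()
    return [event async for event in model.receive()]


events = asyncio.run(main())
print(events)
```

In a real session, `receive()` would typically run in a separate task so events can be consumed while `send()` is still being called.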
receive()

Receive streaming events from the model.

Continuously yields events from the model as they arrive over the connection. Events are normalized to a provider-agnostic format for uniform processing. This method should be called in a loop or async task to process model responses.

The stream continues until the connection is closed or an error occurs.

Yields:

Name Type Description
BidiOutputEvent AsyncIterable[BidiOutputEvent]

Standardized event objects containing audio output, transcripts, tool calls, or control signals.

Source code in strands/experimental/bidi/models/model.py
def receive(self) -> AsyncIterable[BidiOutputEvent]:
    """Receive streaming events from the model.

    Continuously yields events from the model as they arrive over the connection.
    Events are normalized to a provider-agnostic format for uniform processing.
    This method should be called in a loop or async task to process model responses.

    The stream continues until the connection is closed or an error occurs.

    Yields:
        BidiOutputEvent: Standardized event objects containing audio output,
            transcripts, tool calls, or control signals.
    """
    ...
send(content) async

Send content to the model over the active connection.

Transmits user input or tool results to the model during an active streaming session. Supports multiple content types including text, audio, images, and tool execution results. Can be called multiple times during a conversation.

Parameters:

Name Type Description Default
content BidiInputEvent | ToolResultEvent

The content to send. Must be one of:

  • BidiTextInputEvent: Text message from the user
  • BidiAudioInputEvent: Audio data for speech input
  • BidiImageInputEvent: Image data for visual understanding
  • ToolResultEvent: Result from a tool execution
required
Example
await model.send(BidiTextInputEvent(text="Hello", role="user"))
await model.send(BidiAudioInputEvent(audio=bytes, format="pcm", sample_rate=16000, channels=1))
await model.send(BidiImageInputEvent(image=bytes, mime_type="image/jpeg", encoding="raw"))
await model.send(ToolResultEvent(tool_result))
Source code in strands/experimental/bidi/models/model.py
async def send(
    self,
    content: BidiInputEvent | ToolResultEvent,
) -> None:
    """Send content to the model over the active connection.

    Transmits user input or tool results to the model during an active streaming
    session. Supports multiple content types including text, audio, images, and
    tool execution results. Can be called multiple times during a conversation.

    Args:
        content: The content to send. Must be one of:

            - BidiTextInputEvent: Text message from the user
            - BidiAudioInputEvent: Audio data for speech input
            - BidiImageInputEvent: Image data for visual understanding
            - ToolResultEvent: Result from a tool execution

    Example:
        ```
        await model.send(BidiTextInputEvent(text="Hello", role="user"))
        await model.send(BidiAudioInputEvent(audio=bytes, format="pcm", sample_rate=16000, channels=1))
        await model.send(BidiImageInputEvent(image=bytes, mime_type="image/jpeg", encoding="raw"))
        await model.send(ToolResultEvent(tool_result))
        ```
    """
    ...
start(system_prompt=None, tools=None, messages=None, **kwargs) async

Establish a persistent streaming connection with the model.

Opens a bidirectional connection that remains active for real-time communication. The connection supports concurrent sending and receiving of events until explicitly closed. Must be called before any send() or receive() operations.

Parameters:

Name Type Description Default
system_prompt str | None

System instructions to configure model behavior.

None
tools list[ToolSpec] | None

Tool specifications that the model can invoke during the conversation.

None
messages Messages | None

Initial conversation history to provide context.

None
**kwargs Any

Provider-specific configuration options.

{}
Source code in strands/experimental/bidi/models/model.py
async def start(
    self,
    system_prompt: str | None = None,
    tools: list[ToolSpec] | None = None,
    messages: Messages | None = None,
    **kwargs: Any,
) -> None:
    """Establish a persistent streaming connection with the model.

    Opens a bidirectional connection that remains active for real-time communication.
    The connection supports concurrent sending and receiving of events until explicitly
    closed. Must be called before any send() or receive() operations.

    Args:
        system_prompt: System instructions to configure model behavior.
        tools: Tool specifications that the model can invoke during the conversation.
        messages: Initial conversation history to provide context.
        **kwargs: Provider-specific configuration options.
    """
    ...
stop() async

Close the streaming connection and release resources.

Terminates the active bidirectional connection and cleans up any associated resources such as network connections, buffers, or background tasks. After calling stop(), the model instance cannot be used until start() is called again.

Source code in strands/experimental/bidi/models/model.py
async def stop(self) -> None:
    """Close the streaming connection and release resources.

    Terminates the active bidirectional connection and cleans up any associated
    resources such as network connections, buffers, or background tasks. After
    calling stop(), the model instance cannot be used until start() is called again.
    """
    ...

BidiModelTimeoutError

Bases: Exception

Model timeout error.

Bidirectional models are often configured with a connection time limit; Nova Sonic, for example, keeps the connection open for at most 8 minutes. Upon receiving a timeout, the agent loop restarts the model connection to create a seamless, uninterrupted experience for the user.

Source code in strands/experimental/bidi/models/model.py
class BidiModelTimeoutError(Exception):
    """Model timeout error.

    Bidirectional models are often configured with a connection time limit; Nova Sonic, for example,
    keeps the connection open for at most 8 minutes. Upon receiving a timeout, the agent loop restarts
    the model connection to create a seamless, uninterrupted experience for the user.
    """

    def __init__(self, message: str, **restart_config: Any) -> None:
        """Initialize error.

        Args:
            message: Timeout message from model.
            **restart_config: Configure restart specific behaviors in the call to model start.
        """
        super().__init__(message)

        self.restart_config = restart_config
__init__(message, **restart_config)

Initialize error.

Parameters:

Name Type Description Default
message str

Timeout message from model.

required
**restart_config Any

Configure restart specific behaviors in the call to model start.

{}
Source code in strands/experimental/bidi/models/model.py
def __init__(self, message: str, **restart_config: Any) -> None:
    """Initialize error.

    Args:
        message: Timeout message from model.
        **restart_config: Configure restart specific behaviors in the call to model start.
    """
    super().__init__(message)

    self.restart_config = restart_config
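The restart behavior described above can be sketched as a retry loop. The exception class is redefined locally (with the corrected `super().__init__` call) so the sketch runs standalone, and `run_connection` is a hypothetical stand-in for one model session:

```python
class BidiModelTimeoutError(Exception):
    """Local stand-in for strands.experimental.bidi.models.model's timeout error."""

    def __init__(self, message, **restart_config):
        super().__init__(message)
        self.restart_config = restart_config


attempts = []


def run_connection(attempt):
    """Stand-in for one model connection; times out on the first attempt."""
    attempts.append(attempt)
    if attempt == 0:
        raise BidiModelTimeoutError("connection limit reached", resume=True)


attempt = 0
restart_kwargs = {}
while True:
    try:
        run_connection(attempt)
        break  # connection ended normally
    except BidiModelTimeoutError as e:
        # restart_config would be forwarded to the next model.start() call
        restart_kwargs = e.restart_config
        attempt += 1

print(attempts, restart_kwargs)
```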

strands.experimental.bidi.types

Type definitions for bidirectional streaming.

strands.experimental.bidi.types.agent

Agent-related type definitions for bidirectional streaming.

This module defines the types used for BidiAgent.

strands.experimental.bidi.types.events

Bidirectional streaming types for real-time audio/text conversations.

Type definitions for bidirectional streaming that extends Strands' existing streaming capabilities with real-time audio and persistent connection support.

Key features:

  • Audio input/output events with standardized formats
  • Interruption detection and handling
  • Connection lifecycle management
  • Provider-agnostic event types
  • Type-safe discriminated unions with TypedEvent
  • JSON-serializable events (audio/images stored as base64 strings)

Audio format normalization:

  • Supports PCM, WAV, Opus, and MP3 formats
  • Standardizes sample rates (16kHz, 24kHz, 48kHz)
  • Normalizes channel configurations (mono/stereo)
  • Abstracts provider-specific encodings
  • Audio data stored as base64-encoded strings for JSON compatibility
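Because audio travels as base64 strings, raw PCM bytes must be encoded before being placed in an input event payload and decoded again before playback. A minimal round trip using only the standard library (the payload dict mirrors the `BidiAudioInputEvent` fields shown below):

```python
import base64

# Raw 16-bit mono PCM bytes, e.g. as captured from a microphone (dummy data here).
pcm_bytes = b"\x00\x01\x02\x03" * 4

# Encode for a JSON-serializable event payload (the "audio" field).
audio_b64 = base64.b64encode(pcm_bytes).decode("ascii")
payload = {
    "type": "bidi_audio_input",
    "audio": audio_b64,
    "format": "pcm",
    "sample_rate": 16000,
    "channels": 1,
}

# Decode on the receiving side before playback.
decoded = base64.b64decode(payload["audio"])
assert decoded == pcm_bytes
```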

AudioChannel = Literal[1, 2] module-attribute

Number of audio channels.

  • Mono: 1
  • Stereo: 2

AudioFormat = Literal['pcm', 'wav', 'opus', 'mp3'] module-attribute

Audio encoding format.

AudioSampleRate = Literal[16000, 24000, 48000] module-attribute

Audio sample rate in Hz.

BidiInputEvent = BidiTextInputEvent | BidiAudioInputEvent | BidiImageInputEvent module-attribute

Union of different bidi input event types.

BidiOutputEvent = BidiConnectionStartEvent | BidiConnectionRestartEvent | BidiResponseStartEvent | BidiAudioStreamEvent | BidiTranscriptStreamEvent | BidiInterruptionEvent | BidiResponseCompleteEvent | BidiUsageEvent | BidiConnectionCloseEvent | BidiErrorEvent | ToolUseStreamEvent module-attribute

Union of different bidi output event types.

Role = Literal['user', 'assistant'] module-attribute

Role of a message sender.

  • "user": Messages from the user to the assistant.
  • "assistant": Messages from the assistant to the user.

StopReason = Literal['complete', 'error', 'interrupted', 'tool_use'] module-attribute

Reason for the model ending its response generation.

  • "complete": Model completed its response.
  • "error": Model encountered an error.
  • "interrupted": Model was interrupted by the user.
  • "tool_use": Model is requesting a tool use.

BidiAudioInputEvent

Bases: TypedEvent

Audio input event for sending audio to the model.

Used for sending audio data through the send() method.

Parameters:

Name Type Description Default
audio str

Base64-encoded audio string to send to model.

required
format AudioFormat | str

Audio format from SUPPORTED_AUDIO_FORMATS.

required
sample_rate AudioSampleRate

Sample rate from SUPPORTED_SAMPLE_RATES.

required
channels AudioChannel

Channel count from SUPPORTED_CHANNELS.

required
Source code in strands/experimental/bidi/types/events.py
class BidiAudioInputEvent(TypedEvent):
    """Audio input event for sending audio to the model.

    Used for sending audio data through the send() method.

    Parameters:
        audio: Base64-encoded audio string to send to model.
        format: Audio format from SUPPORTED_AUDIO_FORMATS.
        sample_rate: Sample rate from SUPPORTED_SAMPLE_RATES.
        channels: Channel count from SUPPORTED_CHANNELS.
    """

    def __init__(
        self,
        audio: str,
        format: AudioFormat | str,
        sample_rate: AudioSampleRate,
        channels: AudioChannel,
    ):
        """Initialize audio input event."""
        super().__init__(
            {
                "type": "bidi_audio_input",
                "audio": audio,
                "format": format,
                "sample_rate": sample_rate,
                "channels": channels,
            }
        )

    @property
    def audio(self) -> str:
        """Base64-encoded audio string."""
        return cast(str, self["audio"])

    @property
    def format(self) -> AudioFormat:
        """Audio encoding format."""
        return cast(AudioFormat, self["format"])

    @property
    def sample_rate(self) -> AudioSampleRate:
        """Number of audio samples per second in Hz."""
        return cast(AudioSampleRate, self["sample_rate"])

    @property
    def channels(self) -> AudioChannel:
        """Number of audio channels (1=mono, 2=stereo)."""
        return cast(AudioChannel, self["channels"])
audio property

Base64-encoded audio string.

channels property

Number of audio channels (1=mono, 2=stereo).

format property

Audio encoding format.

sample_rate property

Number of audio samples per second in Hz.

__init__(audio, format, sample_rate, channels)

Initialize audio input event.

Source code in strands/experimental/bidi/types/events.py
def __init__(
    self,
    audio: str,
    format: AudioFormat | str,
    sample_rate: AudioSampleRate,
    channels: AudioChannel,
):
    """Initialize audio input event."""
    super().__init__(
        {
            "type": "bidi_audio_input",
            "audio": audio,
            "format": format,
            "sample_rate": sample_rate,
            "channels": channels,
        }
    )

BidiAudioStreamEvent

Bases: TypedEvent

Streaming audio output from the model.

Parameters:

Name Type Description Default
audio str

Base64-encoded audio string.

required
format AudioFormat

Audio encoding format.

required
sample_rate AudioSampleRate

Number of audio samples per second in Hz.

required
channels AudioChannel

Number of audio channels (1=mono, 2=stereo).

required
Source code in strands/experimental/bidi/types/events.py
class BidiAudioStreamEvent(TypedEvent):
    """Streaming audio output from the model.

    Parameters:
        audio: Base64-encoded audio string.
        format: Audio encoding format.
        sample_rate: Number of audio samples per second in Hz.
        channels: Number of audio channels (1=mono, 2=stereo).
    """

    def __init__(
        self,
        audio: str,
        format: AudioFormat,
        sample_rate: AudioSampleRate,
        channels: AudioChannel,
    ):
        """Initialize audio stream event."""
        super().__init__(
            {
                "type": "bidi_audio_stream",
                "audio": audio,
                "format": format,
                "sample_rate": sample_rate,
                "channels": channels,
            }
        )

    @property
    def audio(self) -> str:
        """Base64-encoded audio string."""
        return cast(str, self["audio"])

    @property
    def format(self) -> AudioFormat:
        """Audio encoding format."""
        return cast(AudioFormat, self["format"])

    @property
    def sample_rate(self) -> AudioSampleRate:
        """Number of audio samples per second in Hz."""
        return cast(AudioSampleRate, self["sample_rate"])

    @property
    def channels(self) -> AudioChannel:
        """Number of audio channels (1=mono, 2=stereo)."""
        return cast(AudioChannel, self["channels"])
audio property

Base64-encoded audio string.

channels property

Number of audio channels (1=mono, 2=stereo).

format property

Audio encoding format.

sample_rate property

Number of audio samples per second in Hz.

__init__(audio, format, sample_rate, channels)

Initialize audio stream event.

Source code in strands/experimental/bidi/types/events.py
def __init__(
    self,
    audio: str,
    format: AudioFormat,
    sample_rate: AudioSampleRate,
    channels: AudioChannel,
):
    """Initialize audio stream event."""
    super().__init__(
        {
            "type": "bidi_audio_stream",
            "audio": audio,
            "format": format,
            "sample_rate": sample_rate,
            "channels": channels,
        }
    )

BidiConnectionCloseEvent

Bases: TypedEvent

Streaming connection closed.

Parameters:

Name Type Description Default
connection_id str

Unique identifier for this streaming connection (matches BidiConnectionStartEvent).

required
reason Literal['client_disconnect', 'timeout', 'error', 'complete', 'user_request']

Why the connection was closed.

required
Source code in strands/experimental/bidi/types/events.py
class BidiConnectionCloseEvent(TypedEvent):
    """Streaming connection closed.

    Parameters:
        connection_id: Unique identifier for this streaming connection (matches BidiConnectionStartEvent).
        reason: Why the connection was closed.
    """

    def __init__(
        self,
        connection_id: str,
        reason: Literal["client_disconnect", "timeout", "error", "complete", "user_request"],
    ):
        """Initialize connection close event."""
        super().__init__(
            {
                "type": "bidi_connection_close",
                "connection_id": connection_id,
                "reason": reason,
            }
        )

    @property
    def connection_id(self) -> str:
        """Unique identifier for this streaming connection."""
        return cast(str, self["connection_id"])

    @property
    def reason(self) -> str:
        """Why the interruption occurred."""
        return cast(str, self["reason"])
connection_id property

Unique identifier for this streaming connection.

reason property

Why the connection was closed.

__init__(connection_id, reason)

Initialize connection close event.

Source code in strands/experimental/bidi/types/events.py
def __init__(
    self,
    connection_id: str,
    reason: Literal["client_disconnect", "timeout", "error", "complete", "user_request"],
):
    """Initialize connection close event."""
    super().__init__(
        {
            "type": "bidi_connection_close",
            "connection_id": connection_id,
            "reason": reason,
        }
    )

BidiConnectionRestartEvent

Bases: TypedEvent

Agent is restarting the model connection after timeout.

Source code in strands/experimental/bidi/types/events.py
class BidiConnectionRestartEvent(TypedEvent):
    """Agent is restarting the model connection after timeout."""

    def __init__(self, timeout_error: "BidiModelTimeoutError"):
        """Initialize.

        Args:
            timeout_error: Timeout error reported by the model.
        """
        super().__init__(
            {
                "type": "bidi_connection_restart",
                "timeout_error": timeout_error,
            }
        )

    @property
    def timeout_error(self) -> "BidiModelTimeoutError":
        """Model timeout error."""
        return cast("BidiModelTimeoutError", self["timeout_error"])
timeout_error property

Model timeout error.

__init__(timeout_error)

Initialize.

Parameters:

Name Type Description Default
timeout_error BidiModelTimeoutError

Timeout error reported by the model.

required
Source code in strands/experimental/bidi/types/events.py
def __init__(self, timeout_error: "BidiModelTimeoutError"):
    """Initialize.

    Args:
        timeout_error: Timeout error reported by the model.
    """
    super().__init__(
        {
            "type": "bidi_connection_restart",
            "timeout_error": timeout_error,
        }
    )

BidiConnectionStartEvent

Bases: TypedEvent

Streaming connection established and ready for interaction.

Parameters:

Name Type Description Default
connection_id str

Unique identifier for this streaming connection.

required
model str

Model identifier (e.g., "gpt-realtime", "gemini-2.0-flash-live").

required
Source code in strands/experimental/bidi/types/events.py
class BidiConnectionStartEvent(TypedEvent):
    """Streaming connection established and ready for interaction.

    Parameters:
        connection_id: Unique identifier for this streaming connection.
        model: Model identifier (e.g., "gpt-realtime", "gemini-2.0-flash-live").
    """

    def __init__(self, connection_id: str, model: str):
        """Initialize connection start event."""
        super().__init__(
            {
                "type": "bidi_connection_start",
                "connection_id": connection_id,
                "model": model,
            }
        )

    @property
    def connection_id(self) -> str:
        """Unique identifier for this streaming connection."""
        return cast(str, self["connection_id"])

    @property
    def model(self) -> str:
        """Model identifier (e.g., 'gpt-realtime', 'gemini-2.0-flash-live')."""
        return cast(str, self["model"])
connection_id property

Unique identifier for this streaming connection.

model property

Model identifier (e.g., 'gpt-realtime', 'gemini-2.0-flash-live').

__init__(connection_id, model)

Initialize connection start event.

Source code in strands/experimental/bidi/types/events.py
def __init__(self, connection_id: str, model: str):
    """Initialize connection start event."""
    super().__init__(
        {
            "type": "bidi_connection_start",
            "connection_id": connection_id,
            "model": model,
        }
    )
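The pattern above — a dict payload with typed property accessors — can be sketched self-contained. In this illustration a plain `dict` stands in for `TypedEvent`, which is an assumption made so the snippet runs without the library:

```python
from typing import cast


class ConnectionStartSketch(dict):
    # Plain-dict stand-in for TypedEvent, mirroring the source above.
    def __init__(self, connection_id: str, model: str):
        super().__init__(
            {
                "type": "bidi_connection_start",
                "connection_id": connection_id,
                "model": model,
            }
        )

    @property
    def connection_id(self) -> str:
        return cast(str, self["connection_id"])

    @property
    def model(self) -> str:
        return cast(str, self["model"])


event = ConnectionStartSketch(connection_id="conn-1", model="gpt-realtime")
assert event["type"] == "bidi_connection_start"  # dict-style access
assert event.model == "gpt-realtime"             # property-style access
```

Consumers can therefore dispatch on `event["type"]` while still getting typed attribute access on known event classes.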

BidiErrorEvent

Bases: TypedEvent

Error occurred during the session.

Stores the full Exception object as an instance attribute for debugging while keeping the event dict JSON-serializable. The exception can be accessed via the error property for re-raising or type-based error handling.

Parameters:

    error (Exception, required): The exception that occurred.
    details (dict[str, Any] | None, default None): Optional additional error information.
Source code in strands/experimental/bidi/types/events.py
class BidiErrorEvent(TypedEvent):
    """Error occurred during the session.

    Stores the full Exception object as an instance attribute for debugging while
    keeping the event dict JSON-serializable. The exception can be accessed via
    the `error` property for re-raising or type-based error handling.

    Parameters:
        error: The exception that occurred.
        details: Optional additional error information.
    """

    def __init__(
        self,
        error: Exception,
        details: dict[str, Any] | None = None,
    ):
        """Initialize error event."""
        # Store serializable data in dict (for JSON serialization)
        super().__init__(
            {
                "type": "bidi_error",
                "message": str(error),
                "code": type(error).__name__,
                "details": details,
            }
        )
        # Store exception as instance attribute (not serialized)
        self._error = error

    @property
    def error(self) -> Exception:
        """The original exception that occurred.

        Can be used for re-raising or type-based error handling.
        """
        return self._error

    @property
    def code(self) -> str:
        """Error code derived from exception class name."""
        return cast(str, self["code"])

    @property
    def message(self) -> str:
        """Human-readable error message from the exception."""
        return cast(str, self["message"])

    @property
    def details(self) -> dict[str, Any] | None:
        """Additional error context beyond the exception itself."""
        return cast(dict[str, Any] | None, self.get("details"))
code property

Error code derived from exception class name.

details property

Additional error context beyond the exception itself.

error property

The original exception that occurred.

Can be used for re-raising or type-based error handling.

message property

Human-readable error message from the exception.

__init__(error, details=None)

Initialize error event.

Source code in strands/experimental/bidi/types/events.py
def __init__(
    self,
    error: Exception,
    details: dict[str, Any] | None = None,
):
    """Initialize error event."""
    # Store serializable data in dict (for JSON serialization)
    super().__init__(
        {
            "type": "bidi_error",
            "message": str(error),
            "code": type(error).__name__,
            "details": details,
        }
    )
    # Store exception as instance attribute (not serialized)
    self._error = error

BidiImageInputEvent

Bases: TypedEvent

Image input event for sending images/video frames to the model.

Used for sending image data through the send() method.

Parameters:

    image (str, required): Base64-encoded image string.
    mime_type (str, required): MIME type (e.g., "image/jpeg", "image/png").
Source code in strands/experimental/bidi/types/events.py
class BidiImageInputEvent(TypedEvent):
    """Image input event for sending images/video frames to the model.

    Used for sending image data through the send() method.

    Parameters:
        image: Base64-encoded image string.
        mime_type: MIME type (e.g., "image/jpeg", "image/png").
    """

    def __init__(
        self,
        image: str,
        mime_type: str,
    ):
        """Initialize image input event."""
        super().__init__(
            {
                "type": "bidi_image_input",
                "image": image,
                "mime_type": mime_type,
            }
        )

    @property
    def image(self) -> str:
        """Base64-encoded image string."""
        return cast(str, self["image"])

    @property
    def mime_type(self) -> str:
        """MIME type of the image (e.g., "image/jpeg", "image/png")."""
        return cast(str, self["mime_type"])
image property

Base64-encoded image string.

mime_type property

MIME type of the image (e.g., "image/jpeg", "image/png").

__init__(image, mime_type)

Initialize image input event.

Source code in strands/experimental/bidi/types/events.py
def __init__(
    self,
    image: str,
    mime_type: str,
):
    """Initialize image input event."""
    super().__init__(
        {
            "type": "bidi_image_input",
            "image": image,
            "mime_type": mime_type,
        }
    )
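Since the event carries base64 text rather than raw bytes, callers encode before building the payload. A sketch with placeholder bytes (the payload shape follows the constructor above):

```python
import base64

raw = b"\x89PNG placeholder-bytes"  # illustrative stand-in for real image data
image_b64 = base64.b64encode(raw).decode("ascii")

payload = {
    "type": "bidi_image_input",
    "image": image_b64,
    "mime_type": "image/png",
}

# Round trip: decoding the field recovers the original bytes.
assert base64.b64decode(payload["image"]) == raw
```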

BidiInterruptionEvent

Bases: TypedEvent

Model generation was interrupted.

Parameters:

    reason (Literal["user_speech", "error"], required): Why the interruption occurred.
Source code in strands/experimental/bidi/types/events.py
class BidiInterruptionEvent(TypedEvent):
    """Model generation was interrupted.

    Parameters:
        reason: Why the interruption occurred.
    """

    def __init__(self, reason: Literal["user_speech", "error"]):
        """Initialize interruption event."""
        super().__init__(
            {
                "type": "bidi_interruption",
                "reason": reason,
            }
        )

    @property
    def reason(self) -> str:
        """Why the interruption occurred."""
        return cast(str, self["reason"])
reason property

Why the interruption occurred.

__init__(reason)

Initialize interruption event.

Source code in strands/experimental/bidi/types/events.py
def __init__(self, reason: Literal["user_speech", "error"]):
    """Initialize interruption event."""
    super().__init__(
        {
            "type": "bidi_interruption",
            "reason": reason,
        }
    )
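A typical consumer reacts differently to the two documented `reason` values, for example halting audio playback on barge-in. A hypothetical handler (the action names are illustrative, not part of the API):

```python
def handle_interruption(event: dict) -> str:
    # Hypothetical dispatch on the two documented reason values.
    if event.get("type") != "bidi_interruption":
        return "ignore"
    if event["reason"] == "user_speech":
        return "stop_playback"  # the user barged in; stop speaking
    return "log_error"          # reason == "error"


assert handle_interruption({"type": "bidi_interruption", "reason": "user_speech"}) == "stop_playback"
assert handle_interruption({"type": "bidi_interruption", "reason": "error"}) == "log_error"
```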

BidiResponseCompleteEvent

Bases: TypedEvent

Model finished generating response.

Parameters:

    response_id (str, required): ID of the response that completed (matches response.start).
    stop_reason (StopReason, required): Why the response ended.
Source code in strands/experimental/bidi/types/events.py
class BidiResponseCompleteEvent(TypedEvent):
    """Model finished generating response.

    Parameters:
        response_id: ID of the response that completed (matches response.start).
        stop_reason: Why the response ended.
    """

    def __init__(
        self,
        response_id: str,
        stop_reason: StopReason,
    ):
        """Initialize response complete event."""
        super().__init__(
            {
                "type": "bidi_response_complete",
                "response_id": response_id,
                "stop_reason": stop_reason,
            }
        )

    @property
    def response_id(self) -> str:
        """Unique identifier for this response."""
        return cast(str, self["response_id"])

    @property
    def stop_reason(self) -> StopReason:
        """Why the response ended."""
        return cast(StopReason, self["stop_reason"])
response_id property

Unique identifier for this response.

stop_reason property

Why the response ended.

__init__(response_id, stop_reason)

Initialize response complete event.

Source code in strands/experimental/bidi/types/events.py
def __init__(
    self,
    response_id: str,
    stop_reason: StopReason,
):
    """Initialize response complete event."""
    super().__init__(
        {
            "type": "bidi_response_complete",
            "response_id": response_id,
            "stop_reason": stop_reason,
        }
    )
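Because `response_id` matches the id carried by the corresponding response-start event, consumers can pair the two. A sketch over plain dict payloads (the `"end_turn"` stop reason is illustrative):

```python
# Track open responses by pairing start/complete events on response_id.
events = [
    {"type": "bidi_response_start", "response_id": "r-1"},
    {"type": "bidi_response_complete", "response_id": "r-1", "stop_reason": "end_turn"},
]

open_responses: set[str] = set()
for ev in events:
    if ev["type"] == "bidi_response_start":
        open_responses.add(ev["response_id"])
    elif ev["type"] == "bidi_response_complete":
        open_responses.discard(ev["response_id"])

assert not open_responses  # every started response has completed
```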

BidiResponseStartEvent

Bases: TypedEvent

Model starts generating a response.

Parameters:

    response_id (str, required): Unique identifier for this response (used in response.complete).
Source code in strands/experimental/bidi/types/events.py
class BidiResponseStartEvent(TypedEvent):
    """Model starts generating a response.

    Parameters:
        response_id: Unique identifier for this response (used in response.complete).
    """

    def __init__(self, response_id: str):
        """Initialize response start event."""
        super().__init__({"type": "bidi_response_start", "response_id": response_id})

    @property
    def response_id(self) -> str:
        """Unique identifier for this response."""
        return cast(str, self["response_id"])
response_id property

Unique identifier for this response.

__init__(response_id)

Initialize response start event.

Source code in strands/experimental/bidi/types/events.py
def __init__(self, response_id: str):
    """Initialize response start event."""
    super().__init__({"type": "bidi_response_start", "response_id": response_id})

BidiTextInputEvent

Bases: TypedEvent

Text input event for sending text to the model.

Used for sending text content through the send() method.

Parameters:

    text (str, required): The text content to send to the model.
    role (Role, default "user"): The role of the message sender.
Source code in strands/experimental/bidi/types/events.py
class BidiTextInputEvent(TypedEvent):
    """Text input event for sending text to the model.

    Used for sending text content through the send() method.

    Parameters:
        text: The text content to send to the model.
        role: The role of the message sender (default: "user").
    """

    def __init__(self, text: str, role: Role = "user"):
        """Initialize text input event."""
        super().__init__(
            {
                "type": "bidi_text_input",
                "text": text,
                "role": role,
            }
        )

    @property
    def text(self) -> str:
        """The text content to send to the model."""
        return cast(str, self["text"])

    @property
    def role(self) -> Role:
        """The role of the message sender."""
        return cast(Role, self["role"])
role property

The role of the message sender.

text property

The text content to send to the model.

__init__(text, role='user')

Initialize text input event.

Source code in strands/experimental/bidi/types/events.py
def __init__(self, text: str, role: Role = "user"):
    """Initialize text input event."""
    super().__init__(
        {
            "type": "bidi_text_input",
            "text": text,
            "role": role,
        }
    )
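The role defaults to "user", so plain user text needs only one argument. A minimal stand-in (plain `dict` in place of `TypedEvent`, an assumption for illustration):

```python
from typing import cast


class TextInputSketch(dict):
    # Mirrors BidiTextInputEvent above; role defaults to "user".
    def __init__(self, text: str, role: str = "user"):
        super().__init__({"type": "bidi_text_input", "text": text, "role": role})

    @property
    def text(self) -> str:
        return cast(str, self["text"])

    @property
    def role(self) -> str:
        return cast(str, self["role"])


event = TextInputSketch("Hello!")
assert event.role == "user"    # default applied
assert event.text == "Hello!"
```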

BidiTranscriptStreamEvent

Bases: ModelStreamEvent

Audio transcription streaming (user or assistant speech).

Supports incremental transcript updates for providers that send partial transcripts before the final version.

Parameters:

    delta (ContentBlockDelta, required): The incremental transcript change.
    text (str, required): The delta text (same as the delta content, for convenience).
    role (Role, required): Who is speaking ("user" or "assistant").
    is_final (bool, required): Whether this is the final/complete transcript.
    current_transcript (str | None, default None): The accumulated transcript text so far (None for the first delta).
Source code in strands/experimental/bidi/types/events.py
class BidiTranscriptStreamEvent(ModelStreamEvent):
    """Audio transcription streaming (user or assistant speech).

    Supports incremental transcript updates for providers that send partial
    transcripts before the final version.

    Parameters:
        delta: The incremental transcript change (ContentBlockDelta).
        text: The delta text (same as delta content for convenience).
        role: Who is speaking ("user" or "assistant").
        is_final: Whether this is the final/complete transcript.
        current_transcript: The accumulated transcript text so far (None for first delta).
    """

    def __init__(
        self,
        delta: ContentBlockDelta,
        text: str,
        role: Role,
        is_final: bool,
        current_transcript: str | None = None,
    ):
        """Initialize transcript stream event."""
        super().__init__(
            {
                "type": "bidi_transcript_stream",
                "delta": delta,
                "text": text,
                "role": role,
                "is_final": is_final,
                "current_transcript": current_transcript,
            }
        )

    @property
    def delta(self) -> ContentBlockDelta:
        """The incremental transcript change."""
        return cast(ContentBlockDelta, self["delta"])

    @property
    def text(self) -> str:
        """The delta transcript text."""
        return cast(str, self["text"])

    @property
    def role(self) -> Role:
        """The role of the message sender."""
        return cast(Role, self["role"])

    @property
    def is_final(self) -> bool:
        """Whether this is the final/complete transcript."""
        return cast(bool, self["is_final"])

    @property
    def current_transcript(self) -> str | None:
        """The accumulated transcript text so far."""
        return cast(str | None, self.get("current_transcript"))
current_transcript property

The accumulated transcript text so far.

delta property

The incremental transcript change.

is_final property

Whether this is the final/complete transcript.

role property

The role of the message sender.

text property

The delta transcript text.

__init__(delta, text, role, is_final, current_transcript=None)

Initialize transcript stream event.

Source code in strands/experimental/bidi/types/events.py
def __init__(
    self,
    delta: ContentBlockDelta,
    text: str,
    role: Role,
    is_final: bool,
    current_transcript: str | None = None,
):
    """Initialize transcript stream event."""
    super().__init__(
        {
            "type": "bidi_transcript_stream",
            "delta": delta,
            "text": text,
            "role": role,
            "is_final": is_final,
            "current_transcript": current_transcript,
        }
    )
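Providers that emit partial transcripts send several non-final deltas before the final one, with `is_final` marking the last. A hypothetical delta stream, accumulated the way a consumer might:

```python
# Hypothetical partial-transcript stream for the utterance "Hello!".
deltas = [
    {"type": "bidi_transcript_stream", "text": "Hel", "role": "user",
     "is_final": False, "current_transcript": None},
    {"type": "bidi_transcript_stream", "text": "lo", "role": "user",
     "is_final": False, "current_transcript": "Hel"},
    {"type": "bidi_transcript_stream", "text": "!", "role": "user",
     "is_final": True, "current_transcript": "Hello"},
]

transcript = ""
for ev in deltas:
    transcript += ev["text"]  # each event carries only the delta text
    if ev["is_final"]:
        break

assert transcript == "Hello!"
```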

BidiUsageEvent

Bases: TypedEvent

Token usage event with modality breakdown for bidirectional streaming.

Tracks token consumption across different modalities (audio, text, images) during bidirectional streaming sessions.

Parameters:

    input_tokens (int, required): Total tokens used for all input modalities.
    output_tokens (int, required): Total tokens used for all output modalities.
    total_tokens (int, required): Sum of input and output tokens.
    modality_details (list[ModalityUsage] | None, default None): Optional list of token usage per modality.
    cache_read_input_tokens (int | None, default None): Optional tokens read from cache.
    cache_write_input_tokens (int | None, default None): Optional tokens written to cache.
Source code in strands/experimental/bidi/types/events.py
class BidiUsageEvent(TypedEvent):
    """Token usage event with modality breakdown for bidirectional streaming.

    Tracks token consumption across different modalities (audio, text, images)
    during bidirectional streaming sessions.

    Parameters:
        input_tokens: Total tokens used for all input modalities.
        output_tokens: Total tokens used for all output modalities.
        total_tokens: Sum of input and output tokens.
        modality_details: Optional list of token usage per modality.
        cache_read_input_tokens: Optional tokens read from cache.
        cache_write_input_tokens: Optional tokens written to cache.
    """

    def __init__(
        self,
        input_tokens: int,
        output_tokens: int,
        total_tokens: int,
        modality_details: list[ModalityUsage] | None = None,
        cache_read_input_tokens: int | None = None,
        cache_write_input_tokens: int | None = None,
    ):
        """Initialize usage event."""
        data: dict[str, Any] = {
            "type": "bidi_usage",
            "inputTokens": input_tokens,
            "outputTokens": output_tokens,
            "totalTokens": total_tokens,
        }
        if modality_details is not None:
            data["modality_details"] = modality_details
        if cache_read_input_tokens is not None:
            data["cacheReadInputTokens"] = cache_read_input_tokens
        if cache_write_input_tokens is not None:
            data["cacheWriteInputTokens"] = cache_write_input_tokens
        super().__init__(data)

    @property
    def input_tokens(self) -> int:
        """Total tokens used for all input modalities."""
        return cast(int, self["inputTokens"])

    @property
    def output_tokens(self) -> int:
        """Total tokens used for all output modalities."""
        return cast(int, self["outputTokens"])

    @property
    def total_tokens(self) -> int:
        """Sum of input and output tokens."""
        return cast(int, self["totalTokens"])

    @property
    def modality_details(self) -> list[ModalityUsage]:
        """Optional list of token usage per modality."""
        return cast(list[ModalityUsage], self.get("modality_details", []))

    @property
    def cache_read_input_tokens(self) -> int | None:
        """Optional tokens read from cache."""
        return cast(int | None, self.get("cacheReadInputTokens"))

    @property
    def cache_write_input_tokens(self) -> int | None:
        """Optional tokens written to cache."""
        return cast(int | None, self.get("cacheWriteInputTokens"))
cache_read_input_tokens property

Optional tokens read from cache.

cache_write_input_tokens property

Optional tokens written to cache.

input_tokens property

Total tokens used for all input modalities.

modality_details property

Optional list of token usage per modality.

output_tokens property

Total tokens used for all output modalities.

total_tokens property

Sum of input and output tokens.

__init__(input_tokens, output_tokens, total_tokens, modality_details=None, cache_read_input_tokens=None, cache_write_input_tokens=None)

Initialize usage event.

Source code in strands/experimental/bidi/types/events.py
def __init__(
    self,
    input_tokens: int,
    output_tokens: int,
    total_tokens: int,
    modality_details: list[ModalityUsage] | None = None,
    cache_read_input_tokens: int | None = None,
    cache_write_input_tokens: int | None = None,
):
    """Initialize usage event."""
    data: dict[str, Any] = {
        "type": "bidi_usage",
        "inputTokens": input_tokens,
        "outputTokens": output_tokens,
        "totalTokens": total_tokens,
    }
    if modality_details is not None:
        data["modality_details"] = modality_details
    if cache_read_input_tokens is not None:
        data["cacheReadInputTokens"] = cache_read_input_tokens
    if cache_write_input_tokens is not None:
        data["cacheWriteInputTokens"] = cache_write_input_tokens
    super().__init__(data)
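Note the key casing in the constructor: the totals use camelCase keys ("inputTokens", "outputTokens", "totalTokens") while the breakdown stays snake_case ("modality_details"). A hypothetical payload, with invented counts, showing the consistency checks a consumer might apply:

```python
# Hypothetical usage payload; all counts are invented for illustration.
usage = {
    "type": "bidi_usage",
    "inputTokens": 120,
    "outputTokens": 80,
    "totalTokens": 200,
    "modality_details": [
        {"modality": "audio", "input_tokens": 100, "output_tokens": 60},
        {"modality": "text", "input_tokens": 20, "output_tokens": 20},
    ],
}

assert usage["inputTokens"] + usage["outputTokens"] == usage["totalTokens"]
# The per-modality breakdown sums back to the totals.
assert sum(m["input_tokens"] for m in usage["modality_details"]) == usage["inputTokens"]
assert sum(m["output_tokens"] for m in usage["modality_details"]) == usage["outputTokens"]
```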

ModalityUsage

Bases: dict

Token usage for a specific modality.

Attributes:

    modality (Literal["text", "audio", "image", "cached"]): Type of content.
    input_tokens (int): Tokens used for this modality's input.
    output_tokens (int): Tokens used for this modality's output.

Source code in strands/experimental/bidi/types/events.py
class ModalityUsage(dict):
    """Token usage for a specific modality.

    Attributes:
        modality: Type of content.
        input_tokens: Tokens used for this modality's input.
        output_tokens: Tokens used for this modality's output.
    """

    modality: Literal["text", "audio", "image", "cached"]
    input_tokens: int
    output_tokens: int

strands.experimental.bidi.types.io

Protocol for bidirectional streaming IO channels.

Defines callable protocols for input and output channels that can be used with BidiAgent. This approach provides better typing and flexibility by separating input and output concerns into independent callables.

BidiInput

Bases: Protocol

Protocol for bidirectional input callables.

Input callables read data from a source (microphone, camera, websocket, etc.) and return events to be sent to the agent.

Source code in strands/experimental/bidi/types/io.py
@runtime_checkable
class BidiInput(Protocol):
    """Protocol for bidirectional input callables.

    Input callables read data from a source (microphone, camera, websocket, etc.)
    and return events to be sent to the agent.
    """

    async def start(self, agent: "BidiAgent") -> None:
        """Start input."""
        return

    async def stop(self) -> None:
        """Stop input."""
        return

    def __call__(self) -> Awaitable[BidiInputEvent]:
        """Read input data from the source.

        Returns:
            Awaitable that resolves to an input event (audio, text, image, etc.)
        """
        ...
__call__()

Read input data from the source.

Returns:

    Awaitable[BidiInputEvent]: Awaitable that resolves to an input event (audio, text, image, etc.).

Source code in strands/experimental/bidi/types/io.py
def __call__(self) -> Awaitable[BidiInputEvent]:
    """Read input data from the source.

    Returns:
        Awaitable that resolves to an input event (audio, text, image, etc.)
    """
    ...
start(agent) async

Start input.

Source code in strands/experimental/bidi/types/io.py
async def start(self, agent: "BidiAgent") -> None:
    """Start input."""
    return
stop() async

Stop input.

Source code in strands/experimental/bidi/types/io.py
async def stop(self) -> None:
    """Stop input."""
    return

BidiOutput

Bases: Protocol

Protocol for bidirectional output callables.

Output callables receive events from the agent and handle them appropriately (play audio, display text, send over websocket, etc.).

Source code in strands/experimental/bidi/types/io.py
@runtime_checkable
class BidiOutput(Protocol):
    """Protocol for bidirectional output callables.

    Output callables receive events from the agent and handle them appropriately
    (play audio, display text, send over websocket, etc.).
    """

    async def start(self, agent: "BidiAgent") -> None:
        """Start output."""
        return

    async def stop(self) -> None:
        """Stop output."""
        return

    def __call__(self, event: BidiOutputEvent) -> Awaitable[None]:
        """Process output events from the agent.

        Args:
            event: Output event from the agent (audio, text, tool calls, etc.)
        """
        ...
__call__(event)

Process output events from the agent.

Parameters:

    event (BidiOutputEvent, required): Output event from the agent (audio, text, tool calls, etc.).
Source code in strands/experimental/bidi/types/io.py
def __call__(self, event: BidiOutputEvent) -> Awaitable[None]:
    """Process output events from the agent.

    Args:
        event: Output event from the agent (audio, text, tool calls, etc.)
    """
    ...
start(agent) async

Start output.

Source code in strands/experimental/bidi/types/io.py
async def start(self, agent: "BidiAgent") -> None:
    """Start output."""
    return
stop() async

Stop output.

Source code in strands/experimental/bidi/types/io.py
async def stop(self) -> None:
    """Stop output."""
    return
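Because both protocols are plain callables with async start/stop methods, any object with matching methods satisfies them. A self-contained sketch wiring a queue-backed input to a list-backed output (the agent argument is unused here and BidiAgent itself is not imported; both simplifications are assumptions for illustration):

```python
import asyncio
from typing import Any


class QueueInput:
    # Satisfies the BidiInput shape: async start/stop plus an awaitable-returning __call__.
    def __init__(self) -> None:
        self._queue = asyncio.Queue()

    async def start(self, agent: Any) -> None:
        return

    async def stop(self) -> None:
        return

    def __call__(self):
        return self._queue.get()  # awaitable resolving to the next input event


class ListOutput:
    # Satisfies the BidiOutput shape: records every event it receives.
    def __init__(self) -> None:
        self.events = []

    async def start(self, agent: Any) -> None:
        return

    async def stop(self) -> None:
        return

    async def __call__(self, event: dict) -> None:
        self.events.append(event)


async def demo() -> list:
    source, sink = QueueInput(), ListOutput()
    source._queue.put_nowait({"type": "bidi_text_input", "text": "hi", "role": "user"})
    event = await source()  # read one event from the input channel
    await sink(event)       # hand it to the output channel
    return sink.events


recorded = asyncio.run(demo())
assert recorded[0]["text"] == "hi"
```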

strands.experimental.bidi.types.model

Model-related type definitions for bidirectional streaming.

Defines types and configurations that are central to model providers, including audio configuration that models use to specify their audio processing requirements.

AudioConfig

Bases: TypedDict

Audio configuration for bidirectional streaming models.

Defines standard audio parameters that model providers use to specify their audio processing requirements. All fields are optional to support models that may not use audio or only need specific parameters.

Model providers build this configuration by merging user-provided values with their own defaults. The resulting configuration is then used by audio I/O implementations to configure hardware appropriately.

Attributes:

    input_rate (AudioSampleRate): Input sample rate in Hz (e.g., 16000, 24000, 48000).
    output_rate (AudioSampleRate): Output sample rate in Hz (e.g., 16000, 24000, 48000).
    channels (AudioChannel): Number of audio channels (1 = mono, 2 = stereo).
    format (AudioFormat): Audio encoding format.
    voice (str): Voice identifier for text-to-speech (e.g., "alloy", "matthew").

Source code in strands/experimental/bidi/types/model.py
class AudioConfig(TypedDict, total=False):
    """Audio configuration for bidirectional streaming models.

    Defines standard audio parameters that model providers use to specify
    their audio processing requirements. All fields are optional to support
    models that may not use audio or only need specific parameters.

    Model providers build this configuration by merging user-provided values
    with their own defaults. The resulting configuration is then used by
    audio I/O implementations to configure hardware appropriately.

    Attributes:
        input_rate: Input sample rate in Hz (e.g., 16000, 24000, 48000)
        output_rate: Output sample rate in Hz (e.g., 16000, 24000, 48000)
        channels: Number of audio channels (1=mono, 2=stereo)
        format: Audio encoding format
        voice: Voice identifier for text-to-speech (e.g., "alloy", "matthew")
    """

    input_rate: AudioSampleRate
    output_rate: AudioSampleRate
    channels: AudioChannel
    format: AudioFormat
    voice: str
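The merge described above — provider defaults overlaid with user-provided values — amounts to a dict merge where the user's keys win. A sketch with illustrative defaults (the actual provider defaults are not specified here):

```python
# Illustrative provider defaults; real providers supply their own values.
PROVIDER_DEFAULTS = {
    "input_rate": 16000,
    "output_rate": 24000,
    "channels": 1,
    "format": "pcm",     # placeholder format name
    "voice": "matthew",
}


def resolve_audio_config(user_config: dict) -> dict:
    # The later mapping wins, so user keys override provider defaults.
    return {**PROVIDER_DEFAULTS, **user_config}


config = resolve_audio_config({"output_rate": 48000})
assert config["output_rate"] == 48000  # user override applied
assert config["channels"] == 1         # provider default retained
```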

strands.experimental.hooks

Experimental hook functionality that has not yet reached stability.

strands.experimental.hooks.events

Experimental hook events emitted as part of invoking Agents and BidiAgents.

This module defines the events that are emitted as Agents and BidiAgents run through the lifecycle of a request.

BidiAfterConnectionRestartEvent dataclass

Bases: BidiHookEvent

Event emitted after agent attempts to restart model connection after timeout.

Attributes:

    exception (Exception | None): Populated if an exception was raised during connection restart; None means the restart was successful.

Source code in strands/experimental/hooks/events.py
@dataclass
class BidiAfterConnectionRestartEvent(BidiHookEvent):
    """Event emitted after agent attempts to restart model connection after timeout.

    Attributes:
        exception: Populated if exception was raised during connection restart.
            None value means the restart was successful.
    """

    exception: Exception | None = None

BidiAfterInvocationEvent dataclass

Bases: BidiHookEvent

Event triggered when BidiAgent ends a streaming session.

This event is fired after the BidiAgent has completed a streaming session, regardless of whether it completed successfully or encountered an error. Hook providers can use this event for cleanup, logging, or state persistence.

Note: This event uses reverse callback ordering, meaning callbacks registered later will be invoked first during cleanup.

This event is triggered at the end of agent.stop().

Source code in strands/experimental/hooks/events.py
@dataclass
class BidiAfterInvocationEvent(BidiHookEvent):
    """Event triggered when BidiAgent ends a streaming session.

    This event is fired after the BidiAgent has completed a streaming session,
    regardless of whether it completed successfully or encountered an error.
    Hook providers can use this event for cleanup, logging, or state persistence.

    Note: This event uses reverse callback ordering, meaning callbacks registered
    later will be invoked first during cleanup.

    This event is triggered at the end of agent.stop().
    """

    @property
    def should_reverse_callbacks(self) -> bool:
        """True to invoke callbacks in reverse order."""
        return True
should_reverse_callbacks property

True to invoke callbacks in reverse order.

BidiAfterToolCallEvent dataclass

Bases: BidiHookEvent

Event triggered after BidiAgent executes a tool.

This event is fired after the BidiAgent has finished executing a tool during a streaming session, regardless of whether the execution was successful or resulted in an error. Hook providers can use this event for cleanup, logging, or post-processing.

Note: This event uses reverse callback ordering, meaning callbacks registered later will be invoked first during cleanup.

Attributes:

    selected_tool (AgentTool | None): The tool that was invoked. May be None if tool lookup failed.
    tool_use (ToolUse): The tool parameters that were passed to the tool invoked.
    invocation_state (dict[str, Any]): Keyword arguments that were passed to the tool.
    result (ToolResult): The result of the tool invocation: a ToolResult on success, or an Exception if the tool execution failed.
    exception (Exception | None): Exception if the tool execution failed; None if successful.
    cancel_message (str | None): The cancellation message if the user cancelled the tool call.

Source code in strands/experimental/hooks/events.py
@dataclass
class BidiAfterToolCallEvent(BidiHookEvent):
    """Event triggered after BidiAgent executes a tool.

    This event is fired after the BidiAgent has finished executing a tool during
    a streaming session, regardless of whether the execution was successful or
    resulted in an error. Hook providers can use this event for cleanup, logging,
    or post-processing.

    Note: This event uses reverse callback ordering, meaning callbacks registered
    later will be invoked first during cleanup.

    Attributes:
        selected_tool: The tool that was invoked. It may be None if tool lookup failed.
        tool_use: The tool parameters that were passed to the tool invoked.
        invocation_state: Keyword arguments that were passed to the tool.
        result: The result of the tool invocation. Either a ToolResult on success
            or an Exception if the tool execution failed.
        exception: Exception if the tool execution failed, None if successful.
        cancel_message: The cancellation message if the user cancelled the tool call.
    """

    selected_tool: AgentTool | None
    tool_use: ToolUse
    invocation_state: dict[str, Any]
    result: ToolResult
    exception: Exception | None = None
    cancel_message: str | None = None

    def _can_write(self, name: str) -> bool:
        return name == "result"

    @property
    def should_reverse_callbacks(self) -> bool:
        """True to invoke callbacks in reverse order."""
        return True
should_reverse_callbacks property

True to invoke callbacks in reverse order.

BidiAgentInitializedEvent dataclass

Bases: BidiHookEvent

Event triggered when a BidiAgent has finished initialization.

This event is fired after the BidiAgent has been fully constructed and all built-in components have been initialized. Hook providers can use this event to perform setup tasks that require a fully initialized agent.

Source code in strands/experimental/hooks/events.py
@dataclass
class BidiAgentInitializedEvent(BidiHookEvent):
    """Event triggered when a BidiAgent has finished initialization.

    This event is fired after the BidiAgent has been fully constructed and all
    built-in components have been initialized. Hook providers can use this
    event to perform setup tasks that require a fully initialized agent.
    """

    pass

BidiBeforeConnectionRestartEvent dataclass

Bases: BidiHookEvent

Event emitted before agent attempts to restart model connection after timeout.

Attributes:

Name Type Description
timeout_error BidiModelTimeoutError

Timeout error reported by the model.

Source code in strands/experimental/hooks/events.py
@dataclass
class BidiBeforeConnectionRestartEvent(BidiHookEvent):
    """Event emitted before agent attempts to restart model connection after timeout.

    Attributes:
        timeout_error: Timeout error reported by the model.
    """

    timeout_error: "BidiModelTimeoutError"

BidiBeforeInvocationEvent dataclass

Bases: BidiHookEvent

Event triggered when BidiAgent starts a streaming session.

This event is fired before the BidiAgent begins a streaming session, before any model connection or audio processing occurs. Hook providers can use this event to perform session-level setup, logging, or validation.

This event is triggered at the beginning of agent.start().

Source code in strands/experimental/hooks/events.py
@dataclass
class BidiBeforeInvocationEvent(BidiHookEvent):
    """Event triggered when BidiAgent starts a streaming session.

    This event is fired before the BidiAgent begins a streaming session,
    before any model connection or audio processing occurs. Hook providers can
    use this event to perform session-level setup, logging, or validation.

    This event is triggered at the beginning of agent.start().
    """

    pass

BidiBeforeToolCallEvent dataclass

Bases: BidiHookEvent

Event triggered before BidiAgent executes a tool.

This event is fired just before the BidiAgent executes a tool during a streaming session, allowing hook providers to inspect, modify, or replace the tool that will be executed. The selected_tool can be modified by hook callbacks to change which tool gets executed.

Attributes:

Name Type Description
selected_tool AgentTool | None

The tool that will be invoked. Can be modified by hooks to change which tool gets executed. This may be None if tool lookup failed.

tool_use ToolUse

The tool parameters that will be passed to selected_tool.

invocation_state dict[str, Any]

Keyword arguments that will be passed to the tool.

cancel_tool bool | str

A user-defined message that, when set, cancels the tool call. The message is placed into a tool result with an error status. If set to True, Strands cancels the tool call with a default cancel message.

Source code in strands/experimental/hooks/events.py
@dataclass
class BidiBeforeToolCallEvent(BidiHookEvent):
    """Event triggered before BidiAgent executes a tool.

    This event is fired just before the BidiAgent executes a tool during a streaming
    session, allowing hook providers to inspect, modify, or replace the tool that
    will be executed. The selected_tool can be modified by hook callbacks to change
    which tool gets executed.

    Attributes:
        selected_tool: The tool that will be invoked. Can be modified by hooks
            to change which tool gets executed. This may be None if tool lookup failed.
        tool_use: The tool parameters that will be passed to selected_tool.
        invocation_state: Keyword arguments that will be passed to the tool.
        cancel_tool: A user defined message that when set, will cancel the tool call.
            The message will be placed into a tool result with an error status. If set to `True`, Strands will cancel
            the tool call and use a default cancel message.
    """

    selected_tool: AgentTool | None
    tool_use: ToolUse
    invocation_state: dict[str, Any]
    cancel_tool: bool | str = False

    def _can_write(self, name: str) -> bool:
        return name in ["cancel_tool", "selected_tool", "tool_use"]

BidiHookEvent dataclass

Bases: BaseHookEvent

Base class for BidiAgent hook events.

Attributes:

Name Type Description
agent BidiAgent

The BidiAgent instance that triggered this event.

Source code in strands/experimental/hooks/events.py
@dataclass
class BidiHookEvent(BaseHookEvent):
    """Base class for BidiAgent hook events.

    Attributes:
        agent: The BidiAgent instance that triggered this event.
    """

    agent: "BidiAgent"

BidiInterruptionEvent dataclass

Bases: BidiHookEvent

Event triggered when model generation is interrupted.

This event is fired when the user interrupts the assistant (e.g., by speaking during the assistant's response) or when an error causes interruption. This is specific to bidirectional streaming and doesn't exist in standard agents.

Hook providers can use this event to log interruptions, implement custom interruption handling, or trigger cleanup logic.

Attributes:

Name Type Description
reason Literal['user_speech', 'error']

The reason for the interruption ("user_speech" or "error").

interrupted_response_id str | None

Optional ID of the response that was interrupted.

Source code in strands/experimental/hooks/events.py
@dataclass
class BidiInterruptionEvent(BidiHookEvent):
    """Event triggered when model generation is interrupted.

    This event is fired when the user interrupts the assistant (e.g., by speaking
    during the assistant's response) or when an error causes interruption. This is
    specific to bidirectional streaming and doesn't exist in standard agents.

    Hook providers can use this event to log interruptions, implement custom
    interruption handling, or trigger cleanup logic.

    Attributes:
        reason: The reason for the interruption ("user_speech" or "error").
        interrupted_response_id: Optional ID of the response that was interrupted.
    """

    reason: Literal["user_speech", "error"]
    interrupted_response_id: str | None = None

BidiMessageAddedEvent dataclass

Bases: BidiHookEvent

Event triggered when BidiAgent adds a message to the conversation.

This event is fired whenever the BidiAgent adds a new message to its internal message history, including user messages (from transcripts), assistant responses, and tool results. Hook providers can use this event for logging, monitoring, or implementing custom message processing logic.

Note: This event is only triggered for messages added by the framework itself, not for messages manually added by tools or external code.

Attributes:

Name Type Description
message Message

The message that was added to the conversation history.

Source code in strands/experimental/hooks/events.py
@dataclass
class BidiMessageAddedEvent(BidiHookEvent):
    """Event triggered when BidiAgent adds a message to the conversation.

    This event is fired whenever the BidiAgent adds a new message to its internal
    message history, including user messages (from transcripts), assistant responses,
    and tool results. Hook providers can use this event for logging, monitoring, or
    implementing custom message processing logic.

    Note: This event is only triggered for messages added by the framework
    itself, not for messages manually added by tools or external code.

    Attributes:
        message: The message that was added to the conversation history.
    """

    message: Message

strands.experimental.hooks.multiagent.events

Multi-agent execution lifecycle events for hook system integration.

These events are fired by orchestrators (Graph/Swarm) at key points so hooks can persist, monitor, or debug execution. No intermediate state model is used—hooks read from the orchestrator directly.

AfterMultiAgentInvocationEvent dataclass

Bases: BaseHookEvent

Event triggered after orchestrator execution completes.

Attributes:

Name Type Description
source MultiAgentBase

The multi-agent orchestrator instance

invocation_state dict[str, Any] | None

Configuration that user passes in

Source code in strands/experimental/hooks/multiagent/events.py
@dataclass
class AfterMultiAgentInvocationEvent(BaseHookEvent):
    """Event triggered after orchestrator execution completes.

    Attributes:
        source: The multi-agent orchestrator instance
        invocation_state: Configuration that user passes in
    """

    source: "MultiAgentBase"
    invocation_state: dict[str, Any] | None = None

    @property
    def should_reverse_callbacks(self) -> bool:
        """True to invoke callbacks in reverse order."""
        return True
should_reverse_callbacks property

True to invoke callbacks in reverse order.

AfterNodeCallEvent dataclass

Bases: BaseHookEvent

Event triggered after individual node execution completes.

Attributes:

Name Type Description
source MultiAgentBase

The multi-agent orchestrator instance

node_id str

ID of the node that just completed execution

invocation_state dict[str, Any] | None

Configuration that user passes in

Source code in strands/experimental/hooks/multiagent/events.py
@dataclass
class AfterNodeCallEvent(BaseHookEvent):
    """Event triggered after individual node execution completes.

    Attributes:
        source: The multi-agent orchestrator instance
        node_id: ID of the node that just completed execution
        invocation_state: Configuration that user passes in
    """

    source: "MultiAgentBase"
    node_id: str
    invocation_state: dict[str, Any] | None = None

    @property
    def should_reverse_callbacks(self) -> bool:
        """True to invoke callbacks in reverse order."""
        return True
should_reverse_callbacks property

True to invoke callbacks in reverse order.

BeforeMultiAgentInvocationEvent dataclass

Bases: BaseHookEvent

Event triggered before orchestrator execution starts.

Attributes:

Name Type Description
source MultiAgentBase

The multi-agent orchestrator instance

invocation_state dict[str, Any] | None

Configuration that user passes in

Source code in strands/experimental/hooks/multiagent/events.py
@dataclass
class BeforeMultiAgentInvocationEvent(BaseHookEvent):
    """Event triggered before orchestrator execution starts.

    Attributes:
        source: The multi-agent orchestrator instance
        invocation_state: Configuration that user passes in
    """

    source: "MultiAgentBase"
    invocation_state: dict[str, Any] | None = None

BeforeNodeCallEvent dataclass

Bases: BaseHookEvent, _Interruptible

Event triggered before individual node execution starts.

Attributes:

Name Type Description
source MultiAgentBase

The multi-agent orchestrator instance

node_id str

ID of the node about to execute

invocation_state dict[str, Any] | None

Configuration that user passes in

cancel_node bool | str

A user-defined message that, when set, cancels the node execution with status FAILED. The message is emitted under a MultiAgentNodeCancel event. If set to True, Strands cancels the node with a default cancel message.

Source code in strands/experimental/hooks/multiagent/events.py
@dataclass
class BeforeNodeCallEvent(BaseHookEvent, _Interruptible):
    """Event triggered before individual node execution starts.

    Attributes:
        source: The multi-agent orchestrator instance
        node_id: ID of the node about to execute
        invocation_state: Configuration that user passes in
        cancel_node: A user defined message that when set, will cancel the node execution with status FAILED.
            The message will be emitted under a MultiAgentNodeCancel event. If set to `True`, Strands will cancel the
            node using a default cancel message.
    """

    source: "MultiAgentBase"
    node_id: str
    invocation_state: dict[str, Any] | None = None
    cancel_node: bool | str = False

    def _can_write(self, name: str) -> bool:
        return name in ["cancel_node"]

    @override
    def _interrupt_id(self, name: str) -> str:
        """Unique id for the interrupt.

        Args:
            name: User defined name for the interrupt.

        Returns:
            Interrupt id.
        """
        node_id = uuid.uuid5(uuid.NAMESPACE_OID, self.node_id)
        call_id = uuid.uuid5(uuid.NAMESPACE_OID, name)
        return f"v1:before_node_call:{node_id}:{call_id}"

MultiAgentInitializedEvent dataclass

Bases: BaseHookEvent

Event triggered when a multi-agent orchestrator has been initialized.

Attributes:

Name Type Description
source MultiAgentBase

The multi-agent orchestrator instance

invocation_state dict[str, Any] | None

Configuration that user passes in

Source code in strands/experimental/hooks/multiagent/events.py
@dataclass
class MultiAgentInitializedEvent(BaseHookEvent):
    """Event triggered when multi-agent orchestrator initialized.

    Attributes:
        source: The multi-agent orchestrator instance
        invocation_state: Configuration that user passes in
    """

    source: "MultiAgentBase"
    invocation_state: dict[str, Any] | None = None

strands.experimental.steering

Steering system for Strands agents.

Provides contextual guidance for agents through modular prompting with progressive disclosure. Instead of front-loading all instructions, steering handlers provide just-in-time feedback based on local context data populated by context callbacks.

Core components:

  • SteeringHandler: Base class for guidance logic with local context
  • SteeringContextCallback: Protocol for context update functions
  • SteeringContextProvider: Protocol for multi-event context providers
  • SteeringAction: Proceed/Guide/Interrupt decisions
Usage

handler = LLMSteeringHandler(system_prompt="...")
agent = Agent(tools=[...], hooks=[handler])

strands.experimental.steering.context_providers.ledger_provider

Ledger context provider for comprehensive agent activity tracking.

Tracks complete agent activity ledger including tool calls, conversation history, and timing information. This comprehensive audit trail enables steering handlers to make informed guidance decisions based on agent behavior patterns and history.

Data captured:

- Tool call history with inputs, outputs, timing, success/failure
- Conversation messages and agent responses
- Session metadata and timing information
- Error patterns and recovery attempts
Usage

Use as context provider functions or mix into steering handlers.

LedgerAfterToolCall

Bases: SteeringContextCallback[AfterToolCallEvent]

Context provider for ledger tracking after tool calls.

Source code in strands/experimental/steering/context_providers/ledger_provider.py
class LedgerAfterToolCall(SteeringContextCallback[AfterToolCallEvent]):
    """Context provider for ledger tracking after tool calls."""

    def __call__(self, event: AfterToolCallEvent, steering_context: SteeringContext, **kwargs: Any) -> None:
        """Update ledger after tool call."""
        ledger = steering_context.data.get("ledger") or {}

        if ledger.get("tool_calls"):
            last_call = ledger["tool_calls"][-1]
            last_call.update(
                {
                    "completion_timestamp": datetime.now().isoformat(),
                    "status": event.result["status"],
                    "result": event.result["content"],
                    "error": str(event.exception) if event.exception else None,
                }
            )
            steering_context.data.set("ledger", ledger)
__call__(event, steering_context, **kwargs)

Update ledger after tool call.

Source code in strands/experimental/steering/context_providers/ledger_provider.py
def __call__(self, event: AfterToolCallEvent, steering_context: SteeringContext, **kwargs: Any) -> None:
    """Update ledger after tool call."""
    ledger = steering_context.data.get("ledger") or {}

    if ledger.get("tool_calls"):
        last_call = ledger["tool_calls"][-1]
        last_call.update(
            {
                "completion_timestamp": datetime.now().isoformat(),
                "status": event.result["status"],
                "result": event.result["content"],
                "error": str(event.exception) if event.exception else None,
            }
        )
        steering_context.data.set("ledger", ledger)

LedgerBeforeToolCall

Bases: SteeringContextCallback[BeforeToolCallEvent]

Context provider for ledger tracking before tool calls.

Source code in strands/experimental/steering/context_providers/ledger_provider.py
class LedgerBeforeToolCall(SteeringContextCallback[BeforeToolCallEvent]):
    """Context provider for ledger tracking before tool calls."""

    def __init__(self) -> None:
        """Initialize the ledger provider."""
        self.session_start = datetime.now().isoformat()

    def __call__(self, event: BeforeToolCallEvent, steering_context: SteeringContext, **kwargs: Any) -> None:
        """Update ledger before tool call."""
        ledger = steering_context.data.get("ledger") or {}

        if not ledger:
            ledger = {
                "session_start": self.session_start,
                "tool_calls": [],
                "conversation_history": [],
                "session_metadata": {},
            }

        tool_call_entry = {
            "timestamp": datetime.now().isoformat(),
            "tool_name": event.tool_use.get("name"),
            "tool_args": event.tool_use.get("arguments", {}),
            "status": "pending",
        }
        ledger["tool_calls"].append(tool_call_entry)
        steering_context.data.set("ledger", ledger)
__call__(event, steering_context, **kwargs)

Update ledger before tool call.

Source code in strands/experimental/steering/context_providers/ledger_provider.py
def __call__(self, event: BeforeToolCallEvent, steering_context: SteeringContext, **kwargs: Any) -> None:
    """Update ledger before tool call."""
    ledger = steering_context.data.get("ledger") or {}

    if not ledger:
        ledger = {
            "session_start": self.session_start,
            "tool_calls": [],
            "conversation_history": [],
            "session_metadata": {},
        }

    tool_call_entry = {
        "timestamp": datetime.now().isoformat(),
        "tool_name": event.tool_use.get("name"),
        "tool_args": event.tool_use.get("arguments", {}),
        "status": "pending",
    }
    ledger["tool_calls"].append(tool_call_entry)
    steering_context.data.set("ledger", ledger)
__init__()

Initialize the ledger provider.

Source code in strands/experimental/steering/context_providers/ledger_provider.py
def __init__(self) -> None:
    """Initialize the ledger provider."""
    self.session_start = datetime.now().isoformat()

LedgerProvider

Bases: SteeringContextProvider

Combined ledger context provider for both before and after tool calls.

Source code in strands/experimental/steering/context_providers/ledger_provider.py
class LedgerProvider(SteeringContextProvider):
    """Combined ledger context provider for both before and after tool calls."""

    def context_providers(self, **kwargs: Any) -> list[SteeringContextCallback]:
        """Return ledger context providers with shared state."""
        return [
            LedgerBeforeToolCall(),
            LedgerAfterToolCall(),
        ]
context_providers(**kwargs)

Return ledger context providers with shared state.

Source code in strands/experimental/steering/context_providers/ledger_provider.py
def context_providers(self, **kwargs: Any) -> list[SteeringContextCallback]:
    """Return ledger context providers with shared state."""
    return [
        LedgerBeforeToolCall(),
        LedgerAfterToolCall(),
    ]

strands.experimental.steering.core

Core steering system interfaces and base classes.

strands.experimental.steering.core.action

SteeringAction types for steering evaluation results.

Defines structured outcomes from steering handlers that determine how tool calls should be handled. SteeringActions enable modular prompting by providing just-in-time feedback rather than front-loading all instructions in monolithic prompts.

Flow

SteeringHandler.steer()  →      SteeringAction   →  BeforeToolCallEvent handling
         ↓                           ↓                          ↓
  Evaluate context              Action type            Tool execution modified

SteeringAction types

Proceed: Tool executes immediately (no intervention needed)
Guide: Tool cancelled, agent receives contextual feedback to explore alternatives
Interrupt: Tool execution paused for human input via interrupt system

Extensibility

New action types can be added to the union. Always handle the default case in pattern matching to maintain backward compatibility.

Guide

Bases: BaseModel

Cancel tool and provide contextual feedback for agent to explore alternatives.

The tool call is cancelled and the agent receives the reason as contextual feedback to help them consider alternative approaches while maintaining adaptive reasoning capabilities.

Source code in strands/experimental/steering/core/action.py
class Guide(BaseModel):
    """Cancel tool and provide contextual feedback for agent to explore alternatives.

    The tool call is cancelled and the agent receives the reason as contextual
    feedback to help them consider alternative approaches while maintaining
    adaptive reasoning capabilities.
    """

    type: Literal["guide"] = "guide"
    reason: str

Interrupt

Bases: BaseModel

Pause tool execution for human input via interrupt system.

The tool call is paused and human input is requested through Strands' interrupt system. The human can approve or deny the operation, and their decision determines whether the tool executes or is cancelled.

Source code in strands/experimental/steering/core/action.py
class Interrupt(BaseModel):
    """Pause tool execution for human input via interrupt system.

    The tool call is paused and human input is requested through Strands'
    interrupt system. The human can approve or deny the operation, and their
    decision determines whether the tool executes or is cancelled.
    """

    type: Literal["interrupt"] = "interrupt"
    reason: str

Proceed

Bases: BaseModel

Allow tool to execute immediately without intervention.

The tool call proceeds as planned. The reason provides context for logging and debugging purposes.

Source code in strands/experimental/steering/core/action.py
class Proceed(BaseModel):
    """Allow tool to execute immediately without intervention.

    The tool call proceeds as planned. The reason provides context
    for logging and debugging purposes.
    """

    type: Literal["proceed"] = "proceed"
    reason: str

strands.experimental.steering.core.context

Steering context protocols for contextual guidance.

Defines protocols for context callbacks and providers that populate steering context data used by handlers to make guidance decisions.

Architecture

SteeringContextCallback  →  Handler.steering_context  →  SteeringHandler.steer()
          ↓                            ↓                            ↓
  Update local context          Store in handler       Access via self.steering_context

Context lifecycle
  1. Handler registers context callbacks for hook events
  2. Callbacks update handler's local steering_context on events
  3. Handler accesses self.steering_context in steer() method
  4. Context persists across calls within handler instance
Implementation

Each handler maintains its own JSONSerializableDict context. Callbacks are registered per handler instance for isolation. Providers can supply multiple callbacks for different events.

SteeringContext dataclass

Container for steering context data.

Source code in strands/experimental/steering/core/context.py
@dataclass
class SteeringContext:
    """Container for steering context data.

    This class should not be instantiated directly - it is intended for internal use only.
    """

    data: JSONSerializableDict = field(default_factory=JSONSerializableDict)

SteeringContextCallback

Bases: ABC, Generic[EventType]

Abstract base class for steering context update callbacks.

Source code in strands/experimental/steering/core/context.py
class SteeringContextCallback(ABC, Generic[EventType]):
    """Abstract base class for steering context update callbacks."""

    @property
    def event_type(self) -> type[HookEvent]:
        """Return the event type this callback handles."""
        for base in getattr(self.__class__, "__orig_bases__", ()):
            if get_origin(base) is SteeringContextCallback:
                return cast(type[HookEvent], get_args(base)[0])
        raise ValueError("Could not determine event type from generic parameter")

    def __call__(self, event: EventType, steering_context: "SteeringContext", **kwargs: Any) -> None:
        """Update steering context based on hook event.

        Args:
            event: The hook event that triggered the callback
            steering_context: The steering context to update
            **kwargs: Additional keyword arguments for context updates
        """
        ...
event_type property

Return the event type this callback handles.

__call__(event, steering_context, **kwargs)

Update steering context based on hook event.

Parameters:

Name Type Description Default
event EventType

The hook event that triggered the callback

required
steering_context SteeringContext

The steering context to update

required
**kwargs Any

Additional keyword arguments for context updates

{}
Source code in strands/experimental/steering/core/context.py
def __call__(self, event: EventType, steering_context: "SteeringContext", **kwargs: Any) -> None:
    """Update steering context based on hook event.

    Args:
        event: The hook event that triggered the callback
        steering_context: The steering context to update
        **kwargs: Additional keyword arguments for context updates
    """
    ...

SteeringContextProvider

Bases: ABC

Abstract base class for context providers that handle multiple event types.

Source code in strands/experimental/steering/core/context.py
class SteeringContextProvider(ABC):
    """Abstract base class for context providers that handle multiple event types."""

    @abstractmethod
    def context_providers(self, **kwargs: Any) -> list[SteeringContextCallback]:
        """Return list of context callbacks with event types extracted from generics."""
        ...
context_providers(**kwargs) abstractmethod

Return list of context callbacks with event types extracted from generics.

Source code in strands/experimental/steering/core/context.py
@abstractmethod
def context_providers(self, **kwargs: Any) -> list[SteeringContextCallback]:
    """Return list of context callbacks with event types extracted from generics."""
    ...

strands.experimental.steering.core.handler

Steering handler base class for providing contextual guidance to agents.

Provides modular prompting through contextual guidance that appears when relevant, rather than front-loading all instructions. Handlers integrate with the Strands hook system to intercept tool calls and provide just-in-time feedback based on local context.

Architecture

BeforeToolCallEvent → Context Callbacks → Update steering_context →    steer()    → SteeringAction
        ↓                    ↓                      ↓                     ↓               ↓
  Hook triggered      Populate context      Handler evaluates      Handler decides   Action taken

Lifecycle
  1. Context callbacks update handler's steering_context on hook events
  2. BeforeToolCallEvent triggers steering evaluation via steer() method
  3. Handler accesses self.steering_context for guidance decisions
  4. SteeringAction determines tool execution: Proceed/Guide/Interrupt
Implementation

Subclass SteeringHandler and implement steer() method. Pass context_callbacks in constructor to register context update functions. Each handler maintains isolated steering_context that persists across calls.

SteeringAction handling

Proceed: Tool executes immediately
Guide: Tool cancelled, agent receives contextual feedback to explore alternatives
Interrupt: Tool execution paused for human input via interrupt system
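As a sketch of the decision logic a subclass's steer() method might implement, the function below guides the agent away from calling the same tool repeatedly. Lightweight dataclass stand-ins replace the pydantic Proceed/Guide models and the ledger shape follows the LedgerProvider section above, so the sketch runs on its own; a real handler would subclass SteeringHandler and return the models from strands.experimental.steering.

```python
import asyncio
from dataclasses import dataclass

# Stand-ins for the pydantic action models, so this sketch is self-contained.
@dataclass
class Proceed:
    reason: str

@dataclass
class Guide:
    reason: str

async def steer(tool_use: dict, context_data: dict):
    """Guide the agent away from hammering the same tool repeatedly."""
    ledger = context_data.get("ledger", {})
    # Look at the last three recorded tool calls in the ledger.
    recent = [c["tool_name"] for c in ledger.get("tool_calls", [])[-3:]]
    if recent.count(tool_use.get("name")) >= 3:
        return Guide(reason="Same tool called three times in a row; try another approach.")
    return Proceed(reason="No repeated-call pattern detected.")
```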

SteeringHandler

Bases: HookProvider, ABC

Base class for steering handlers that provide contextual guidance to agents.

Steering handlers maintain local context and register hook callbacks to populate context data as needed for guidance decisions.

Source code in strands/experimental/steering/core/handler.py
class SteeringHandler(HookProvider, ABC):
    """Base class for steering handlers that provide contextual guidance to agents.

    Steering handlers maintain local context and register hook callbacks
    to populate context data as needed for guidance decisions.
    """

    def __init__(self, context_providers: list[SteeringContextProvider] | None = None):
        """Initialize the steering handler.

        Args:
            context_providers: List of context providers for context updates
        """
        super().__init__()
        self.steering_context = SteeringContext()
        self._context_callbacks = []

        # Collect callbacks from all providers
        for provider in context_providers or []:
            self._context_callbacks.extend(provider.context_providers())

        logger.debug("handler_class=<%s> | initialized", self.__class__.__name__)

    def register_hooks(self, registry: HookRegistry, **kwargs: Any) -> None:
        """Register hooks for steering guidance and context updates."""
        # Register context update callbacks
        for callback in self._context_callbacks:
            registry.add_callback(
                callback.event_type, lambda event, callback=callback: callback(event, self.steering_context)
            )

        # Register steering guidance
        registry.add_callback(BeforeToolCallEvent, self._provide_steering_guidance)

    async def _provide_steering_guidance(self, event: BeforeToolCallEvent) -> None:
        """Provide steering guidance for tool call."""
        tool_name = event.tool_use["name"]
        logger.debug("tool_name=<%s> | providing steering guidance", tool_name)

        try:
            action = await self.steer(event.agent, event.tool_use)
        except Exception as e:
            logger.debug("tool_name=<%s>, error=<%s> | steering handler guidance failed", tool_name, e)
            return

        self._handle_steering_action(action, event, tool_name)

    def _handle_steering_action(self, action: SteeringAction, event: BeforeToolCallEvent, tool_name: str) -> None:
        """Handle the steering action by modifying tool execution flow.

        Proceed: Tool executes normally
        Guide: Tool cancelled with contextual feedback for agent to consider alternatives
        Interrupt: Tool execution paused for human input via interrupt system
        """
        if isinstance(action, Proceed):
            logger.debug("tool_name=<%s> | tool call proceeding", tool_name)
        elif isinstance(action, Guide):
            logger.debug("tool_name=<%s> | tool call guided: %s", tool_name, action.reason)
            event.cancel_tool = (
                f"Tool call cancelled given new guidance. {action.reason}. Consider this approach and continue"
            )
        elif isinstance(action, Interrupt):
            logger.debug("tool_name=<%s> | tool call requires human input: %s", tool_name, action.reason)
            can_proceed: bool = event.interrupt(name=f"steering_input_{tool_name}", reason={"message": action.reason})
            logger.debug("tool_name=<%s> | received human input for tool call", tool_name)

            if not can_proceed:
                event.cancel_tool = f"Manual approval denied: {action.reason}"
                logger.debug("tool_name=<%s> | tool call denied by manual approval", tool_name)
            else:
                logger.debug("tool_name=<%s> | tool call approved manually", tool_name)
        else:
            raise ValueError(f"Unknown steering action type: {action}")

    @abstractmethod
    async def steer(self, agent: "Agent", tool_use: ToolUse, **kwargs: Any) -> SteeringAction:
        """Provide contextual guidance to help agent navigate complex workflows.

        Args:
            agent: The agent instance
            tool_use: The tool use object with name and arguments
            **kwargs: Additional keyword arguments for guidance evaluation

        Returns:
            SteeringAction indicating how to guide the agent's next action

        Note:
            Access steering context via self.steering_context
        """
        ...
__init__(context_providers=None)

Initialize the steering handler.

Parameters:

Name Type Description Default
context_providers list[SteeringContextProvider] | None

List of context providers for context updates

None
Source code in strands/experimental/steering/core/handler.py
def __init__(self, context_providers: list[SteeringContextProvider] | None = None):
    """Initialize the steering handler.

    Args:
        context_providers: List of context providers for context updates
    """
    super().__init__()
    self.steering_context = SteeringContext()
    self._context_callbacks = []

    # Collect callbacks from all providers
    for provider in context_providers or []:
        self._context_callbacks.extend(provider.context_providers())

    logger.debug("handler_class=<%s> | initialized", self.__class__.__name__)
register_hooks(registry, **kwargs)

Register hooks for steering guidance and context updates.

Source code in strands/experimental/steering/core/handler.py
def register_hooks(self, registry: HookRegistry, **kwargs: Any) -> None:
    """Register hooks for steering guidance and context updates."""
    # Register context update callbacks
    for callback in self._context_callbacks:
        registry.add_callback(
            callback.event_type, lambda event, callback=callback: callback(event, self.steering_context)
        )

    # Register steering guidance
    registry.add_callback(BeforeToolCallEvent, self._provide_steering_guidance)
steer(agent, tool_use, **kwargs) abstractmethod async

Provide contextual guidance to help agent navigate complex workflows.

Parameters:

Name Type Description Default
agent Agent

The agent instance

required
tool_use ToolUse

The tool use object with name and arguments

required
**kwargs Any

Additional keyword arguments for guidance evaluation

{}

Returns:

Type Description
SteeringAction

SteeringAction indicating how to guide the agent's next action

Note

Access steering context via self.steering_context

Source code in strands/experimental/steering/core/handler.py
@abstractmethod
async def steer(self, agent: "Agent", tool_use: ToolUse, **kwargs: Any) -> SteeringAction:
    """Provide contextual guidance to help agent navigate complex workflows.

    Args:
        agent: The agent instance
        tool_use: The tool use object with name and arguments
        **kwargs: Additional keyword arguments for guidance evaluation

    Returns:
        SteeringAction indicating how to guide the agent's next action

    Note:
        Access steering context via self.steering_context
    """
    ...

strands.experimental.steering.handlers

Steering handler implementations.

strands.experimental.steering.handlers.llm

LLM steering handler with prompt mapping.

strands.experimental.steering.handlers.llm.llm_handler

LLM-based steering handler that uses an LLM to provide contextual guidance.

LLMSteeringHandler

Bases: SteeringHandler

Steering handler that uses an LLM to provide contextual guidance.

Uses natural language prompts to evaluate tool calls and provide contextual steering guidance to help agents navigate complex workflows.

Source code in strands/experimental/steering/handlers/llm/llm_handler.py
class LLMSteeringHandler(SteeringHandler):
    """Steering handler that uses an LLM to provide contextual guidance.

    Uses natural language prompts to evaluate tool calls and provide
    contextual steering guidance to help agents navigate complex workflows.
    """

    def __init__(
        self,
        system_prompt: str,
        prompt_mapper: LLMPromptMapper | None = None,
        model: Model | None = None,
        context_providers: list[SteeringContextProvider] | None = None,
    ):
        """Initialize the LLMSteeringHandler.

        Args:
            system_prompt: System prompt defining steering guidance rules
            prompt_mapper: Custom prompt mapper for evaluation prompts
            model: Optional model override for steering evaluation
            context_providers: List of context providers for populating steering context
        """
        providers = context_providers or [LedgerProvider()]
        super().__init__(context_providers=providers)
        self.system_prompt = system_prompt
        self.prompt_mapper = prompt_mapper or DefaultPromptMapper()
        self.model = model

    async def steer(self, agent: "Agent", tool_use: ToolUse, **kwargs: Any) -> SteeringAction:
        """Provide contextual guidance for tool usage.

        Args:
            agent: The agent instance
            tool_use: The tool use object with name and arguments
            **kwargs: Additional keyword arguments for steering evaluation

        Returns:
            SteeringAction indicating how to guide the agent's next action
        """
        # Generate steering prompt
        prompt = self.prompt_mapper.create_steering_prompt(self.steering_context, tool_use=tool_use)

        # Create isolated agent for steering evaluation (no shared conversation state)
        from .....agent import Agent

        steering_agent = Agent(system_prompt=self.system_prompt, model=self.model or agent.model, callback_handler=None)

        # Get LLM decision
        llm_result: _LLMSteering = cast(
            _LLMSteering, steering_agent(prompt, structured_output_model=_LLMSteering).structured_output
        )

        # Convert LLM decision to steering action
        match llm_result.decision:
            case "proceed":
                return Proceed(reason=llm_result.reason)
            case "guide":
                return Guide(reason=llm_result.reason)
            case "interrupt":
                return Interrupt(reason=llm_result.reason)
            case _:
                logger.warning("decision=<%s> | unknown llm decision, defaulting to proceed", llm_result.decision)  # type: ignore[unreachable]
                return Proceed(reason="Unknown LLM decision, defaulting to proceed")
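The evaluation flow above can be sketched end to end with a stubbed judge (everything below is a simplified, self-contained stand-in for LLMSteeringHandler.steer(); the prompt shape and decision rules are illustrative, not the library's exact behavior):

```python
import json

def build_prompt(context: dict, tool_name: str, tool_input: dict) -> str:
    """Assemble an evaluation prompt from steering context and the tool call."""
    context_str = json.dumps(context, indent=2) if context else "No context available"
    return f"Context:\n{context_str}\nTool: {tool_name}\nArguments: {json.dumps(tool_input)}"

def judge(prompt: str) -> dict:
    # Stand-in for the isolated steering agent's structured output.
    if "rm -rf" in prompt:
        return {"decision": "interrupt", "reason": "Destructive command"}
    return {"decision": "proceed", "reason": "No concerns"}

def steer(context: dict, tool_name: str, tool_input: dict) -> str:
    result = judge(build_prompt(context, tool_name, tool_input))
    # Unknown decisions fall back to "proceed", as the real handler does.
    if result["decision"] not in ("proceed", "guide", "interrupt"):
        return "proceed"
    return result["decision"]
```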
__init__(system_prompt, prompt_mapper=None, model=None, context_providers=None)

Initialize the LLMSteeringHandler.

Parameters:

Name Type Description Default
system_prompt str

System prompt defining steering guidance rules

required
prompt_mapper LLMPromptMapper | None

Custom prompt mapper for evaluation prompts

None
model Model | None

Optional model override for steering evaluation

None
context_providers list[SteeringContextProvider] | None

List of context providers for populating steering context

None
Source code in strands/experimental/steering/handlers/llm/llm_handler.py
def __init__(
    self,
    system_prompt: str,
    prompt_mapper: LLMPromptMapper | None = None,
    model: Model | None = None,
    context_providers: list[SteeringContextProvider] | None = None,
):
    """Initialize the LLMSteeringHandler.

    Args:
        system_prompt: System prompt defining steering guidance rules
        prompt_mapper: Custom prompt mapper for evaluation prompts
        model: Optional model override for steering evaluation
        context_providers: List of context providers for populating steering context
    """
    providers = context_providers or [LedgerProvider()]
    super().__init__(context_providers=providers)
    self.system_prompt = system_prompt
    self.prompt_mapper = prompt_mapper or DefaultPromptMapper()
    self.model = model
steer(agent, tool_use, **kwargs) async

Provide contextual guidance for tool usage.

Parameters:

Name Type Description Default
agent 'Agent'

The agent instance

required
tool_use ToolUse

The tool use object with name and arguments

required
**kwargs Any

Additional keyword arguments for steering evaluation

{}

Returns:

Type Description
SteeringAction

SteeringAction indicating how to guide the agent's next action

Source code in strands/experimental/steering/handlers/llm/llm_handler.py
async def steer(self, agent: "Agent", tool_use: ToolUse, **kwargs: Any) -> SteeringAction:
    """Provide contextual guidance for tool usage.

    Args:
        agent: The agent instance
        tool_use: The tool use object with name and arguments
        **kwargs: Additional keyword arguments for steering evaluation

    Returns:
        SteeringAction indicating how to guide the agent's next action
    """
    # Generate steering prompt
    prompt = self.prompt_mapper.create_steering_prompt(self.steering_context, tool_use=tool_use)

    # Create isolated agent for steering evaluation (no shared conversation state)
    from .....agent import Agent

    steering_agent = Agent(system_prompt=self.system_prompt, model=self.model or agent.model, callback_handler=None)

    # Get LLM decision
    llm_result: _LLMSteering = cast(
        _LLMSteering, steering_agent(prompt, structured_output_model=_LLMSteering).structured_output
    )

    # Convert LLM decision to steering action
    match llm_result.decision:
        case "proceed":
            return Proceed(reason=llm_result.reason)
        case "guide":
            return Guide(reason=llm_result.reason)
        case "interrupt":
            return Interrupt(reason=llm_result.reason)
        case _:
            logger.warning("decision=<%s> | unknown llm decision, defaulting to proceed", llm_result.decision)  # type: ignore[unreachable]
            return Proceed(reason="Unknown LLM decision, defaulting to proceed")

strands.experimental.steering.handlers.llm.mappers

LLM steering prompt mappers for generating evaluation prompts.

DefaultPromptMapper

Bases: LLMPromptMapper

Default prompt mapper for steering evaluation.

Source code in strands/experimental/steering/handlers/llm/mappers.py
class DefaultPromptMapper(LLMPromptMapper):
    """Default prompt mapper for steering evaluation."""

    def create_steering_prompt(
        self, steering_context: SteeringContext, tool_use: ToolUse | None = None, **kwargs: Any
    ) -> str:
        """Create default steering prompt using Agent SOP structure.

        Uses Agent SOP format for structured, constraint-based prompts.
        See: https://github.com/strands-agents/agent-sop
        """
        context_str = (
            json.dumps(steering_context.data.get(), indent=2) if steering_context.data.get() else "No context available"
        )

        if tool_use:
            event_description = (
                f"Tool: {tool_use['name']}\nArguments: {json.dumps(tool_use.get('input', {}), indent=2)}"
            )
            action_type = "tool call"
        else:
            event_description = "General evaluation"
            action_type = "action"

        return _STEERING_PROMPT_TEMPLATE.format(
            action_type=action_type,
            action_type_title=action_type.title(),
            context_str=context_str,
            event_description=event_description,
        )
create_steering_prompt(steering_context, tool_use=None, **kwargs)

Create default steering prompt using Agent SOP structure.

Uses Agent SOP format for structured, constraint-based prompts. See: https://github.com/strands-agents/agent-sop

Source code in strands/experimental/steering/handlers/llm/mappers.py
def create_steering_prompt(
    self, steering_context: SteeringContext, tool_use: ToolUse | None = None, **kwargs: Any
) -> str:
    """Create default steering prompt using Agent SOP structure.

    Uses Agent SOP format for structured, constraint-based prompts.
    See: https://github.com/strands-agents/agent-sop
    """
    context_str = (
        json.dumps(steering_context.data.get(), indent=2) if steering_context.data.get() else "No context available"
    )

    if tool_use:
        event_description = (
            f"Tool: {tool_use['name']}\nArguments: {json.dumps(tool_use.get('input', {}), indent=2)}"
        )
        action_type = "tool call"
    else:
        event_description = "General evaluation"
        action_type = "action"

    return _STEERING_PROMPT_TEMPLATE.format(
        action_type=action_type,
        action_type_title=action_type.title(),
        context_str=context_str,
        event_description=event_description,
    )

LLMPromptMapper

Bases: Protocol

Protocol for mapping context and events to LLM evaluation prompts.

Source code in strands/experimental/steering/handlers/llm/mappers.py
class LLMPromptMapper(Protocol):
    """Protocol for mapping context and events to LLM evaluation prompts."""

    def create_steering_prompt(
        self, steering_context: SteeringContext, tool_use: ToolUse | None = None, **kwargs: Any
    ) -> str:
        """Create steering prompt for LLM evaluation.

        Args:
            steering_context: Steering context with populated data
            tool_use: Tool use object for tool call events (None for other events)
            **kwargs: Additional event data for other steering events

        Returns:
            Formatted prompt string for LLM evaluation
        """
        ...
create_steering_prompt(steering_context, tool_use=None, **kwargs)

Create steering prompt for LLM evaluation.

Parameters:

Name Type Description Default
steering_context SteeringContext

Steering context with populated data

required
tool_use ToolUse | None

Tool use object for tool call events (None for other events)

None
**kwargs Any

Additional event data for other steering events

{}

Returns:

Type Description
str

Formatted prompt string for LLM evaluation

Source code in strands/experimental/steering/handlers/llm/mappers.py
def create_steering_prompt(
    self, steering_context: SteeringContext, tool_use: ToolUse | None = None, **kwargs: Any
) -> str:
    """Create steering prompt for LLM evaluation.

    Args:
        steering_context: Steering context with populated data
        tool_use: Tool use object for tool call events (None for other events)
        **kwargs: Additional event data for other steering events

    Returns:
        Formatted prompt string for LLM evaluation
    """
    ...

strands.experimental.tools.tool_provider

Tool provider interface.

ToolProvider

Bases: ABC

Interface for providing tools with lifecycle management.

Provides a way to load a collection of tools and clean them up when done, with lifecycle managed by the agent.

Source code in strands/experimental/tools/tool_provider.py
class ToolProvider(ABC):
    """Interface for providing tools with lifecycle management.

    Provides a way to load a collection of tools and clean them up
    when done, with lifecycle managed by the agent.
    """

    @abstractmethod
    async def load_tools(self, **kwargs: Any) -> Sequence["AgentTool"]:
        """Load and return the tools in this provider.

        Args:
            **kwargs: Additional arguments for future compatibility.

        Returns:
            List of tools that are ready to use.
        """
        pass

    @abstractmethod
    def add_consumer(self, consumer_id: Any, **kwargs: Any) -> None:
        """Add a consumer to this tool provider.

        Args:
            consumer_id: Unique identifier for the consumer.
            **kwargs: Additional arguments for future compatibility.
        """
        pass

    @abstractmethod
    def remove_consumer(self, consumer_id: Any, **kwargs: Any) -> None:
        """Remove a consumer from this tool provider.

        This method must be idempotent - calling it multiple times with the same ID
        should have no additional effect after the first call.

        Provider may clean up resources when no consumers remain.

        Args:
            consumer_id: Unique identifier for the consumer.
            **kwargs: Additional arguments for future compatibility.
        """
        pass
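A minimal in-memory implementation sketch of this interface (the string tools and the closed flag are illustrative; a real provider would return AgentTool instances and release actual resources such as network connections):

```python
import asyncio
from typing import Any, Sequence

class StaticToolProvider:
    """Sketch of a ToolProvider that reference-counts consumers and
    cleans up once the last consumer is removed."""

    def __init__(self, tools: Sequence[Any]):
        self._tools = list(tools)
        self._consumers: set = set()
        self.closed = False  # stand-in for real resource cleanup

    async def load_tools(self, **kwargs: Any) -> Sequence[Any]:
        return list(self._tools)

    def add_consumer(self, consumer_id: Any, **kwargs: Any) -> None:
        self._consumers.add(consumer_id)

    def remove_consumer(self, consumer_id: Any, **kwargs: Any) -> None:
        # discard() keeps this idempotent: removing an unknown or
        # already-removed consumer has no effect.
        self._consumers.discard(consumer_id)
        if not self._consumers:
            self.closed = True
```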
add_consumer(consumer_id, **kwargs) abstractmethod

Add a consumer to this tool provider.

Parameters:

Name Type Description Default
consumer_id Any

Unique identifier for the consumer.

required
**kwargs Any

Additional arguments for future compatibility.

{}
Source code in strands/experimental/tools/tool_provider.py
@abstractmethod
def add_consumer(self, consumer_id: Any, **kwargs: Any) -> None:
    """Add a consumer to this tool provider.

    Args:
        consumer_id: Unique identifier for the consumer.
        **kwargs: Additional arguments for future compatibility.
    """
    pass
load_tools(**kwargs) abstractmethod async

Load and return the tools in this provider.

Parameters:

Name Type Description Default
**kwargs Any

Additional arguments for future compatibility.

{}

Returns:

Type Description
Sequence[AgentTool]

List of tools that are ready to use.

Source code in strands/experimental/tools/tool_provider.py
@abstractmethod
async def load_tools(self, **kwargs: Any) -> Sequence["AgentTool"]:
    """Load and return the tools in this provider.

    Args:
        **kwargs: Additional arguments for future compatibility.

    Returns:
        List of tools that are ready to use.
    """
    pass
remove_consumer(consumer_id, **kwargs) abstractmethod

Remove a consumer from this tool provider.

This method must be idempotent - calling it multiple times with the same ID should have no additional effect after the first call.

Provider may clean up resources when no consumers remain.

Parameters:

Name Type Description Default
consumer_id Any

Unique identifier for the consumer.

required
**kwargs Any

Additional arguments for future compatibility.

{}
Source code in strands/experimental/tools/tool_provider.py
@abstractmethod
def remove_consumer(self, consumer_id: Any, **kwargs: Any) -> None:
    """Remove a consumer from this tool provider.

    This method must be idempotent - calling it multiple times with the same ID
    should have no additional effect after the first call.

    Provider may clean up resources when no consumers remain.

    Args:
        consumer_id: Unique identifier for the consumer.
        **kwargs: Additional arguments for future compatibility.
    """
    pass