
strands.agent.agent

Agent Interface.

This module implements the core Agent class that serves as the primary entry point for interacting with foundation models and tools in the SDK.

The Agent interface supports two complementary interaction patterns:

  1. Natural language for conversation: agent("Analyze this data")
  2. Method-style for direct tool access: agent.tool.tool_name(param1="value")
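The two patterns can be pictured with a toy sketch of the dual interface — this is an illustration of the pattern only, not the SDK's actual classes (the stubbed `ToyAgent`, `ToolCaller`, and `add` tool are hypothetical stand-ins):

```python
# Toy illustration of the dual interaction pattern -- NOT the SDK itself.
# A real Agent runs model inference; here it is stubbed out.

class ToolCaller:
    """Exposes registered tools as attribute-style methods (agent.tool.name(...))."""

    def __init__(self, tools):
        self._tools = tools

    def __getattr__(self, name):
        try:
            return self._tools[name]
        except KeyError:
            raise AttributeError(f"Unknown tool: {name}") from None


class ToyAgent:
    def __init__(self, tools):
        self.tool = ToolCaller(tools)  # method-style access

    def __call__(self, prompt):
        # Stand-in for model inference.
        return f"processed: {prompt}"


agent = ToyAgent(tools={"add": lambda a, b: a + b})
print(agent("Analyze this data"))  # natural-language pattern
print(agent.tool.add(a=2, b=3))    # method-style direct tool access
```

The attribute proxy is what makes `agent.tool.tool_name(...)` read like an ordinary method call while still dispatching through the agent's tool registry.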

AgentInput = str | list[ContentBlock] | list[InterruptResponseContent] | Messages | None module-attribute

AgentState = JSONSerializableDict module-attribute

AttributeValue = str | bool | float | int | list[str] | list[bool] | list[float] | list[int] | Sequence[str] | Sequence[bool] | Sequence[int] | Sequence[float] module-attribute

INITIAL_DELAY = 4 module-attribute

MAX_ATTEMPTS = 6 module-attribute

MAX_DELAY = 240 module-attribute
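These three constants parameterize the default `ModelRetryStrategy`. Assuming the conventional doubling backoff schedule (the exact schedule is defined by `ModelRetryStrategy`, not shown here), the per-retry delays would work out as:

```python
INITIAL_DELAY = 4   # seconds before the first retry
MAX_DELAY = 240     # ceiling on any single delay, in seconds
MAX_ATTEMPTS = 6    # total attempts, including the initial call

# Hypothetical doubling schedule; the real logic lives in ModelRetryStrategy.
delays = [min(INITIAL_DELAY * 2**i, MAX_DELAY) for i in range(MAX_ATTEMPTS - 1)]
print(delays)  # [4, 8, 16, 32, 64]
```

With these defaults the cap of 240 seconds is never reached in six attempts; it only matters for custom strategies with more attempts or a larger initial delay.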

Messages = list[Message] module-attribute

A list of messages representing a conversation.

T = TypeVar('T', bound=BaseModel) module-attribute

_DEFAULT_AGENT_ID = 'default' module-attribute

_DEFAULT_AGENT_NAME = 'Strands Agents' module-attribute

_DEFAULT_CALLBACK_HANDLER = _DefaultCallbackHandlerSentinel() module-attribute

_DEFAULT_RETRY_STRATEGY = _DefaultRetryStrategySentinel() module-attribute

logger = logging.getLogger(__name__) module-attribute

AfterInvocationEvent dataclass

Bases: HookEvent

Event triggered at the end of an agent request.

This event is fired after the agent has completed processing a request, regardless of whether it completed successfully or encountered an error. Hook providers can use this event for cleanup, logging, or state persistence.

Note: This event uses reverse callback ordering, meaning callbacks registered later will be invoked first during cleanup.

This event is triggered at the end of the following API calls:
  • Agent.__call__
  • Agent.stream_async
  • Agent.structured_output

Attributes:

Name Type Description
invocation_state dict[str, Any]

State and configuration passed through the agent invocation. This can include shared context for multi-agent coordination, request tracking, and dynamic configuration.

result AgentResult | None

The result of the agent invocation, if available. This will be None when invoked from structured_output methods, as those return typed output directly rather than AgentResult.

Source code in strands/hooks/events.py
@dataclass
class AfterInvocationEvent(HookEvent):
    """Event triggered at the end of an agent request.

    This event is fired after the agent has completed processing a request,
    regardless of whether it completed successfully or encountered an error.
    Hook providers can use this event for cleanup, logging, or state persistence.

    Note: This event uses reverse callback ordering, meaning callbacks registered
    later will be invoked first during cleanup.

    This event is triggered at the end of the following api calls:
      - Agent.__call__
      - Agent.stream_async
      - Agent.structured_output

    Attributes:
        invocation_state: State and configuration passed through the agent invocation.
            This can include shared context for multi-agent coordination, request tracking,
            and dynamic configuration.
        result: The result of the agent invocation, if available.
            This will be None when invoked from structured_output methods, as those return typed output directly rather
            than AgentResult.
    """

    invocation_state: dict[str, Any] = field(default_factory=dict)
    result: "AgentResult | None" = None

    @property
    def should_reverse_callbacks(self) -> bool:
        """True to invoke callbacks in reverse order."""
        return True

should_reverse_callbacks property

True to invoke callbacks in reverse order.
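The effect of reverse ordering can be sketched with a minimal registry — this is a toy demonstration of the ordering semantics, not the actual `HookRegistry` API:

```python
# Minimal sketch of reverse callback ordering -- not the actual HookRegistry API.
callbacks = []
order = []

def register(cb):
    callbacks.append(cb)

register(lambda: order.append("cleanup-A"))
register(lambda: order.append("cleanup-B"))

# For events where should_reverse_callbacks is True, later registrations run
# first, mirroring how nested setup/teardown pairs unwind.
for cb in reversed(callbacks):
    cb()

print(order)  # ['cleanup-B', 'cleanup-A']
```

This ordering lets a hook registered last (typically the most deeply nested setup) clean up before hooks it may depend on.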

Agent

Bases: AgentBase

Core Agent implementation.

An agent orchestrates the following workflow:

  1. Receives user input
  2. Processes the input using a language model
  3. Decides whether to use tools to gather information or perform actions
  4. Executes those tools and receives results
  5. Continues reasoning with the new information
  6. Produces a final response
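The six-step workflow above can be sketched as a toy loop. The model stub, `TOOLS` table, and `run_agent` helper are hypothetical stand-ins; the real event loop is implemented elsewhere in the SDK:

```python
# Toy version of the agent workflow -- hypothetical stubs, not the SDK event loop.

def fake_model(messages):
    """Stand-in for model inference: requests a tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_use": {"name": "add", "input": {"a": 2, "b": 3}}}
    return {"text": f"The answer is {messages[-1]['content']}"}

TOOLS = {"add": lambda a, b: a + b}

def run_agent(user_input):
    messages = [{"role": "user", "content": user_input}]          # 1. receive input
    while True:
        reply = fake_model(messages)                              # 2. run inference
        if "tool_use" in reply:                                   # 3. model chose a tool
            use = reply["tool_use"]
            result = TOOLS[use["name"]](**use["input"])           # 4. execute the tool
            messages.append({"role": "tool", "content": result})  # 5. feed result back
            continue                                              #    and keep reasoning
        return reply["text"]                                      # 6. final response

print(run_agent("what is 2 + 3?"))  # The answer is 5
```

The loop terminates when the model produces a plain text reply instead of another tool request.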
Source code in strands/agent/agent.py
class Agent(AgentBase):
    """Core Agent implementation.

    An agent orchestrates the following workflow:

    1. Receives user input
    2. Processes the input using a language model
    3. Decides whether to use tools to gather information or perform actions
    4. Executes those tools and receives results
    5. Continues reasoning with the new information
    6. Produces a final response
    """

    # For backwards compatibility
    ToolCaller = _ToolCaller

    def __init__(
        self,
        model: Model | str | None = None,
        messages: Messages | None = None,
        tools: list[Union[str, dict[str, str], "ToolProvider", Any]] | None = None,
        system_prompt: str | list[SystemContentBlock] | None = None,
        structured_output_model: type[BaseModel] | None = None,
        callback_handler: Callable[..., Any] | _DefaultCallbackHandlerSentinel | None = _DEFAULT_CALLBACK_HANDLER,
        conversation_manager: ConversationManager | None = None,
        record_direct_tool_call: bool = True,
        load_tools_from_directory: bool = False,
        trace_attributes: Mapping[str, AttributeValue] | None = None,
        *,
        agent_id: str | None = None,
        name: str | None = None,
        description: str | None = None,
        state: AgentState | dict | None = None,
        hooks: list[HookProvider] | None = None,
        session_manager: SessionManager | None = None,
        structured_output_prompt: str | None = None,
        tool_executor: ToolExecutor | None = None,
        retry_strategy: ModelRetryStrategy | _DefaultRetryStrategySentinel | None = _DEFAULT_RETRY_STRATEGY,
    ):
        """Initialize the Agent with the specified configuration.

        Args:
            model: Provider for running inference or a string representing the model-id for Bedrock to use.
                Defaults to strands.models.BedrockModel if None.
            messages: List of initial messages to pre-load into the conversation.
                Defaults to an empty list if None.
            tools: List of tools to make available to the agent.
                Can be specified as:

                - String tool names (e.g., "retrieve")
                - File paths (e.g., "/path/to/tool.py")
                - Imported Python modules (e.g., from strands_tools import current_time)
                - Dictionaries with name/path keys (e.g., {"name": "tool_name", "path": "/path/to/tool.py"})
                - ToolProvider instances for managed tool collections
                - Functions decorated with `@strands.tool` decorator.

                If provided, only these tools will be available. If None, all tools will be available.
            system_prompt: System prompt to guide model behavior.
                Can be a string or a list of SystemContentBlock objects for advanced features like caching.
                If None, the model will behave according to its default settings.
            structured_output_model: Pydantic model type(s) for structured output.
                When specified, all agent calls will attempt to return structured output of this type.
                This can be overridden on the agent invocation.
                Defaults to None (no structured output).
            callback_handler: Callback for processing events as they happen during agent execution.
                If not provided (using the default), a new PrintingCallbackHandler instance is created.
                If explicitly set to None, null_callback_handler is used.
            conversation_manager: Manager for conversation history and context window.
                Defaults to strands.agent.conversation_manager.SlidingWindowConversationManager if None.
            record_direct_tool_call: Whether to record direct tool calls in message history.
                Defaults to True.
            load_tools_from_directory: Whether to load and automatically reload tools in the `./tools/` directory.
                Defaults to False.
            trace_attributes: Custom trace attributes to apply to the agent's trace span.
            agent_id: Optional ID for the agent, useful for session management and multi-agent scenarios.
                Defaults to "default".
            name: Name of the agent.
                Defaults to "Strands Agents".
            description: Description of what the agent does.
                Defaults to None.
            state: Stateful information for the agent. Can be either an AgentState object or a JSON-serializable dict.
                Defaults to an empty AgentState object.
            hooks: Hook providers to add to the agent hook registry.
                Defaults to None.
            session_manager: Manager for handling agent sessions including conversation history and state.
                If provided, enables session-based persistence and state management.
            structured_output_prompt: Custom prompt message used when forcing structured output.
                When using structured output, if the model doesn't automatically use the output tool,
                the agent sends a follow-up message to request structured formatting. This parameter
                allows customizing that message.
                Defaults to "You must format the previous response as structured output."
            tool_executor: Definition of tool execution strategy (e.g., sequential, concurrent, etc.).
            retry_strategy: Strategy for retrying model calls on throttling or other transient errors.
                Defaults to ModelRetryStrategy with max_attempts=6, initial_delay=4s, max_delay=240s.
                Implement a custom HookProvider for custom retry logic, or pass None to disable retries.

        Raises:
            ValueError: If agent id contains path separators.
        """
        self.model = BedrockModel() if not model else BedrockModel(model_id=model) if isinstance(model, str) else model
        self.messages = messages if messages is not None else []
        # initializing self._system_prompt for backwards compatibility
        self._system_prompt, self._system_prompt_content = self._initialize_system_prompt(system_prompt)
        self._default_structured_output_model = structured_output_model
        self._structured_output_prompt = structured_output_prompt
        self.agent_id = _identifier.validate(agent_id or _DEFAULT_AGENT_ID, _identifier.Identifier.AGENT)
        self.name = name or _DEFAULT_AGENT_NAME
        self.description = description

        # If not provided, create a new PrintingCallbackHandler instance
        # If explicitly set to None, use null_callback_handler
        # Otherwise use the passed callback_handler
        self.callback_handler: Callable[..., Any] | PrintingCallbackHandler
        if isinstance(callback_handler, _DefaultCallbackHandlerSentinel):
            self.callback_handler = PrintingCallbackHandler()
        elif callback_handler is None:
            self.callback_handler = null_callback_handler
        else:
            self.callback_handler = callback_handler

        self.conversation_manager = conversation_manager if conversation_manager else SlidingWindowConversationManager()

        # Process trace attributes to ensure they're of compatible types
        self.trace_attributes: dict[str, AttributeValue] = {}
        if trace_attributes:
            for k, v in trace_attributes.items():
                if isinstance(v, (str, int, float, bool)) or (
                    isinstance(v, list) and all(isinstance(x, (str, int, float, bool)) for x in v)
                ):
                    self.trace_attributes[k] = v

        self.record_direct_tool_call = record_direct_tool_call
        self.load_tools_from_directory = load_tools_from_directory

        self.tool_registry = ToolRegistry()

        # Process tool list if provided
        if tools is not None:
            self.tool_registry.process_tools(tools)

        # Initialize tools and configuration
        self.tool_registry.initialize_tools(self.load_tools_from_directory)
        if load_tools_from_directory:
            self.tool_watcher = ToolWatcher(tool_registry=self.tool_registry)

        self.event_loop_metrics = EventLoopMetrics()

        # Initialize tracer instance (no-op if not configured)
        self.tracer = get_tracer()
        self.trace_span: trace_api.Span | None = None

        # Initialize agent state management
        if state is not None:
            if isinstance(state, dict):
                self.state = AgentState(state)
            elif isinstance(state, AgentState):
                self.state = state
            else:
                raise ValueError("state must be an AgentState object or a dict")
        else:
            self.state = AgentState()

        self.tool_caller = _ToolCaller(self)

        self.hooks = HookRegistry()

        self._interrupt_state = _InterruptState()

        # Initialize lock for guarding concurrent invocations
        # Using threading.Lock instead of asyncio.Lock because run_async() creates
        # separate event loops in different threads, so asyncio.Lock wouldn't work
        self._invocation_lock = threading.Lock()

        # In the future, we'll have a RetryStrategy base class but until
        # that API is determined we only allow ModelRetryStrategy
        if (
            retry_strategy is not None
            and not isinstance(retry_strategy, _DefaultRetryStrategySentinel)
            and type(retry_strategy) is not ModelRetryStrategy
        ):
            raise ValueError("retry_strategy must be an instance of ModelRetryStrategy")

        # If not provided (using the default), create a new ModelRetryStrategy instance
        # If explicitly set to None, disable retries (max_attempts=1 means no retries)
        # Otherwise use the passed retry_strategy
        if isinstance(retry_strategy, _DefaultRetryStrategySentinel):
            self._retry_strategy = ModelRetryStrategy(
                max_attempts=MAX_ATTEMPTS, max_delay=MAX_DELAY, initial_delay=INITIAL_DELAY
            )
        elif retry_strategy is None:
            # If no retry strategy is passed in, then we turn retries off
            self._retry_strategy = ModelRetryStrategy(max_attempts=1)
        else:
            self._retry_strategy = retry_strategy

        # Initialize session management functionality
        self._session_manager = session_manager
        if self._session_manager:
            self.hooks.add_hook(self._session_manager)

        # Allow conversation_managers to subscribe to hooks
        self.hooks.add_hook(self.conversation_manager)

        # Register retry strategy as a hook
        self.hooks.add_hook(self._retry_strategy)

        self.tool_executor = tool_executor or ConcurrentToolExecutor()

        if hooks:
            for hook in hooks:
                self.hooks.add_hook(hook)
        self.hooks.invoke_callbacks(AgentInitializedEvent(agent=self))

    @property
    def system_prompt(self) -> str | None:
        """Get the system prompt as a string for backwards compatibility.

        Returns the system prompt as a concatenated string when it contains text content,
        or None if no text content is present. This maintains backwards compatibility
        with existing code that expects system_prompt to be a string.

        Returns:
            The system prompt as a string, or None if no text content exists.
        """
        return self._system_prompt

    @system_prompt.setter
    def system_prompt(self, value: str | list[SystemContentBlock] | None) -> None:
        """Set the system prompt and update internal content representation.

        Accepts either a string or list of SystemContentBlock objects.
        When set, both the backwards-compatible string representation and the internal
        content block representation are updated to maintain consistency.

        Args:
            value: System prompt as string, list of SystemContentBlock objects, or None.
                  - str: Simple text prompt (most common use case)
                  - list[SystemContentBlock]: Content blocks with features like caching
                  - None: Clear the system prompt
        """
        self._system_prompt, self._system_prompt_content = self._initialize_system_prompt(value)

    @property
    def tool(self) -> _ToolCaller:
        """Call tool as a function.

        Returns:
            Tool caller through which user can invoke tool as a function.

        Example:
            ```
            agent = Agent(tools=[calculator])
            agent.tool.calculator(...)
            ```
        """
        return self.tool_caller

    @property
    def tool_names(self) -> list[str]:
        """Get a list of all registered tool names.

        Returns:
            Names of all tools available to this agent.
        """
        all_tools = self.tool_registry.get_all_tools_config()
        return list(all_tools.keys())

    def __call__(
        self,
        prompt: AgentInput = None,
        *,
        invocation_state: dict[str, Any] | None = None,
        structured_output_model: type[BaseModel] | None = None,
        structured_output_prompt: str | None = None,
        **kwargs: Any,
    ) -> AgentResult:
        """Process a natural language prompt through the agent's event loop.

        This method implements the conversational interface with multiple input patterns:
        - String input: `agent("hello!")`
        - ContentBlock list: `agent([{"text": "hello"}, {"image": {...}}])`
        - Message list: `agent([{"role": "user", "content": [{"text": "hello"}]}])`
        - No input: `agent()` - uses existing conversation history

        Args:
            prompt: User input in various formats:
                - str: Simple text input
                - list[ContentBlock]: Multi-modal content blocks
                - list[Message]: Complete messages with roles
                - None: Use existing conversation history
            invocation_state: Additional parameters to pass through the event loop.
            structured_output_model: Pydantic model type(s) for structured output (overrides agent default).
            structured_output_prompt: Custom prompt for forcing structured output (overrides agent default).
            **kwargs: Additional parameters to pass through the event loop. [Deprecated]

        Returns:
            Result object containing:

                - stop_reason: Why the event loop stopped (e.g., "end_turn", "max_tokens")
                - message: The final message from the model
                - metrics: Performance metrics from the event loop
                - state: The final state of the event loop
                - structured_output: Parsed structured output when structured_output_model was specified
        """
        return run_async(
            lambda: self.invoke_async(
                prompt,
                invocation_state=invocation_state,
                structured_output_model=structured_output_model,
                structured_output_prompt=structured_output_prompt,
                **kwargs,
            )
        )

    async def invoke_async(
        self,
        prompt: AgentInput = None,
        *,
        invocation_state: dict[str, Any] | None = None,
        structured_output_model: type[BaseModel] | None = None,
        structured_output_prompt: str | None = None,
        **kwargs: Any,
    ) -> AgentResult:
        """Process a natural language prompt through the agent's event loop.

        This method implements the conversational interface with multiple input patterns:
        - String input: Simple text input
        - ContentBlock list: Multi-modal content blocks
        - Message list: Complete messages with roles
        - No input: Use existing conversation history

        Args:
            prompt: User input in various formats:
                - str: Simple text input
                - list[ContentBlock]: Multi-modal content blocks
                - list[Message]: Complete messages with roles
                - None: Use existing conversation history
            invocation_state: Additional parameters to pass through the event loop.
            structured_output_model: Pydantic model type(s) for structured output (overrides agent default).
            structured_output_prompt: Custom prompt for forcing structured output (overrides agent default).
            **kwargs: Additional parameters to pass through the event loop. [Deprecated]

        Returns:
            Result object containing:

                - stop_reason: Why the event loop stopped (e.g., "end_turn", "max_tokens")
                - message: The final message from the model
                - metrics: Performance metrics from the event loop
                - state: The final state of the event loop
        """
        events = self.stream_async(
            prompt,
            invocation_state=invocation_state,
            structured_output_model=structured_output_model,
            structured_output_prompt=structured_output_prompt,
            **kwargs,
        )
        async for event in events:
            _ = event

        return cast(AgentResult, event["result"])

    def structured_output(self, output_model: type[T], prompt: AgentInput = None) -> T:
        """This method allows you to get structured output from the agent.

        If you pass in a prompt, it will be used temporarily without adding it to the conversation history.
        If you don't pass in a prompt, it will use only the existing conversation history to respond.

        For smaller models, you may want to use the optional prompt to add additional instructions to explicitly
        instruct the model to output the structured data.

        Args:
            output_model: The output model (a JSON schema written as a Pydantic BaseModel)
                that the agent will use when responding.
            prompt: The prompt to use for the agent in various formats:
                - str: Simple text input
                - list[ContentBlock]: Multi-modal content blocks
                - list[Message]: Complete messages with roles
                - None: Use existing conversation history

        Raises:
            ValueError: If no conversation history or prompt is provided.
        """
        warnings.warn(
            "Agent.structured_output method is deprecated."
            " You should pass in `structured_output_model` directly into the agent invocation."
            " see: https://strandsagents.com/latest/documentation/docs/user-guide/concepts/agents/structured-output/",
            category=DeprecationWarning,
            stacklevel=2,
        )

        return run_async(lambda: self.structured_output_async(output_model, prompt))

    async def structured_output_async(self, output_model: type[T], prompt: AgentInput = None) -> T:
        """This method allows you to get structured output from the agent.

        If you pass in a prompt, it will be used temporarily without adding it to the conversation history.
        If you don't pass in a prompt, it will use only the existing conversation history to respond.

        For smaller models, you may want to use the optional prompt to add additional instructions to explicitly
        instruct the model to output the structured data.

        Args:
            output_model: The output model (a JSON schema written as a Pydantic BaseModel)
                that the agent will use when responding.
            prompt: The prompt to use for the agent (will not be added to conversation history).

        Raises:
            ValueError: If no conversation history or prompt is provided.
        """
        if self._interrupt_state.activated:
            raise RuntimeError("cannot call structured output during interrupt")

        warnings.warn(
            "Agent.structured_output_async method is deprecated."
            " You should pass in `structured_output_model` directly into the agent invocation."
            " see: https://strandsagents.com/latest/documentation/docs/user-guide/concepts/agents/structured-output/",
            category=DeprecationWarning,
            stacklevel=2,
        )
        await self.hooks.invoke_callbacks_async(BeforeInvocationEvent(agent=self, invocation_state={}))
        with self.tracer.tracer.start_as_current_span(
            "execute_structured_output", kind=trace_api.SpanKind.CLIENT
        ) as structured_output_span:
            try:
                if not self.messages and not prompt:
                    raise ValueError("No conversation history or prompt provided")

                temp_messages: Messages = self.messages + await self._convert_prompt_to_messages(prompt)

                structured_output_span.set_attributes(
                    {
                        "gen_ai.system": "strands-agents",
                        "gen_ai.agent.name": self.name,
                        "gen_ai.agent.id": self.agent_id,
                        "gen_ai.operation.name": "execute_structured_output",
                    }
                )
                if self.system_prompt:
                    structured_output_span.add_event(
                        "gen_ai.system.message",
                        attributes={"role": "system", "content": serialize([{"text": self.system_prompt}])},
                    )
                for message in temp_messages:
                    structured_output_span.add_event(
                        f"gen_ai.{message['role']}.message",
                        attributes={"role": message["role"], "content": serialize(message["content"])},
                    )
                events = self.model.structured_output(output_model, temp_messages, system_prompt=self.system_prompt)
                async for event in events:
                    if isinstance(event, TypedEvent):
                        event.prepare(invocation_state={})
                        if event.is_callback_event:
                            self.callback_handler(**event.as_dict())

                structured_output_span.add_event(
                    "gen_ai.choice", attributes={"message": serialize(event["output"].model_dump())}
                )
                return event["output"]

            finally:
                await self.hooks.invoke_callbacks_async(AfterInvocationEvent(agent=self, invocation_state={}))

    def cleanup(self) -> None:
        """Clean up resources used by the agent.

        This method cleans up all tool providers that require explicit cleanup,
        such as MCP clients. It should be called when the agent is no longer needed
        to ensure proper resource cleanup.

        Note: This method uses a "belt and braces" approach with automatic cleanup
        through finalizers as a fallback, but explicit cleanup is recommended.
        """
        self.tool_registry.cleanup()

    def __del__(self) -> None:
        """Clean up resources when agent is garbage collected."""
        # __del__ is called even when an exception is thrown in the constructor,
        # so there is no guarantee tool_registry was set.
        if hasattr(self, "tool_registry"):
            self.tool_registry.cleanup()

    async def stream_async(
        self,
        prompt: AgentInput = None,
        *,
        invocation_state: dict[str, Any] | None = None,
        structured_output_model: type[BaseModel] | None = None,
        structured_output_prompt: str | None = None,
        **kwargs: Any,
    ) -> AsyncIterator[Any]:
        """Process a natural language prompt and yield events as an async iterator.

        This method provides an asynchronous interface for streaming agent events with multiple input patterns:
        - String input: Simple text input
        - ContentBlock list: Multi-modal content blocks
        - Message list: Complete messages with roles
        - No input: Use existing conversation history

        Args:
            prompt: User input in various formats:
                - str: Simple text input
                - list[ContentBlock]: Multi-modal content blocks
                - list[Message]: Complete messages with roles
                - None: Use existing conversation history
            invocation_state: Additional parameters to pass through the event loop.
            structured_output_model: Pydantic model type(s) for structured output (overrides agent default).
            structured_output_prompt: Custom prompt for forcing structured output (overrides agent default).
            **kwargs: Additional parameters to pass to the event loop. [Deprecated]

        Yields:
            An async iterator that yields events. Each event is a dictionary containing
                information about the current state of processing, such as:

                - data: Text content being generated
                - complete: Whether this is the final chunk
                - current_tool_use: Information about tools being executed
                - And other event data provided by the callback handler

        Raises:
            ConcurrencyException: If another invocation is already in progress on this agent instance.
            Exception: Any exceptions from the agent invocation will be propagated to the caller.

        Example:
            ```python
            async for event in agent.stream_async("Analyze this data"):
                if "data" in event:
                    yield event["data"]
            ```
        """
        # Acquire lock to prevent concurrent invocations
        # Using threading.Lock instead of asyncio.Lock because run_async() creates
        # separate event loops in different threads
        acquired = self._invocation_lock.acquire(blocking=False)
        if not acquired:
            raise ConcurrencyException(
                "Agent is already processing a request. Concurrent invocations are not supported."
            )

        try:
            self._interrupt_state.resume(prompt)

            self.event_loop_metrics.reset_usage_metrics()

            merged_state = {}
            if kwargs:
                warnings.warn("The `**kwargs` parameter is deprecated, use `invocation_state` instead.", stacklevel=2)
                merged_state.update(kwargs)
                if invocation_state is not None:
                    merged_state["invocation_state"] = invocation_state
            else:
                if invocation_state is not None:
                    merged_state = invocation_state

            callback_handler = self.callback_handler
            if kwargs:
                callback_handler = kwargs.get("callback_handler", self.callback_handler)

            # Process input and get message to add (if any)
            messages = await self._convert_prompt_to_messages(prompt)

            self.trace_span = self._start_agent_trace_span(messages)

            with trace_api.use_span(self.trace_span):
                try:
                    events = self._run_loop(messages, merged_state, structured_output_model, structured_output_prompt)

                    async for event in events:
                        event.prepare(invocation_state=merged_state)

                        if event.is_callback_event:
                            as_dict = event.as_dict()
                            callback_handler(**as_dict)
                            yield as_dict

                    result = AgentResult(*event["stop"])
                    callback_handler(result=result)
                    yield AgentResultEvent(result=result).as_dict()

                    self._end_agent_trace_span(response=result)

                except Exception as e:
                    self._end_agent_trace_span(error=e)
                    raise

        finally:
            self._invocation_lock.release()

    async def _run_loop(
        self,
        messages: Messages,
        invocation_state: dict[str, Any],
        structured_output_model: type[BaseModel] | None = None,
        structured_output_prompt: str | None = None,
    ) -> AsyncGenerator[TypedEvent, None]:
        """Execute the agent's event loop with the given message and parameters.

        Args:
            messages: The input messages to add to the conversation.
            invocation_state: Additional parameters to pass to the event loop.
            structured_output_model: Optional Pydantic model type for structured output.
            structured_output_prompt: Optional custom prompt for forcing structured output.

        Yields:
            Events from the event loop cycle.
        """
        before_invocation_event, _interrupts = await self.hooks.invoke_callbacks_async(
            BeforeInvocationEvent(agent=self, invocation_state=invocation_state, messages=messages)
        )
        messages = before_invocation_event.messages if before_invocation_event.messages is not None else messages

        agent_result: AgentResult | None = None
        try:
            yield InitEventLoopEvent()

            await self._append_messages(*messages)

            structured_output_context = StructuredOutputContext(
                structured_output_model or self._default_structured_output_model,
                structured_output_prompt=structured_output_prompt or self._structured_output_prompt,
            )

            # Execute the event loop cycle with retry logic for context limits
            events = self._execute_event_loop_cycle(invocation_state, structured_output_context)
            async for event in events:
                # Signal from the model provider that the message sent by the user should be redacted,
                # likely due to a guardrail.
                if (
                    isinstance(event, ModelStreamChunkEvent)
                    and event.chunk
                    and event.chunk.get("redactContent")
                    and event.chunk["redactContent"].get("redactUserContentMessage")
                ):
                    self.messages[-1]["content"] = self._redact_user_content(
                        self.messages[-1]["content"], str(event.chunk["redactContent"]["redactUserContentMessage"])
                    )
                    if self._session_manager:
                        self._session_manager.redact_latest_message(self.messages[-1], self)
                yield event

            # Capture the result from the final event if available
            if isinstance(event, EventLoopStopEvent):
                agent_result = AgentResult(*event["stop"])

        finally:
            self.conversation_manager.apply_management(self)
            await self.hooks.invoke_callbacks_async(
                AfterInvocationEvent(agent=self, invocation_state=invocation_state, result=agent_result)
            )

    async def _execute_event_loop_cycle(
        self, invocation_state: dict[str, Any], structured_output_context: StructuredOutputContext | None = None
    ) -> AsyncGenerator[TypedEvent, None]:
        """Execute the event loop cycle with retry logic for context window limits.

        This internal method handles the execution of the event loop cycle and implements
        retry logic for handling context window overflow exceptions by reducing the
        conversation context and retrying.

        Args:
            invocation_state: Additional parameters to pass to the event loop.
            structured_output_context: Optional structured output context for this invocation.

        Yields:
            Events of the loop cycle.
        """
        # Add `Agent` to invocation_state to keep backwards-compatibility
        invocation_state["agent"] = self

        if structured_output_context:
            structured_output_context.register_tool(self.tool_registry)

        try:
            events = event_loop_cycle(
                agent=self,
                invocation_state=invocation_state,
                structured_output_context=structured_output_context,
            )
            async for event in events:
                yield event

        except ContextWindowOverflowException as e:
            # Try reducing the context size and retrying
            self.conversation_manager.reduce_context(self, e=e)

            # Sync agent after reduce_context to keep conversation_manager_state up to date in the session
            if self._session_manager:
                self._session_manager.sync_agent(self)

            events = self._execute_event_loop_cycle(invocation_state, structured_output_context)
            async for event in events:
                yield event

        finally:
            if structured_output_context:
                structured_output_context.cleanup(self.tool_registry)

    async def _convert_prompt_to_messages(self, prompt: AgentInput) -> Messages:
        if self._interrupt_state.activated:
            return []

        messages: Messages | None = None
        if prompt is not None:
            # Check if the latest message is toolUse
            if len(self.messages) > 0 and any("toolUse" in content for content in self.messages[-1]["content"]):
                # Append a toolResult message so the conversation remains valid
                logger.info(
                    "Agent's latest message is a toolUse; appending a toolResult message to keep the conversation valid."
                )
                tool_use_ids = [
                    content["toolUse"]["toolUseId"] for content in self.messages[-1]["content"] if "toolUse" in content
                ]
                await self._append_messages(
                    {
                        "role": "user",
                        "content": generate_missing_tool_result_content(tool_use_ids),
                    }
                )
            if isinstance(prompt, str):
                # String input - convert to user message
                messages = [{"role": "user", "content": [{"text": prompt}]}]
            elif isinstance(prompt, list):
                if len(prompt) == 0:
                    # Empty list
                    messages = []
                # Check if all items in the input list are dictionaries
                elif all(isinstance(item, dict) for item in prompt):
                    # Check if all items are messages
                    if all(all(key in item for key in Message.__annotations__.keys()) for item in prompt):
                        # Messages input - add all messages to conversation
                        messages = cast(Messages, prompt)

                    # Check if all items are content blocks
                    elif all(any(key in ContentBlock.__annotations__.keys() for key in item) for item in prompt):
                        # Treat as List[ContentBlock] input - convert to user message
                        # This allows invalid structures to be passed through to the model
                        messages = [{"role": "user", "content": cast(list[ContentBlock], prompt)}]
        else:
            messages = []
        if messages is None:
            raise ValueError("Input prompt must be of type: `str | list[ContentBlock] | Messages | None`.")
        return messages

    def _start_agent_trace_span(self, messages: Messages) -> trace_api.Span:
        """Starts a trace span for the agent.

        Args:
            messages: The input messages.
        """
        model_id = self.model.config.get("model_id") if hasattr(self.model, "config") else None
        return self.tracer.start_agent_span(
            messages=messages,
            agent_name=self.name,
            model_id=model_id,
            tools=self.tool_names,
            system_prompt=self.system_prompt,
            custom_trace_attributes=self.trace_attributes,
            tools_config=self.tool_registry.get_all_tools_config(),
        )

    def _end_agent_trace_span(
        self,
        response: AgentResult | None = None,
        error: Exception | None = None,
    ) -> None:
        """Ends a trace span for the agent.

        Args:
            response: Response to record as a trace attribute.
            error: Error to record as a trace attribute.
        """
        if self.trace_span:
            trace_attributes: dict[str, Any] = {
                "span": self.trace_span,
            }

            if response:
                trace_attributes["response"] = response
            if error:
                trace_attributes["error"] = error

            self.tracer.end_agent_span(**trace_attributes)

    def _initialize_system_prompt(
        self, system_prompt: str | list[SystemContentBlock] | None
    ) -> tuple[str | None, list[SystemContentBlock] | None]:
        """Initialize system prompt fields from constructor input.

        Maintains backwards compatibility by keeping system_prompt as str when string input
        provided, avoiding breaking existing consumers.

        Maps system_prompt input to both string and content block representations:
        - If string: system_prompt=string, _system_prompt_content=[{text: string}]
        - If list with text elements: system_prompt=concatenated_text, _system_prompt_content=list
        - If list without text elements: system_prompt=None, _system_prompt_content=list
        - If None: system_prompt=None, _system_prompt_content=None
        """
        if isinstance(system_prompt, str):
            return system_prompt, [{"text": system_prompt}]
        elif isinstance(system_prompt, list):
            # Concatenate all text elements for backwards compatibility, None if no text found
            text_parts = [block["text"] for block in system_prompt if "text" in block]
            system_prompt_str = "\n".join(text_parts) if text_parts else None
            return system_prompt_str, system_prompt
        else:
            return None, None

    async def _append_messages(self, *messages: Message) -> None:
        """Appends messages to history and invoke the callbacks for the MessageAddedEvent."""
        for message in messages:
            self.messages.append(message)
            await self.hooks.invoke_callbacks_async(MessageAddedEvent(agent=self, message=message))

    def _redact_user_content(self, content: list[ContentBlock], redact_message: str) -> list[ContentBlock]:
        """Redact user content preserving toolResult blocks.

        Args:
            content: content blocks to be redacted
            redact_message: redact message to be replaced

        Returns:
            Redacted content, as follows:
            - if the message contains at least a toolResult block,
                all toolResult block(s) are kept, redacting only the result content;
            - otherwise, the entire content of the message is replaced
                with a single text block with the redact message.
        """
        redacted_content = []
        for block in content:
            if "toolResult" in block:
                block["toolResult"]["content"] = [{"text": redact_message}]
                redacted_content.append(block)

        if not redacted_content:
            # Text content is added only if no toolResult blocks were found
            redacted_content = [{"text": redact_message}]

        return redacted_content
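
The redaction rule above can be sketched in isolation. The following is a simplified model of the documented behavior (not the SDK's internal implementation; the function name and plain-dict types are illustrative):

```python
# Sketch of the documented redaction rule: toolResult blocks survive with
# their result content replaced; content with no toolResult blocks collapses
# to a single text block carrying the redact message.
def redact_user_content(content: list[dict], redact_message: str) -> list[dict]:
    redacted = []
    for block in content:
        if "toolResult" in block:
            # Keep the block, but replace only its result content.
            block["toolResult"]["content"] = [{"text": redact_message}]
            redacted.append(block)
    # No toolResult blocks found: replace everything with one text block.
    return redacted or [{"text": redact_message}]
```

This preserves tool-use/tool-result pairing in the conversation, which model providers typically require, while still removing the redacted content itself.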

system_prompt property writable

Get the system prompt as a string for backwards compatibility.

Returns the system prompt as a concatenated string when it contains text content, or None if no text content is present. This maintains backwards compatibility with existing code that expects system_prompt to be a string.

Returns:

Type Description
str | None

The system prompt as a string, or None if no text content exists.
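
The string/content-block mapping behind this property can be sketched as follows. This is a simplified model of the documented `_initialize_system_prompt` behavior, with an illustrative function name:

```python
# Sketch of the documented system_prompt mapping: string input yields both a
# string and a one-element content list; list input concatenates its text
# blocks (None if no text blocks exist); None yields (None, None).
def initialize_system_prompt(system_prompt):
    if isinstance(system_prompt, str):
        return system_prompt, [{"text": system_prompt}]
    if isinstance(system_prompt, list):
        text_parts = [block["text"] for block in system_prompt if "text" in block]
        return ("\n".join(text_parts) if text_parts else None), system_prompt
    return None, None
```

The first element is what this property returns for backwards compatibility; the second is the content-block form sent to the model.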

tool property

Call tool as a function.

Returns:

Type Description
_ToolCaller

Tool caller through which the user can invoke tools as functions.

Example
agent = Agent(tools=[calculator])
agent.tool.calculator(...)

tool_names property

Get a list of all registered tool names.

Returns:

Type Description
list[str]

Names of all tools available to this agent.

__call__(prompt=None, *, invocation_state=None, structured_output_model=None, structured_output_prompt=None, **kwargs)

Process a natural language prompt through the agent's event loop.

This method implements the conversational interface with multiple input patterns:

  • String input: agent("hello!")
  • ContentBlock list: agent([{"text": "hello"}, {"image": {...}}])
  • Message list: agent([{"role": "user", "content": [{"text": "hello"}]}])
  • No input: agent() - uses existing conversation history

Parameters:

Name Type Description Default
prompt AgentInput

User input in various formats:

  • str: Simple text input
  • list[ContentBlock]: Multi-modal content blocks
  • list[Message]: Complete messages with roles
  • None: Use existing conversation history

None
invocation_state dict[str, Any] | None

Additional parameters to pass through the event loop.

None
structured_output_model type[BaseModel] | None

Pydantic model type(s) for structured output (overrides agent default).

None
structured_output_prompt str | None

Custom prompt for forcing structured output (overrides agent default).

None
**kwargs Any

Additional parameters to pass through the event loop. [Deprecated]

{}

Returns:

Type Description
AgentResult

Result object containing:

  • stop_reason: Why the event loop stopped (e.g., "end_turn", "max_tokens")
  • message: The final message from the model
  • metrics: Performance metrics from the event loop
  • state: The final state of the event loop
  • structured_output: Parsed structured output when structured_output_model was specified
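
The input patterns above are normalized into messages before the event loop runs. The following is a simplified sketch of that normalization (modeled on `_convert_prompt_to_messages`; the function name and bare-dict message shapes are illustrative, and the real implementation checks keys against the `Message` and `ContentBlock` type annotations):

```python
# Sketch of how the documented input patterns normalize to messages.
def prompt_to_messages(prompt):
    if prompt is None:
        return []  # agent(): reuse the existing conversation history
    if isinstance(prompt, str):
        # Plain string becomes a single user message with one text block.
        return [{"role": "user", "content": [{"text": prompt}]}]
    if isinstance(prompt, list):
        if not prompt:
            return []
        if all(isinstance(item, dict) for item in prompt):
            if all({"role", "content"} <= item.keys() for item in prompt):
                return prompt  # already complete messages
            # Otherwise treat the list as content blocks of one user message.
            return [{"role": "user", "content": prompt}]
    raise ValueError("Input prompt must be str | list[ContentBlock] | Messages | None.")
```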
Source code in strands/agent/agent.py
def __call__(
    self,
    prompt: AgentInput = None,
    *,
    invocation_state: dict[str, Any] | None = None,
    structured_output_model: type[BaseModel] | None = None,
    structured_output_prompt: str | None = None,
    **kwargs: Any,
) -> AgentResult:
    """Process a natural language prompt through the agent's event loop.

    This method implements the conversational interface with multiple input patterns:
    - String input: `agent("hello!")`
    - ContentBlock list: `agent([{"text": "hello"}, {"image": {...}}])`
    - Message list: `agent([{"role": "user", "content": [{"text": "hello"}]}])`
    - No input: `agent()` - uses existing conversation history

    Args:
        prompt: User input in various formats:
            - str: Simple text input
            - list[ContentBlock]: Multi-modal content blocks
            - list[Message]: Complete messages with roles
            - None: Use existing conversation history
        invocation_state: Additional parameters to pass through the event loop.
        structured_output_model: Pydantic model type(s) for structured output (overrides agent default).
        structured_output_prompt: Custom prompt for forcing structured output (overrides agent default).
        **kwargs: Additional parameters to pass through the event loop.[Deprecating]

    Returns:
        Result object containing:

            - stop_reason: Why the event loop stopped (e.g., "end_turn", "max_tokens")
            - message: The final message from the model
            - metrics: Performance metrics from the event loop
            - state: The final state of the event loop
            - structured_output: Parsed structured output when structured_output_model was specified
    """
    return run_async(
        lambda: self.invoke_async(
            prompt,
            invocation_state=invocation_state,
            structured_output_model=structured_output_model,
            structured_output_prompt=structured_output_prompt,
            **kwargs,
        )
    )

__del__()

Clean up resources when agent is garbage collected.

Source code in strands/agent/agent.py
def __del__(self) -> None:
    """Clean up resources when agent is garbage collected."""
    # __del__ is called even when an exception is thrown in the constructor,
    # so there is no guarantee tool_registry was set.
    if hasattr(self, "tool_registry"):
        self.tool_registry.cleanup()

__init__(model=None, messages=None, tools=None, system_prompt=None, structured_output_model=None, callback_handler=_DEFAULT_CALLBACK_HANDLER, conversation_manager=None, record_direct_tool_call=True, load_tools_from_directory=False, trace_attributes=None, *, agent_id=None, name=None, description=None, state=None, hooks=None, session_manager=None, structured_output_prompt=None, tool_executor=None, retry_strategy=_DEFAULT_RETRY_STRATEGY)

Initialize the Agent with the specified configuration.

Parameters:

Name Type Description Default
model Model | str | None

Provider for running inference or a string representing the model-id for Bedrock to use. Defaults to strands.models.BedrockModel if None.

None
messages Messages | None

List of initial messages to pre-load into the conversation. Defaults to an empty list if None.

None
tools list[Union[str, dict[str, str], ToolProvider, Any]] | None

List of tools to make available to the agent. Can be specified as:

  • String tool names (e.g., "retrieve")
  • File paths (e.g., "/path/to/tool.py")
  • Imported Python modules (e.g., from strands_tools import current_time)
  • Dictionaries with name/path keys (e.g., {"name": "tool_name", "path": "/path/to/tool.py"})
  • ToolProvider instances for managed tool collections
  • Functions decorated with @strands.tool decorator.

If provided, only these tools will be available. If None, all tools will be available.

None
system_prompt str | list[SystemContentBlock] | None

System prompt to guide model behavior. Can be a string or a list of SystemContentBlock objects for advanced features like caching. If None, the model will behave according to its default settings.

None
structured_output_model type[BaseModel] | None

Pydantic model type(s) for structured output. When specified, all agent calls will attempt to return structured output of this type. This can be overridden on the agent invocation. Defaults to None (no structured output).

None
callback_handler Callable[..., Any] | _DefaultCallbackHandlerSentinel | None

Callback for processing events as they happen during agent execution. If not provided (using the default), a new PrintingCallbackHandler instance is created. If explicitly set to None, null_callback_handler is used.

_DEFAULT_CALLBACK_HANDLER
conversation_manager ConversationManager | None

Manager for conversation history and context window. Defaults to strands.agent.conversation_manager.SlidingWindowConversationManager if None.

None
record_direct_tool_call bool

Whether to record direct tool calls in message history. Defaults to True.

True
load_tools_from_directory bool

Whether to load and automatically reload tools in the ./tools/ directory. Defaults to False.

False
trace_attributes Mapping[str, AttributeValue] | None

Custom trace attributes to apply to the agent's trace span.

None
agent_id str | None

Optional ID for the agent, useful for session management and multi-agent scenarios. Defaults to "default".

None
name str | None

Name of the agent. Defaults to "Strands Agents".

None
description str | None

Description of what the agent does. Defaults to None.

None
state AgentState | dict | None

Stateful information for the agent. Can be either an AgentState object or a JSON-serializable dict. Defaults to an empty AgentState object.

None
hooks list[HookProvider] | None

Hooks to be added to the agent hook registry. Defaults to None.

None
session_manager SessionManager | None

Manager for handling agent sessions including conversation history and state. If provided, enables session-based persistence and state management.

None
structured_output_prompt str | None

Custom prompt message used when forcing structured output. When using structured output, if the model doesn't automatically use the output tool, the agent sends a follow-up message to request structured formatting. This parameter allows customizing that message. Defaults to "You must format the previous response as structured output."

None
tool_executor ToolExecutor | None

Definition of tool execution strategy (e.g., sequential, concurrent, etc.).

None
retry_strategy ModelRetryStrategy | _DefaultRetryStrategySentinel | None

Strategy for retrying model calls on throttling or other transient errors. Defaults to ModelRetryStrategy with max_attempts=6, initial_delay=4s, max_delay=240s. Implement a custom HookProvider for custom retry logic, or pass None to disable retries.

_DEFAULT_RETRY_STRATEGY

Raises:

Type Description
ValueError

If agent id contains path separators.
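
The trace_attributes filtering described above can be sketched in isolation. This mirrors the constructor's type check (the function name is illustrative): only primitive values, or lists of primitives, are kept, and unsupported values are dropped silently:

```python
# Sketch of the documented trace-attribute filtering: keep str/int/float/bool
# values and lists whose elements are all such primitives; drop the rest.
def filter_trace_attributes(trace_attributes):
    kept = {}
    for k, v in (trace_attributes or {}).items():
        if isinstance(v, (str, int, float, bool)) or (
            isinstance(v, list) and all(isinstance(x, (str, int, float, bool)) for x in v)
        ):
            kept[k] = v
    return kept
```

Nested dicts or objects are therefore never forwarded to the trace span, so serialize any structured metadata to strings before passing it in.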

Source code in strands/agent/agent.py
def __init__(
    self,
    model: Model | str | None = None,
    messages: Messages | None = None,
    tools: list[Union[str, dict[str, str], "ToolProvider", Any]] | None = None,
    system_prompt: str | list[SystemContentBlock] | None = None,
    structured_output_model: type[BaseModel] | None = None,
    callback_handler: Callable[..., Any] | _DefaultCallbackHandlerSentinel | None = _DEFAULT_CALLBACK_HANDLER,
    conversation_manager: ConversationManager | None = None,
    record_direct_tool_call: bool = True,
    load_tools_from_directory: bool = False,
    trace_attributes: Mapping[str, AttributeValue] | None = None,
    *,
    agent_id: str | None = None,
    name: str | None = None,
    description: str | None = None,
    state: AgentState | dict | None = None,
    hooks: list[HookProvider] | None = None,
    session_manager: SessionManager | None = None,
    structured_output_prompt: str | None = None,
    tool_executor: ToolExecutor | None = None,
    retry_strategy: ModelRetryStrategy | _DefaultRetryStrategySentinel | None = _DEFAULT_RETRY_STRATEGY,
):
    """Initialize the Agent with the specified configuration.

    Args:
        model: Provider for running inference or a string representing the model-id for Bedrock to use.
            Defaults to strands.models.BedrockModel if None.
        messages: List of initial messages to pre-load into the conversation.
            Defaults to an empty list if None.
        tools: List of tools to make available to the agent.
            Can be specified as:

            - String tool names (e.g., "retrieve")
            - File paths (e.g., "/path/to/tool.py")
            - Imported Python modules (e.g., from strands_tools import current_time)
            - Dictionaries with name/path keys (e.g., {"name": "tool_name", "path": "/path/to/tool.py"})
            - ToolProvider instances for managed tool collections
            - Functions decorated with `@strands.tool` decorator.

            If provided, only these tools will be available. If None, all tools will be available.
        system_prompt: System prompt to guide model behavior.
            Can be a string or a list of SystemContentBlock objects for advanced features like caching.
            If None, the model will behave according to its default settings.
        structured_output_model: Pydantic model type(s) for structured output.
            When specified, all agent calls will attempt to return structured output of this type.
            This can be overridden on the agent invocation.
            Defaults to None (no structured output).
        callback_handler: Callback for processing events as they happen during agent execution.
            If not provided (using the default), a new PrintingCallbackHandler instance is created.
            If explicitly set to None, null_callback_handler is used.
        conversation_manager: Manager for conversation history and context window.
            Defaults to strands.agent.conversation_manager.SlidingWindowConversationManager if None.
        record_direct_tool_call: Whether to record direct tool calls in message history.
            Defaults to True.
        load_tools_from_directory: Whether to load and automatically reload tools in the `./tools/` directory.
            Defaults to False.
        trace_attributes: Custom trace attributes to apply to the agent's trace span.
        agent_id: Optional ID for the agent, useful for session management and multi-agent scenarios.
            Defaults to "default".
        name: Name of the agent.
            Defaults to "Strands Agents".
        description: Description of what the agent does.
            Defaults to None.
        state: Stateful information for the agent. Can be either an AgentState object or a JSON-serializable dict.
            Defaults to an empty AgentState object.
        hooks: Hooks to be added to the agent hook registry.
            Defaults to None.
        session_manager: Manager for handling agent sessions including conversation history and state.
            If provided, enables session-based persistence and state management.
        structured_output_prompt: Custom prompt message used when forcing structured output.
            When using structured output, if the model doesn't automatically use the output tool,
            the agent sends a follow-up message to request structured formatting. This parameter
            allows customizing that message.
            Defaults to "You must format the previous response as structured output."
        tool_executor: Definition of tool execution strategy (e.g., sequential, concurrent, etc.).
        retry_strategy: Strategy for retrying model calls on throttling or other transient errors.
            Defaults to ModelRetryStrategy with max_attempts=6, initial_delay=4s, max_delay=240s.
            Implement a custom HookProvider for custom retry logic, or pass None to disable retries.

    Raises:
        ValueError: If agent id contains path separators.
    """
    self.model = BedrockModel() if not model else BedrockModel(model_id=model) if isinstance(model, str) else model
    self.messages = messages if messages is not None else []
    # initializing self._system_prompt for backwards compatibility
    self._system_prompt, self._system_prompt_content = self._initialize_system_prompt(system_prompt)
    self._default_structured_output_model = structured_output_model
    self._structured_output_prompt = structured_output_prompt
    self.agent_id = _identifier.validate(agent_id or _DEFAULT_AGENT_ID, _identifier.Identifier.AGENT)
    self.name = name or _DEFAULT_AGENT_NAME
    self.description = description

    # If not provided, create a new PrintingCallbackHandler instance
    # If explicitly set to None, use null_callback_handler
    # Otherwise use the passed callback_handler
    self.callback_handler: Callable[..., Any] | PrintingCallbackHandler
    if isinstance(callback_handler, _DefaultCallbackHandlerSentinel):
        self.callback_handler = PrintingCallbackHandler()
    elif callback_handler is None:
        self.callback_handler = null_callback_handler
    else:
        self.callback_handler = callback_handler

    self.conversation_manager = conversation_manager if conversation_manager else SlidingWindowConversationManager()

    # Process trace attributes to ensure they're of compatible types
    self.trace_attributes: dict[str, AttributeValue] = {}
    if trace_attributes:
        for k, v in trace_attributes.items():
            if isinstance(v, (str, int, float, bool)) or (
                isinstance(v, list) and all(isinstance(x, (str, int, float, bool)) for x in v)
            ):
                self.trace_attributes[k] = v

    self.record_direct_tool_call = record_direct_tool_call
    self.load_tools_from_directory = load_tools_from_directory

    self.tool_registry = ToolRegistry()

    # Process tool list if provided
    if tools is not None:
        self.tool_registry.process_tools(tools)

    # Initialize tools and configuration
    self.tool_registry.initialize_tools(self.load_tools_from_directory)
    if load_tools_from_directory:
        self.tool_watcher = ToolWatcher(tool_registry=self.tool_registry)

    self.event_loop_metrics = EventLoopMetrics()

    # Initialize tracer instance (no-op if not configured)
    self.tracer = get_tracer()
    self.trace_span: trace_api.Span | None = None

    # Initialize agent state management
    if state is not None:
        if isinstance(state, dict):
            self.state = AgentState(state)
        elif isinstance(state, AgentState):
            self.state = state
        else:
            raise ValueError("state must be an AgentState object or a dict")
    else:
        self.state = AgentState()

    self.tool_caller = _ToolCaller(self)

    self.hooks = HookRegistry()

    self._interrupt_state = _InterruptState()

    # Initialize lock for guarding concurrent invocations
    # Using threading.Lock instead of asyncio.Lock because run_async() creates
    # separate event loops in different threads, so asyncio.Lock wouldn't work
    self._invocation_lock = threading.Lock()

    # In the future, we'll have a RetryStrategy base class but until
    # that API is determined we only allow ModelRetryStrategy
    if (
        retry_strategy is not None
        and not isinstance(retry_strategy, _DefaultRetryStrategySentinel)
        and type(retry_strategy) is not ModelRetryStrategy
    ):
        raise ValueError("retry_strategy must be an instance of ModelRetryStrategy")

    # If not provided (using the default), create a new ModelRetryStrategy instance
    # If explicitly set to None, disable retries (max_attempts=1 means no retries)
    # Otherwise use the passed retry_strategy
    if isinstance(retry_strategy, _DefaultRetryStrategySentinel):
        self._retry_strategy = ModelRetryStrategy(
            max_attempts=MAX_ATTEMPTS, max_delay=MAX_DELAY, initial_delay=INITIAL_DELAY
        )
    elif retry_strategy is None:
        # If no retry strategy is passed in, then we turn retries off
        self._retry_strategy = ModelRetryStrategy(max_attempts=1)
    else:
        self._retry_strategy = retry_strategy

    # Initialize session management functionality
    self._session_manager = session_manager
    if self._session_manager:
        self.hooks.add_hook(self._session_manager)

    # Allow conversation_managers to subscribe to hooks
    self.hooks.add_hook(self.conversation_manager)

    # Register retry strategy as a hook
    self.hooks.add_hook(self._retry_strategy)

    self.tool_executor = tool_executor or ConcurrentToolExecutor()

    if hooks:
        for hook in hooks:
            self.hooks.add_hook(hook)
    self.hooks.invoke_callbacks(AgentInitializedEvent(agent=self))
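The default retry configuration above (MAX_ATTEMPTS=6, INITIAL_DELAY=4, MAX_DELAY=240) suggests an exponential backoff schedule. As a rough sketch, assuming simple doubling capped at MAX_DELAY (the actual ModelRetryStrategy internals may add jitter or otherwise differ):

```python
# Backoff schedule implied by the module defaults, assuming exponential
# doubling capped at MAX_DELAY. Illustrative only; the real
# ModelRetryStrategy may differ in details.
INITIAL_DELAY = 4
MAX_DELAY = 240
MAX_ATTEMPTS = 6

delays = []
delay = INITIAL_DELAY
for _attempt in range(1, MAX_ATTEMPTS):  # no delay after the final attempt
    delays.append(delay)
    delay = min(delay * 2, MAX_DELAY)

print(delays)  # [4, 8, 16, 32, 64]
```

With these defaults a fully throttled request waits at most about two minutes in total before the sixth and final attempt.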

cleanup()

Clean up resources used by the agent.

This method cleans up all tool providers that require explicit cleanup, such as MCP clients. It should be called when the agent is no longer needed to ensure proper resource cleanup.

Note: This method uses a "belt and braces" approach with automatic cleanup through finalizers as a fallback, but explicit cleanup is recommended.

Source code in strands/agent/agent.py
def cleanup(self) -> None:
    """Clean up resources used by the agent.

    This method cleans up all tool providers that require explicit cleanup,
    such as MCP clients. It should be called when the agent is no longer needed
    to ensure proper resource cleanup.

    Note: This method uses a "belt and braces" approach with automatic cleanup
    through finalizers as a fallback, but explicit cleanup is recommended.
    """
    self.tool_registry.cleanup()
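Explicit cleanup pairs naturally with try/finally. The sketch below uses a hypothetical stub in place of a real Agent (which would require a model and credentials) to show the recommended shape:

```python
# Hypothetical stand-in for Agent, used only to illustrate the
# try/finally cleanup pattern without requiring model credentials.
class StubAgent:
    def __init__(self):
        self.cleaned_up = False

    def __call__(self, prompt):
        return f"result for: {prompt}"

    def cleanup(self):
        # A real Agent.cleanup() releases tool providers such as MCP clients.
        self.cleaned_up = True


agent = StubAgent()
try:
    result = agent("Analyze this data")
finally:
    agent.cleanup()  # explicit cleanup, rather than relying on finalizers

print(agent.cleaned_up)  # True
```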

invoke_async(prompt=None, *, invocation_state=None, structured_output_model=None, structured_output_prompt=None, **kwargs) async

Process a natural language prompt through the agent's event loop.

This method implements the conversational interface with multiple input patterns:

  • String input: Simple text input
  • ContentBlock list: Multi-modal content blocks
  • Message list: Complete messages with roles
  • No input: Use existing conversation history

Parameters:

  • prompt (AgentInput, default None): User input in various formats: str (simple text input), list[ContentBlock] (multi-modal content blocks), list[Message] (complete messages with roles), or None (use existing conversation history).
  • invocation_state (dict[str, Any] | None, default None): Additional parameters to pass through the event loop.
  • structured_output_model (type[BaseModel] | None, default None): Pydantic model type for structured output (overrides agent default).
  • structured_output_prompt (str | None, default None): Custom prompt for forcing structured output (overrides agent default).
  • **kwargs (Any, default {}): Additional parameters to pass through the event loop (deprecated; use invocation_state instead).

Returns:

AgentResult object containing:

  • stop_reason: Why the event loop stopped (e.g., "end_turn", "max_tokens")
  • message: The final message from the model
  • metrics: Performance metrics from the event loop
  • state: The final state of the event loop
Source code in strands/agent/agent.py
async def invoke_async(
    self,
    prompt: AgentInput = None,
    *,
    invocation_state: dict[str, Any] | None = None,
    structured_output_model: type[BaseModel] | None = None,
    structured_output_prompt: str | None = None,
    **kwargs: Any,
) -> AgentResult:
    """Process a natural language prompt through the agent's event loop.

    This method implements the conversational interface with multiple input patterns:
    - String input: Simple text input
    - ContentBlock list: Multi-modal content blocks
    - Message list: Complete messages with roles
    - No input: Use existing conversation history

    Args:
        prompt: User input in various formats:
            - str: Simple text input
            - list[ContentBlock]: Multi-modal content blocks
            - list[Message]: Complete messages with roles
            - None: Use existing conversation history
        invocation_state: Additional parameters to pass through the event loop.
        structured_output_model: Pydantic model type(s) for structured output (overrides agent default).
        structured_output_prompt: Custom prompt for forcing structured output (overrides agent default).
        **kwargs: Additional parameters to pass through the event loop (deprecated; use invocation_state instead).

    Returns:
        Result: object containing:

            - stop_reason: Why the event loop stopped (e.g., "end_turn", "max_tokens")
            - message: The final message from the model
            - metrics: Performance metrics from the event loop
            - state: The final state of the event loop
    """
    events = self.stream_async(
        prompt,
        invocation_state=invocation_state,
        structured_output_model=structured_output_model,
        structured_output_prompt=structured_output_prompt,
        **kwargs,
    )
    async for event in events:
        _ = event

    return cast(AgentResult, event["result"])
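The four accepted input shapes can be written down as plain data. ContentBlock and Message are dict-shaped in the SDK; the literals below are illustrative, and no model call is made:

```python
# The four AgentInput forms accepted by invoke_async / stream_async,
# shown as plain data. The dict shapes are illustrative.
text_prompt = "Analyze this data"                # str
content_blocks = [{"text": "Analyze this data"}]  # list[ContentBlock]
messages = [                                      # list[Message]
    {"role": "user", "content": [{"text": "Analyze this data"}]},
]
no_input = None  # reuse the existing conversation history
```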

stream_async(prompt=None, *, invocation_state=None, structured_output_model=None, structured_output_prompt=None, **kwargs) async

Process a natural language prompt and yield events as an async iterator.

This method provides an asynchronous interface for streaming agent events with multiple input patterns:

  • String input: Simple text input
  • ContentBlock list: Multi-modal content blocks
  • Message list: Complete messages with roles
  • No input: Use existing conversation history

Parameters:

  • prompt (AgentInput, default None): User input in various formats: str (simple text input), list[ContentBlock] (multi-modal content blocks), list[Message] (complete messages with roles), or None (use existing conversation history).
  • invocation_state (dict[str, Any] | None, default None): Additional parameters to pass through the event loop.
  • structured_output_model (type[BaseModel] | None, default None): Pydantic model type for structured output (overrides agent default).
  • structured_output_prompt (str | None, default None): Custom prompt for forcing structured output (overrides agent default).
  • **kwargs (Any, default {}): Additional parameters to pass to the event loop (deprecated; use invocation_state instead).

Yields:

AsyncIterator[Any]: An async iterator that yields events. Each event is a dictionary containing information about the current state of processing, such as:

  • data: Text content being generated
  • complete: Whether this is the final chunk
  • current_tool_use: Information about tools being executed
  • And other event data provided by the callback handler

Raises:

  • ConcurrencyException: If another invocation is already in progress on this agent instance.
  • Exception: Any exceptions from the agent invocation will be propagated to the caller.

Example
async for event in agent.stream_async("Analyze this data"):
    if "data" in event:
        yield event["data"]
Source code in strands/agent/agent.py
async def stream_async(
    self,
    prompt: AgentInput = None,
    *,
    invocation_state: dict[str, Any] | None = None,
    structured_output_model: type[BaseModel] | None = None,
    structured_output_prompt: str | None = None,
    **kwargs: Any,
) -> AsyncIterator[Any]:
    """Process a natural language prompt and yield events as an async iterator.

    This method provides an asynchronous interface for streaming agent events with multiple input patterns:
    - String input: Simple text input
    - ContentBlock list: Multi-modal content blocks
    - Message list: Complete messages with roles
    - No input: Use existing conversation history

    Args:
        prompt: User input in various formats:
            - str: Simple text input
            - list[ContentBlock]: Multi-modal content blocks
            - list[Message]: Complete messages with roles
            - None: Use existing conversation history
        invocation_state: Additional parameters to pass through the event loop.
        structured_output_model: Pydantic model type(s) for structured output (overrides agent default).
        structured_output_prompt: Custom prompt for forcing structured output (overrides agent default).
        **kwargs: Additional parameters to pass to the event loop (deprecated; use invocation_state instead).

    Yields:
        An async iterator that yields events. Each event is a dictionary containing
            information about the current state of processing, such as:

            - data: Text content being generated
            - complete: Whether this is the final chunk
            - current_tool_use: Information about tools being executed
            - And other event data provided by the callback handler

    Raises:
        ConcurrencyException: If another invocation is already in progress on this agent instance.
        Exception: Any exceptions from the agent invocation will be propagated to the caller.

    Example:
        ```python
        async for event in agent.stream_async("Analyze this data"):
            if "data" in event:
                yield event["data"]
        ```
    """
    # Acquire lock to prevent concurrent invocations
    # Using threading.Lock instead of asyncio.Lock because run_async() creates
    # separate event loops in different threads
    acquired = self._invocation_lock.acquire(blocking=False)
    if not acquired:
        raise ConcurrencyException(
            "Agent is already processing a request. Concurrent invocations are not supported."
        )

    try:
        self._interrupt_state.resume(prompt)

        self.event_loop_metrics.reset_usage_metrics()

        merged_state = {}
        if kwargs:
            warnings.warn("`**kwargs` parameter is deprecated, use `invocation_state` instead.", stacklevel=2)
            merged_state.update(kwargs)
            if invocation_state is not None:
                merged_state["invocation_state"] = invocation_state
        else:
            if invocation_state is not None:
                merged_state = invocation_state

        callback_handler = self.callback_handler
        if kwargs:
            callback_handler = kwargs.get("callback_handler", self.callback_handler)

        # Process input and get message to add (if any)
        messages = await self._convert_prompt_to_messages(prompt)

        self.trace_span = self._start_agent_trace_span(messages)

        with trace_api.use_span(self.trace_span):
            try:
                events = self._run_loop(messages, merged_state, structured_output_model, structured_output_prompt)

                async for event in events:
                    event.prepare(invocation_state=merged_state)

                    if event.is_callback_event:
                        as_dict = event.as_dict()
                        callback_handler(**as_dict)
                        yield as_dict

                result = AgentResult(*event["stop"])
                callback_handler(result=result)
                yield AgentResultEvent(result=result).as_dict()

                self._end_agent_trace_span(response=result)

            except Exception as e:
                self._end_agent_trace_span(error=e)
                raise

    finally:
        self._invocation_lock.release()
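The deprecated-kwargs merging at the top of stream_async can be isolated as a small helper: when legacy **kwargs are present they become the merged state, with any explicit invocation_state nested under the "invocation_state" key; otherwise invocation_state is used directly. A standalone sketch (merge_state is a hypothetical name, not an SDK function):

```python
# Hypothetical helper reproducing stream_async's merging of the deprecated
# **kwargs with the explicit invocation_state parameter.
def merge_state(invocation_state=None, **kwargs):
    merged = {}
    if kwargs:
        merged.update(kwargs)
        if invocation_state is not None:
            merged["invocation_state"] = invocation_state
    elif invocation_state is not None:
        merged = invocation_state
    return merged


print(merge_state({"a": 1}))            # {'a': 1}
print(merge_state({"a": 1}, legacy=2))  # {'legacy': 2, 'invocation_state': {'a': 1}}
print(merge_state())                    # {}
```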

structured_output(output_model, prompt=None)

This method allows you to get structured output from the agent.

If you pass in a prompt, it will be used temporarily without adding it to the conversation history. If you don't pass in a prompt, it will use only the existing conversation history to respond.

For smaller models, you may want to use the optional prompt to add additional instructions to explicitly instruct the model to output the structured data.

Parameters:

  • output_model (type[T], required): The output model (a JSON schema written as a Pydantic BaseModel) that the agent will use when responding.
  • prompt (AgentInput, default None): The prompt to use for the agent in various formats: str (simple text input), list[ContentBlock] (multi-modal content blocks), list[Message] (complete messages with roles), or None (use existing conversation history).

Raises:

  • ValueError: If no conversation history or prompt is provided.

Source code in strands/agent/agent.py
def structured_output(self, output_model: type[T], prompt: AgentInput = None) -> T:
    """This method allows you to get structured output from the agent.

    If you pass in a prompt, it will be used temporarily without adding it to the conversation history.
    If you don't pass in a prompt, it will use only the existing conversation history to respond.

    For smaller models, you may want to use the optional prompt to add additional instructions to explicitly
    instruct the model to output the structured data.

    Args:
        output_model: The output model (a JSON schema written as a Pydantic BaseModel)
            that the agent will use when responding.
        prompt: The prompt to use for the agent in various formats:
            - str: Simple text input
            - list[ContentBlock]: Multi-modal content blocks
            - list[Message]: Complete messages with roles
            - None: Use existing conversation history

    Raises:
        ValueError: If no conversation history or prompt is provided.
    """
    warnings.warn(
        "Agent.structured_output method is deprecated."
        " You should pass in `structured_output_model` directly into the agent invocation."
        " see: https://strandsagents.com/latest/documentation/docs/user-guide/concepts/agents/structured-output/",
        category=DeprecationWarning,
        stacklevel=2,
    )

    return run_async(lambda: self.structured_output_async(output_model, prompt))

structured_output_async(output_model, prompt=None) async

This method allows you to get structured output from the agent.

If you pass in a prompt, it will be used temporarily without adding it to the conversation history. If you don't pass in a prompt, it will use only the existing conversation history to respond.

For smaller models, you may want to use the optional prompt to add additional instructions to explicitly instruct the model to output the structured data.

Parameters:

  • output_model (type[T], required): The output model (a JSON schema written as a Pydantic BaseModel) that the agent will use when responding.
  • prompt (AgentInput, default None): The prompt to use for the agent (will not be added to conversation history).

Raises:

  • ValueError: If no conversation history or prompt is provided.

Source code in strands/agent/agent.py
async def structured_output_async(self, output_model: type[T], prompt: AgentInput = None) -> T:
    """This method allows you to get structured output from the agent.

    If you pass in a prompt, it will be used temporarily without adding it to the conversation history.
    If you don't pass in a prompt, it will use only the existing conversation history to respond.

    For smaller models, you may want to use the optional prompt to add additional instructions to explicitly
    instruct the model to output the structured data.

    Args:
        output_model: The output model (a JSON schema written as a Pydantic BaseModel)
            that the agent will use when responding.
        prompt: The prompt to use for the agent (will not be added to conversation history).

    Raises:
        ValueError: If no conversation history or prompt is provided.
    """
    if self._interrupt_state.activated:
        raise RuntimeError("cannot call structured output during interrupt")

    warnings.warn(
        "Agent.structured_output_async method is deprecated."
        " You should pass in `structured_output_model` directly into the agent invocation."
        " see: https://strandsagents.com/latest/documentation/docs/user-guide/concepts/agents/structured-output/",
        category=DeprecationWarning,
        stacklevel=2,
    )
    await self.hooks.invoke_callbacks_async(BeforeInvocationEvent(agent=self, invocation_state={}))
    with self.tracer.tracer.start_as_current_span(
        "execute_structured_output", kind=trace_api.SpanKind.CLIENT
    ) as structured_output_span:
        try:
            if not self.messages and not prompt:
                raise ValueError("No conversation history or prompt provided")

            temp_messages: Messages = self.messages + await self._convert_prompt_to_messages(prompt)

            structured_output_span.set_attributes(
                {
                    "gen_ai.system": "strands-agents",
                    "gen_ai.agent.name": self.name,
                    "gen_ai.agent.id": self.agent_id,
                    "gen_ai.operation.name": "execute_structured_output",
                }
            )
            if self.system_prompt:
                structured_output_span.add_event(
                    "gen_ai.system.message",
                    attributes={"role": "system", "content": serialize([{"text": self.system_prompt}])},
                )
            for message in temp_messages:
                structured_output_span.add_event(
                    f"gen_ai.{message['role']}.message",
                    attributes={"role": message["role"], "content": serialize(message["content"])},
                )
            events = self.model.structured_output(output_model, temp_messages, system_prompt=self.system_prompt)
            async for event in events:
                if isinstance(event, TypedEvent):
                    event.prepare(invocation_state={})
                    if event.is_callback_event:
                        self.callback_handler(**event.as_dict())

            structured_output_span.add_event(
                "gen_ai.choice", attributes={"message": serialize(event["output"].model_dump())}
            )
            return event["output"]

        finally:
            await self.hooks.invoke_callbacks_async(AfterInvocationEvent(agent=self, invocation_state={}))
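The output_model argument is an ordinary Pydantic BaseModel subclass. A minimal sketch (PersonInfo and its fields are illustrative, and the commented calls assume a constructed agent):

```python
from pydantic import BaseModel


# Illustrative schema; the class and field names are hypothetical.
class PersonInfo(BaseModel):
    name: str
    age: int


# Deprecated path:
#     agent.structured_output(PersonInfo, "John is 30 years old")
# Preferred path, per the deprecation warning above:
#     agent("John is 30 years old", structured_output_model=PersonInfo)
person = PersonInfo(name="John", age=30)
print(person.age)  # 30
```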

AgentBase

Bases: Protocol

Protocol defining the interface for all agent types in Strands.

This protocol defines the minimal contract that all agent implementations must satisfy.

Source code in strands/agent/base.py
@runtime_checkable
class AgentBase(Protocol):
    """Protocol defining the interface for all agent types in Strands.

    This protocol defines the minimal contract that all agent implementations
    must satisfy.
    """

    async def invoke_async(
        self,
        prompt: AgentInput = None,
        **kwargs: Any,
    ) -> AgentResult:
        """Asynchronously invoke the agent with the given prompt.

        Args:
            prompt: Input to the agent.
            **kwargs: Additional arguments.

        Returns:
            AgentResult containing the agent's response.
        """
        ...

    def __call__(
        self,
        prompt: AgentInput = None,
        **kwargs: Any,
    ) -> AgentResult:
        """Synchronously invoke the agent with the given prompt.

        Args:
            prompt: Input to the agent.
            **kwargs: Additional arguments.

        Returns:
            AgentResult containing the agent's response.
        """
        ...

    def stream_async(
        self,
        prompt: AgentInput = None,
        **kwargs: Any,
    ) -> AsyncIterator[Any]:
        """Stream agent execution asynchronously.

        Args:
            prompt: Input to the agent.
            **kwargs: Additional arguments.

        Yields:
            Events representing the streaming execution.
        """
        ...
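Because AgentBase is a runtime_checkable Protocol, any class defining the three methods satisfies isinstance checks structurally, with no inheritance required. A minimal sketch (EchoAgent is hypothetical, and the protocol is simplified to Any-typed signatures):

```python
from typing import Any, AsyncIterator, Protocol, runtime_checkable


@runtime_checkable
class AgentBase(Protocol):  # simplified mirror of the protocol above
    async def invoke_async(self, prompt: Any = None, **kwargs: Any) -> Any: ...
    def __call__(self, prompt: Any = None, **kwargs: Any) -> Any: ...
    def stream_async(self, prompt: Any = None, **kwargs: Any) -> AsyncIterator[Any]: ...


class EchoAgent:
    """Hypothetical agent that just echoes its prompt."""

    async def invoke_async(self, prompt=None, **kwargs):
        return prompt

    def __call__(self, prompt=None, **kwargs):
        return prompt

    async def stream_async(self, prompt=None, **kwargs):
        yield {"data": prompt}


# runtime_checkable protocols only verify method presence, not signatures.
print(isinstance(EchoAgent(), AgentBase))  # True
```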

__call__(prompt=None, **kwargs)

Synchronously invoke the agent with the given prompt.

Parameters:

  • prompt (AgentInput, default None): Input to the agent.
  • **kwargs (Any, default {}): Additional arguments.

Returns:

AgentResult containing the agent's response.

Source code in strands/agent/base.py
def __call__(
    self,
    prompt: AgentInput = None,
    **kwargs: Any,
) -> AgentResult:
    """Synchronously invoke the agent with the given prompt.

    Args:
        prompt: Input to the agent.
        **kwargs: Additional arguments.

    Returns:
        AgentResult containing the agent's response.
    """
    ...

invoke_async(prompt=None, **kwargs) async

Asynchronously invoke the agent with the given prompt.

Parameters:

  • prompt (AgentInput, default None): Input to the agent.
  • **kwargs (Any, default {}): Additional arguments.

Returns:

AgentResult containing the agent's response.

Source code in strands/agent/base.py
async def invoke_async(
    self,
    prompt: AgentInput = None,
    **kwargs: Any,
) -> AgentResult:
    """Asynchronously invoke the agent with the given prompt.

    Args:
        prompt: Input to the agent.
        **kwargs: Additional arguments.

    Returns:
        AgentResult containing the agent's response.
    """
    ...

stream_async(prompt=None, **kwargs)

Stream agent execution asynchronously.

Parameters:

  • prompt (AgentInput, default None): Input to the agent.
  • **kwargs (Any, default {}): Additional arguments.

Yields:

AsyncIterator[Any]: Events representing the streaming execution.

Source code in strands/agent/base.py
def stream_async(
    self,
    prompt: AgentInput = None,
    **kwargs: Any,
) -> AsyncIterator[Any]:
    """Stream agent execution asynchronously.

    Args:
        prompt: Input to the agent.
        **kwargs: Additional arguments.

    Yields:
        Events representing the streaming execution.
    """
    ...

AgentInitializedEvent dataclass

Bases: HookEvent

Event triggered when an agent has finished initialization.

This event is fired after the agent has been fully constructed and all built-in components have been initialized. Hook providers can use this event to perform setup tasks that require a fully initialized agent.

Source code in strands/hooks/events.py
@dataclass
class AgentInitializedEvent(HookEvent):
    """Event triggered when an agent has finished initialization.

    This event is fired after the agent has been fully constructed and all
    built-in components have been initialized. Hook providers can use this
    event to perform setup tasks that require a fully initialized agent.
    """

    pass

AgentResult dataclass

Represents the last result of invoking an agent with a prompt.

Attributes:

  • stop_reason (StopReason): The reason why the agent's processing stopped.
  • message (Message): The last message generated by the agent.
  • metrics (EventLoopMetrics): Performance metrics collected during processing.
  • state (Any): Additional state information from the event loop.
  • interrupts (Sequence[Interrupt] | None): List of interrupts if raised by the user.
  • structured_output (BaseModel | None): Parsed structured output when structured_output_model was specified.

Source code in strands/agent/agent_result.py
@dataclass
class AgentResult:
    """Represents the last result of invoking an agent with a prompt.

    Attributes:
        stop_reason: The reason why the agent's processing stopped.
        message: The last message generated by the agent.
        metrics: Performance metrics collected during processing.
        state: Additional state information from the event loop.
        interrupts: List of interrupts if raised by user.
        structured_output: Parsed structured output when structured_output_model was specified.
    """

    stop_reason: StopReason
    message: Message
    metrics: EventLoopMetrics
    state: Any
    interrupts: Sequence[Interrupt] | None = None
    structured_output: BaseModel | None = None

    def __str__(self) -> str:
        """Return a string representation of the agent result.

        Priority order:
        1. Interrupts (if present) → stringified list of interrupt dicts
        2. Structured output (if present) → JSON string
        3. Text content from message → concatenated text blocks

        Returns:
            String representation based on the priority order above.
        """
        if self.interrupts:
            return str([interrupt.to_dict() for interrupt in self.interrupts])

        if self.structured_output:
            return self.structured_output.model_dump_json()

        content_array = self.message.get("content", [])
        result = ""
        for item in content_array:
            if isinstance(item, dict):
                if "text" in item:
                    result += item.get("text", "") + "\n"
                elif "citationsContent" in item:
                    citations_block = item["citationsContent"]
                    if "content" in citations_block:
                        for content in citations_block["content"]:
                            if isinstance(content, dict) and "text" in content:
                                result += content.get("text", "") + "\n"

        return result

    @classmethod
    def from_dict(cls, data: dict[str, Any]) -> "AgentResult":
        """Rehydrate an AgentResult from persisted JSON.

        Args:
            data: Dictionary containing the serialized AgentResult data
        Returns:
            AgentResult instance
        Raises:
        TypeError: If the data format is invalid.
        """
        if data.get("type") != "agent_result":
            raise TypeError(f"AgentResult.from_dict: unexpected type {data.get('type')!r}")

        message = cast(Message, data.get("message"))
        stop_reason = cast(StopReason, data.get("stop_reason"))

        return cls(message=message, stop_reason=stop_reason, metrics=EventLoopMetrics(), state={})

    def to_dict(self) -> dict[str, Any]:
        """Convert this AgentResult to JSON-serializable dictionary.

        Returns:
            Dictionary containing serialized AgentResult data
        """
        return {
            "type": "agent_result",
            "message": self.message,
            "stop_reason": self.stop_reason,
        }

__str__()

Return a string representation of the agent result.

Priority order:

1. Interrupts (if present) → stringified list of interrupt dicts
2. Structured output (if present) → JSON string
3. Text content from message → concatenated text blocks

Returns:

str: String representation based on the priority order above.

Source code in strands/agent/agent_result.py
def __str__(self) -> str:
    """Return a string representation of the agent result.

    Priority order:
    1. Interrupts (if present) → stringified list of interrupt dicts
    2. Structured output (if present) → JSON string
    3. Text content from message → concatenated text blocks

    Returns:
        String representation based on the priority order above.
    """
    if self.interrupts:
        return str([interrupt.to_dict() for interrupt in self.interrupts])

    if self.structured_output:
        return self.structured_output.model_dump_json()

    content_array = self.message.get("content", [])
    result = ""
    for item in content_array:
        if isinstance(item, dict):
            if "text" in item:
                result += item.get("text", "") + "\n"
            elif "citationsContent" in item:
                citations_block = item["citationsContent"]
                if "content" in citations_block:
                    for content in citations_block["content"]:
                        if isinstance(content, dict) and "text" in content:
                            result += content.get("text", "") + "\n"

    return result
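The text branch of the priority order concatenates every "text" block with a trailing newline. The same loop, run on a stand-in message dict rather than a real AgentResult:

```python
# Stand-in message dict mirroring the text-concatenation branch above.
message = {
    "role": "assistant",
    "content": [{"text": "Hello"}, {"text": "World"}],
}

result = ""
for item in message.get("content", []):
    if isinstance(item, dict) and "text" in item:
        result += item["text"] + "\n"

print(repr(result))  # 'Hello\nWorld\n'
```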

from_dict(data) classmethod

Rehydrate an AgentResult from persisted JSON.

Parameters:

  • data (dict[str, Any], required): Dictionary containing the serialized AgentResult data.

Returns:

AgentResult instance.

Raises:

TypeError: If the data format is invalid.

Source code in strands/agent/agent_result.py
@classmethod
def from_dict(cls, data: dict[str, Any]) -> "AgentResult":
    """Rehydrate an AgentResult from persisted JSON.

    Args:
        data: Dictionary containing the serialized AgentResult data
    Returns:
        AgentResult instance
    Raises:
        TypeError: If the data format is invalid
    """
    if data.get("type") != "agent_result":
        raise TypeError(f"AgentResult.from_dict: unexpected type {data.get('type')!r}")

    message = cast(Message, data.get("message"))
    stop_reason = cast(StopReason, data.get("stop_reason"))

    return cls(message=message, stop_reason=stop_reason, metrics=EventLoopMetrics(), state={})

to_dict()

Convert this AgentResult to JSON-serializable dictionary.

Returns:

Type Description
dict[str, Any]

Dictionary containing serialized AgentResult data

Source code in strands/agent/agent_result.py
def to_dict(self) -> dict[str, Any]:
    """Convert this AgentResult to JSON-serializable dictionary.

    Returns:
        Dictionary containing serialized AgentResult data
    """
    return {
        "type": "agent_result",
        "message": self.message,
        "stop_reason": self.stop_reason,
    }
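Together, `to_dict` and `from_dict` form a simple round trip keyed on the `"type"` tag. The validation step can be sketched standalone (a minimal reimplementation on plain dicts, not the actual class):

```python
def from_dict(data: dict) -> dict:
    """Mirror AgentResult.from_dict's type-tag check on a plain dict."""
    if data.get("type") != "agent_result":
        raise TypeError(f"from_dict: unexpected type {data.get('type')!r}")
    # Metrics and state are not serialized, so rehydration restores
    # only the message and stop reason (as in the real implementation).
    return {"message": data.get("message"), "stop_reason": data.get("stop_reason")}

serialized = {
    "type": "agent_result",
    "message": {"role": "assistant", "content": [{"text": "done"}]},
    "stop_reason": "end_turn",
}
restored = from_dict(serialized)

# Data without the expected type tag is rejected:
try:
    from_dict({"type": "other"})
except TypeError:
    pass  # raised as expected
```

Because `EventLoopMetrics` and agent state are reconstructed fresh on rehydration, persisted results carry only the conversational payload.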

AgentResultEvent

Bases: TypedEvent

Source code in strands/types/_events.py
412
413
414
class AgentResultEvent(TypedEvent):
    def __init__(self, result: "AgentResult"):
        super().__init__({"result": result})

BedrockModel

Bases: Model

AWS Bedrock model provider implementation.

The implementation handles Bedrock-specific features such as:

  • Tool configuration for function calling
  • Guardrails integration
  • Caching points for system prompts and tools
  • Streaming responses
  • Context window overflow detection
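As an illustration, a `BedrockConfig`-style keyword dict exercising several of these features might look like the following. All values, including the guardrail identifiers, are hypothetical; the dict would be passed as keyword arguments to the constructor:

```python
# Hedged sketch: keys mirror BedrockModel.BedrockConfig; values are illustrative.
model_config = {
    "model_id": "us.anthropic.claude-sonnet-4-20250514-v1:0",
    "guardrail_id": "my-guardrail-id",   # hypothetical guardrail ID
    "guardrail_version": "1",
    "cache_tools": "default",            # cache point for tool definitions
    "streaming": True,
    "max_tokens": 2048,
    "temperature": 0.3,
}
# model = BedrockModel(**model_config)   # requires boto3 and AWS credentials
```

Unknown keys are rejected by `update_config` via `validate_config_keys`, so typos in configuration names surface at construction time rather than at request time.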
Source code in strands/models/bedrock.py
class BedrockModel(Model):
    """AWS Bedrock model provider implementation.

    The implementation handles Bedrock-specific features such as:

    - Tool configuration for function calling
    - Guardrails integration
    - Caching points for system prompts and tools
    - Streaming responses
    - Context window overflow detection
    """

    class BedrockConfig(TypedDict, total=False):
        """Configuration options for Bedrock models.

        Attributes:
            additional_args: Any additional arguments to include in the request
            additional_request_fields: Additional fields to include in the Bedrock request
            additional_response_field_paths: Additional response field paths to extract
            cache_prompt: Cache point type for the system prompt (deprecated, use cache_config)
            cache_config: Configuration for prompt caching. Use CacheConfig(strategy="auto") for automatic caching.
            cache_tools: Cache point type for tools
            guardrail_id: ID of the guardrail to apply
            guardrail_trace: Guardrail trace mode. Defaults to enabled.
            guardrail_version: Version of the guardrail to apply
            guardrail_stream_processing_mode: The guardrail processing mode
            guardrail_redact_input: Flag to redact input if a guardrail is triggered. Defaults to True.
            guardrail_redact_input_message: If a Bedrock Input guardrail triggers, replace the input with this message.
            guardrail_redact_output: Flag to redact output if guardrail is triggered. Defaults to False.
            guardrail_redact_output_message: If a Bedrock Output guardrail triggers, replace output with this message.
            guardrail_latest_message: Flag to send only the latest user message to guardrails.
                Defaults to False.
            max_tokens: Maximum number of tokens to generate in the response
            model_id: The Bedrock model ID (e.g., "us.anthropic.claude-sonnet-4-20250514-v1:0")
            include_tool_result_status: Flag to include status field in tool results.
                True includes status, False removes status, "auto" determines based on model_id. Defaults to "auto".
            stop_sequences: List of sequences that will stop generation when encountered
            streaming: Flag to enable/disable streaming. Defaults to True.
            temperature: Controls randomness in generation (higher = more random)
            top_p: Controls diversity via nucleus sampling (alternative to temperature)
        """

        additional_args: dict[str, Any] | None
        additional_request_fields: dict[str, Any] | None
        additional_response_field_paths: list[str] | None
        cache_prompt: str | None
        cache_config: CacheConfig | None
        cache_tools: str | None
        guardrail_id: str | None
        guardrail_trace: Literal["enabled", "disabled", "enabled_full"] | None
        guardrail_stream_processing_mode: Literal["sync", "async"] | None
        guardrail_version: str | None
        guardrail_redact_input: bool | None
        guardrail_redact_input_message: str | None
        guardrail_redact_output: bool | None
        guardrail_redact_output_message: str | None
        guardrail_latest_message: bool | None
        max_tokens: int | None
        model_id: str
        include_tool_result_status: Literal["auto"] | bool | None
        stop_sequences: list[str] | None
        streaming: bool | None
        temperature: float | None
        top_p: float | None

    def __init__(
        self,
        *,
        boto_session: boto3.Session | None = None,
        boto_client_config: BotocoreConfig | None = None,
        region_name: str | None = None,
        endpoint_url: str | None = None,
        **model_config: Unpack[BedrockConfig],
    ):
        """Initialize provider instance.

        Args:
            boto_session: Boto Session to use when calling the Bedrock Model.
            boto_client_config: Configuration to use when creating the Bedrock-Runtime Boto Client.
            region_name: AWS region to use for the Bedrock service.
                Defaults to the AWS_REGION environment variable if set, or "us-west-2" if not set.
            endpoint_url: Custom endpoint URL for VPC endpoints (PrivateLink)
            **model_config: Configuration options for the Bedrock model.
        """
        if region_name and boto_session:
            raise ValueError("Cannot specify both `region_name` and `boto_session`.")

        session = boto_session or boto3.Session()
        resolved_region = region_name or session.region_name or os.environ.get("AWS_REGION") or DEFAULT_BEDROCK_REGION
        self.config = BedrockModel.BedrockConfig(
            model_id=BedrockModel._get_default_model_with_warning(resolved_region, model_config),
            include_tool_result_status="auto",
        )
        self.update_config(**model_config)

        logger.debug("config=<%s> | initializing", self.config)

        # Add strands-agents to the request user agent
        if boto_client_config:
            existing_user_agent = getattr(boto_client_config, "user_agent_extra", None)

            # Append 'strands-agents' to existing user_agent_extra or set it if not present
            if existing_user_agent:
                new_user_agent = f"{existing_user_agent} strands-agents"
            else:
                new_user_agent = "strands-agents"

            client_config = boto_client_config.merge(BotocoreConfig(user_agent_extra=new_user_agent))
        else:
            client_config = BotocoreConfig(user_agent_extra="strands-agents", read_timeout=DEFAULT_READ_TIMEOUT)

        self.client = session.client(
            service_name="bedrock-runtime",
            config=client_config,
            endpoint_url=endpoint_url,
            region_name=resolved_region,
        )

        logger.debug("region=<%s> | bedrock client created", self.client.meta.region_name)

    @property
    def _supports_caching(self) -> bool:
        """Whether this model supports prompt caching.

        Returns True for Claude models on Bedrock.
        """
        model_id = self.config.get("model_id", "").lower()
        return "claude" in model_id or "anthropic" in model_id

    @override
    def update_config(self, **model_config: Unpack[BedrockConfig]) -> None:  # type: ignore
        """Update the Bedrock Model configuration with the provided arguments.

        Args:
            **model_config: Configuration overrides.
        """
        validate_config_keys(model_config, self.BedrockConfig)
        self.config.update(model_config)

    @override
    def get_config(self) -> BedrockConfig:
        """Get the current Bedrock Model configuration.

        Returns:
            The Bedrock model configuration.
        """
        return self.config

    def _format_request(
        self,
        messages: Messages,
        tool_specs: list[ToolSpec] | None = None,
        system_prompt_content: list[SystemContentBlock] | None = None,
        tool_choice: ToolChoice | None = None,
    ) -> dict[str, Any]:
        """Format a Bedrock converse stream request.

        Args:
            messages: List of message objects to be processed by the model.
            tool_specs: List of tool specifications to make available to the model.
            tool_choice: Selection strategy for tool invocation.
            system_prompt_content: System prompt content blocks to provide context to the model.

        Returns:
            A Bedrock converse stream request.
        """
        if not tool_specs:
            has_tool_content = any(
                any("toolUse" in block or "toolResult" in block for block in msg.get("content", [])) for msg in messages
            )
            if has_tool_content:
                tool_specs = [noop_tool.tool_spec]

        # Use system_prompt_content directly (copy for mutability)
        system_blocks: list[SystemContentBlock] = system_prompt_content.copy() if system_prompt_content else []
        # Add cache point if configured (backwards compatibility)
        if cache_prompt := self.config.get("cache_prompt"):
            warnings.warn(
                "cache_prompt is deprecated. Use SystemContentBlock with cachePoint instead.", UserWarning, stacklevel=3
            )
            system_blocks.append({"cachePoint": {"type": cache_prompt}})

        return {
            "modelId": self.config["model_id"],
            "messages": self._format_bedrock_messages(messages),
            "system": system_blocks,
            **(
                {
                    "toolConfig": {
                        "tools": [
                            *[
                                {
                                    "toolSpec": {
                                        "name": tool_spec["name"],
                                        "description": tool_spec["description"],
                                        "inputSchema": tool_spec["inputSchema"],
                                    }
                                }
                                for tool_spec in tool_specs
                            ],
                            *(
                                [{"cachePoint": {"type": self.config["cache_tools"]}}]
                                if self.config.get("cache_tools")
                                else []
                            ),
                        ],
                        **({"toolChoice": tool_choice if tool_choice else {"auto": {}}}),
                    }
                }
                if tool_specs
                else {}
            ),
            **(self._get_additional_request_fields(tool_choice)),
            **(
                {"additionalModelResponseFieldPaths": self.config["additional_response_field_paths"]}
                if self.config.get("additional_response_field_paths")
                else {}
            ),
            **(
                {
                    "guardrailConfig": {
                        "guardrailIdentifier": self.config["guardrail_id"],
                        "guardrailVersion": self.config["guardrail_version"],
                        "trace": self.config.get("guardrail_trace", "enabled"),
                        **(
                            {"streamProcessingMode": self.config.get("guardrail_stream_processing_mode")}
                            if self.config.get("guardrail_stream_processing_mode")
                            else {}
                        ),
                    }
                }
                if self.config.get("guardrail_id") and self.config.get("guardrail_version")
                else {}
            ),
            "inferenceConfig": {
                key: value
                for key, value in [
                    ("maxTokens", self.config.get("max_tokens")),
                    ("temperature", self.config.get("temperature")),
                    ("topP", self.config.get("top_p")),
                    ("stopSequences", self.config.get("stop_sequences")),
                ]
                if value is not None
            },
            **(
                self.config["additional_args"]
                if "additional_args" in self.config and self.config["additional_args"] is not None
                else {}
            ),
        }

    def _get_additional_request_fields(self, tool_choice: ToolChoice | None) -> dict[str, Any]:
        """Get additional request fields, removing thinking if tool_choice forces tool use.

        Bedrock's API does not allow thinking mode when tool_choice forces tool use.
        When forcing a tool (e.g., for structured_output retry), we temporarily disable thinking.

        Args:
            tool_choice: The tool choice configuration.

        Returns:
            A dict containing additionalModelRequestFields if configured, or empty dict.
        """
        additional_fields = self.config.get("additional_request_fields")
        if not additional_fields:
            return {}

        # Check if tool_choice is forcing tool use ("any" or specific "tool")
        is_forcing_tool = tool_choice is not None and ("any" in tool_choice or "tool" in tool_choice)

        if is_forcing_tool and "thinking" in additional_fields:
            # Create a copy without the thinking key
            fields_without_thinking = {k: v for k, v in additional_fields.items() if k != "thinking"}
            if fields_without_thinking:
                return {"additionalModelRequestFields": fields_without_thinking}
            return {}

        return {"additionalModelRequestFields": additional_fields}

    def _inject_cache_point(self, messages: list[dict[str, Any]]) -> None:
        """Inject a cache point at the end of the last assistant message.

        Args:
            messages: List of messages to inject cache point into (modified in place).
        """
        if not messages:
            return

        last_assistant_idx: int | None = None
        for msg_idx, msg in enumerate(messages):
            content = msg.get("content", [])
            for block_idx, block in reversed(list(enumerate(content))):
                if "cachePoint" in block:
                    del content[block_idx]
                    logger.warning(
                        "msg_idx=<%s>, block_idx=<%s> | stripped existing cache point (auto mode manages cache points)",
                        msg_idx,
                        block_idx,
                    )
            if msg.get("role") == "assistant":
                last_assistant_idx = msg_idx

        if last_assistant_idx is not None and messages[last_assistant_idx].get("content"):
            messages[last_assistant_idx]["content"].append({"cachePoint": {"type": "default"}})
            logger.debug("msg_idx=<%s> | added cache point to last assistant message", last_assistant_idx)

    def _format_bedrock_messages(self, messages: Messages) -> list[dict[str, Any]]:
        """Format messages for Bedrock API compatibility.

        This function ensures messages conform to Bedrock's expected format by:
        - Filtering out SDK_UNKNOWN_MEMBER content blocks
        - Eagerly filtering content blocks to only include Bedrock-supported fields
        - Ensuring all message content blocks are properly formatted for the Bedrock API
        - Optionally wrapping the last user message in guardrailConverseContent blocks
        - Injecting cache points when cache_config is set with strategy="auto"

        Args:
            messages: List of messages to format

        Returns:
            Messages formatted for Bedrock API compatibility

        Note:
            Unlike other APIs that ignore unknown fields, Bedrock only accepts a strict
            subset of fields for each content block type and throws validation exceptions
            when presented with unexpected fields. Therefore, we must eagerly filter all
            content blocks to remove any additional fields before sending to Bedrock.
            https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ContentBlock.html
        """
        cleaned_messages: list[dict[str, Any]] = []

        filtered_unknown_members = False
        dropped_deepseek_reasoning_content = False

        guardrail_latest_message = self.config.get("guardrail_latest_message", False)

        for idx, message in enumerate(messages):
            cleaned_content: list[dict[str, Any]] = []

            for content_block in message["content"]:
                # Filter out SDK_UNKNOWN_MEMBER content blocks
                if "SDK_UNKNOWN_MEMBER" in content_block:
                    filtered_unknown_members = True
                    continue

                # DeepSeek models have issues with reasoningContent
                # TODO: Replace with systematic model configuration registry (https://github.com/strands-agents/sdk-python/issues/780)
                if "deepseek" in self.config["model_id"].lower() and "reasoningContent" in content_block:
                    dropped_deepseek_reasoning_content = True
                    continue

                # Format content blocks for Bedrock API compatibility
                formatted_content = self._format_request_message_content(content_block)
                if formatted_content is None:
                    continue

                # Wrap text or image content in guardrailContent if this is the last user message
                if (
                    guardrail_latest_message
                    and idx == len(messages) - 1
                    and message["role"] == "user"
                    and ("text" in formatted_content or "image" in formatted_content)
                ):
                    if "text" in formatted_content:
                        formatted_content = {"guardContent": {"text": {"text": formatted_content["text"]}}}
                    elif "image" in formatted_content:
                        formatted_content = {"guardContent": {"image": formatted_content["image"]}}

                cleaned_content.append(formatted_content)

            # Create new message with cleaned content (skip if empty)
            if cleaned_content:
                cleaned_messages.append({"content": cleaned_content, "role": message["role"]})

        if filtered_unknown_members:
            logger.warning(
                "Filtered out SDK_UNKNOWN_MEMBER content blocks from messages, consider upgrading boto3 version"
            )
        if dropped_deepseek_reasoning_content:
            logger.debug(
                "Filtered DeepSeek reasoningContent content blocks from messages - https://api-docs.deepseek.com/guides/reasoning_model#multi-round-conversation"
            )

        # Inject cache point into cleaned_messages (not original messages) if cache_config is set
        cache_config = self.config.get("cache_config")
        if cache_config and cache_config.strategy == "auto":
            if self._supports_caching:
                self._inject_cache_point(cleaned_messages)
            else:
                logger.warning(
                    "model_id=<%s> | cache_config is enabled but this model does not support caching",
                    self.config.get("model_id"),
                )

        return cleaned_messages

    def _should_include_tool_result_status(self) -> bool:
        """Determine whether to include tool result status based on current config."""
        include_status = self.config.get("include_tool_result_status", "auto")

        if include_status is True:
            return True
        elif include_status is False:
            return False
        else:  # "auto"
            return any(model in self.config["model_id"] for model in _MODELS_INCLUDE_STATUS)

    def _handle_location(self, location: SourceLocation) -> dict[str, Any] | None:
        """Convert location content block to Bedrock format if its an S3Location."""
        if location["type"] == "s3":
            s3_location = cast(S3Location, location)
            formatted_document_s3: dict[str, Any] = {"uri": s3_location["uri"]}
            if "bucketOwner" in s3_location:
                formatted_document_s3["bucketOwner"] = s3_location["bucketOwner"]
            return {"s3Location": formatted_document_s3}
        else:
            logger.warning("Non s3 location sources are not supported by Bedrock | skipping content block")
            return None

    def _format_request_message_content(self, content: ContentBlock) -> dict[str, Any] | None:
        """Format a Bedrock content block.

        Bedrock strictly validates content blocks and throws exceptions for unknown fields.
        This function extracts only the fields that Bedrock supports for each content type.

        Args:
            content: Content block to format.

        Returns:
            Bedrock formatted content block.

        Raises:
            TypeError: If the content block type is not supported by Bedrock.
        """
        # https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_CachePointBlock.html
        if "cachePoint" in content:
            return {"cachePoint": {"type": content["cachePoint"]["type"]}}

        # https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_DocumentBlock.html
        if "document" in content:
            document = content["document"]
            result: dict[str, Any] = {}

            # Handle required fields (all optional due to total=False)
            if "name" in document:
                result["name"] = document["name"]
            if "format" in document:
                result["format"] = document["format"]

            # Handle source - supports bytes or location
            if "source" in document:
                source = document["source"]
                formatted_document_source: dict[str, Any] | None
                if "location" in source:
                    formatted_document_source = self._handle_location(source["location"])
                    if formatted_document_source is None:
                        return None
                elif "bytes" in source:
                    formatted_document_source = {"bytes": source["bytes"]}
                result["source"] = formatted_document_source

            # Handle optional fields
            if "citations" in document and document["citations"] is not None:
                result["citations"] = {"enabled": document["citations"]["enabled"]}
            if "context" in document:
                result["context"] = document["context"]

            return {"document": result}

        # https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_GuardrailConverseContentBlock.html
        if "guardContent" in content:
            guard = content["guardContent"]
            guard_text = guard["text"]
            result = {"text": {"text": guard_text["text"], "qualifiers": guard_text["qualifiers"]}}
            return {"guardContent": result}

        # https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ImageBlock.html
        if "image" in content:
            image = content["image"]
            source = image["source"]
            formatted_image_source: dict[str, Any] | None
            if "location" in source:
                formatted_image_source = self._handle_location(source["location"])
                if formatted_image_source is None:
                    return None
            elif "bytes" in source:
                formatted_image_source = {"bytes": source["bytes"]}
            result = {"format": image["format"], "source": formatted_image_source}
            return {"image": result}

        # https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ReasoningContentBlock.html
        if "reasoningContent" in content:
            reasoning = content["reasoningContent"]
            result = {}

            if "reasoningText" in reasoning:
                reasoning_text = reasoning["reasoningText"]
                result["reasoningText"] = {}
                if "text" in reasoning_text:
                    result["reasoningText"]["text"] = reasoning_text["text"]
                # Only include signature if truthy (avoid empty strings)
                if reasoning_text.get("signature"):
                    result["reasoningText"]["signature"] = reasoning_text["signature"]

            if "redactedContent" in reasoning:
                result["redactedContent"] = reasoning["redactedContent"]

            return {"reasoningContent": result}

        # Pass through text and other simple content types
        if "text" in content:
            return {"text": content["text"]}

        # https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ToolResultBlock.html
        if "toolResult" in content:
            tool_result = content["toolResult"]
            formatted_content: list[dict[str, Any]] = []
            for tool_result_content in tool_result["content"]:
                if "json" in tool_result_content:
                    # Handle json field since not in ContentBlock but valid in ToolResultContent
                    formatted_content.append({"json": tool_result_content["json"]})
                else:
                    formatted_message_content = self._format_request_message_content(
                        cast(ContentBlock, tool_result_content)
                    )
                    if formatted_message_content is None:
                        continue
                    formatted_content.append(formatted_message_content)

            result = {
                "content": formatted_content,
                "toolUseId": tool_result["toolUseId"],
            }
            if "status" in tool_result and self._should_include_tool_result_status():
                result["status"] = tool_result["status"]
            return {"toolResult": result}

        # https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ToolUseBlock.html
        if "toolUse" in content:
            tool_use = content["toolUse"]
            return {
                "toolUse": {
                    "input": tool_use["input"],
                    "name": tool_use["name"],
                    "toolUseId": tool_use["toolUseId"],
                }
            }

        # https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_VideoBlock.html
        if "video" in content:
            video = content["video"]
            source = video["source"]
            formatted_video_source: dict[str, Any] | None
            if "location" in source:
                formatted_video_source = self._handle_location(source["location"])
                if formatted_video_source is None:
                    return None
            elif "bytes" in source:
                formatted_video_source = {"bytes": source["bytes"]}
            result = {"format": video["format"], "source": formatted_video_source}
            return {"video": result}

        # https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_CitationsContentBlock.html
        if "citationsContent" in content:
            citations = content["citationsContent"]
            result = {}

            if "citations" in citations:
                result["citations"] = []
                for citation in citations["citations"]:
                    filtered_citation: dict[str, Any] = {}
                    if "location" in citation:
                        filtered_citation["location"] = citation["location"]
                    if "sourceContent" in citation:
                        filtered_source_content: list[dict[str, Any]] = []
                        for source_content in citation["sourceContent"]:
                            if "text" in source_content:
                                filtered_source_content.append({"text": source_content["text"]})
                        if filtered_source_content:
                            filtered_citation["sourceContent"] = filtered_source_content
                    if "title" in citation:
                        filtered_citation["title"] = citation["title"]
                    result["citations"].append(filtered_citation)

            if "content" in citations:
                filtered_content: list[dict[str, Any]] = []
                for generated_content in citations["content"]:
                    if "text" in generated_content:
                        filtered_content.append({"text": generated_content["text"]})
                if filtered_content:
                    result["content"] = filtered_content

            return {"citationsContent": result}

        raise TypeError(f"content_type=<{next(iter(content))}> | unsupported type")

    def _has_blocked_guardrail(self, guardrail_data: dict[str, Any]) -> bool:
        """Check if guardrail data contains any blocked policies.

        Args:
            guardrail_data: Guardrail data from trace information.

        Returns:
            True if any blocked guardrail is detected, False otherwise.
        """
        input_assessment = guardrail_data.get("inputAssessment", {})
        output_assessments = guardrail_data.get("outputAssessments", {})

        # Check input assessments
        if any(self._find_detected_and_blocked_policy(assessment) for assessment in input_assessment.values()):
            return True

        # Check output assessments
        if any(self._find_detected_and_blocked_policy(assessment) for assessment in output_assessments.values()):
            return True

        return False

    def _generate_redaction_events(self) -> list[StreamEvent]:
        """Generate redaction events based on configuration.

        Returns:
            List of redaction events to yield.
        """
        events: list[StreamEvent] = []

        if self.config.get("guardrail_redact_input", True):
            logger.debug("Redacting user input due to guardrail.")
            events.append(
                {
                    "redactContent": {
                        "redactUserContentMessage": self.config.get(
                            "guardrail_redact_input_message", "[User input redacted.]"
                        )
                    }
                }
            )

        if self.config.get("guardrail_redact_output", False):
            logger.debug("Redacting assistant output due to guardrail.")
            events.append(
                {
                    "redactContent": {
                        "redactAssistantContentMessage": self.config.get(
                            "guardrail_redact_output_message",
                            "[Assistant output redacted.]",
                        )
                    }
                }
            )

        return events

    @override
    async def stream(
        self,
        messages: Messages,
        tool_specs: list[ToolSpec] | None = None,
        system_prompt: str | None = None,
        *,
        tool_choice: ToolChoice | None = None,
        system_prompt_content: list[SystemContentBlock] | None = None,
        **kwargs: Any,
    ) -> AsyncGenerator[StreamEvent, None]:
        """Stream conversation with the Bedrock model.

        This method calls either the Bedrock converse_stream API or the converse API
        based on the streaming parameter in the configuration.

        Args:
            messages: List of message objects to be processed by the model.
            tool_specs: List of tool specifications to make available to the model.
            system_prompt: System prompt to provide context to the model.
            tool_choice: Selection strategy for tool invocation.
            system_prompt_content: System prompt content blocks to provide context to the model.
            **kwargs: Additional keyword arguments for future extensibility.

        Yields:
            Model events.

        Raises:
            ContextWindowOverflowException: If the input exceeds the model's context window.
            ModelThrottledException: If the model service is throttling requests.
        """

        def callback(event: StreamEvent | None = None) -> None:
            loop.call_soon_threadsafe(queue.put_nowait, event)
            if event is None:
                return

        loop = asyncio.get_event_loop()
        queue: asyncio.Queue[StreamEvent | None] = asyncio.Queue()

        # Handle backward compatibility: if system_prompt is provided but system_prompt_content is None
        if system_prompt and system_prompt_content is None:
            system_prompt_content = [{"text": system_prompt}]

        thread = asyncio.to_thread(self._stream, callback, messages, tool_specs, system_prompt_content, tool_choice)
        task = asyncio.create_task(thread)

        while True:
            event = await queue.get()
            if event is None:
                break

            yield event

        await task

    def _stream(
        self,
        callback: Callable[..., None],
        messages: Messages,
        tool_specs: list[ToolSpec] | None = None,
        system_prompt_content: list[SystemContentBlock] | None = None,
        tool_choice: ToolChoice | None = None,
    ) -> None:
        """Stream conversation with the Bedrock model.

        This method operates in a separate thread to avoid blocking the async event loop with the call to
        Bedrock's converse_stream.

        Args:
            callback: Function to send events to the main thread.
            messages: List of message objects to be processed by the model.
            tool_specs: List of tool specifications to make available to the model.
            system_prompt_content: System prompt content blocks to provide context to the model.
            tool_choice: Selection strategy for tool invocation.

        Raises:
            ContextWindowOverflowException: If the input exceeds the model's context window.
            ModelThrottledException: If the model service is throttling requests.
        """
        try:
            logger.debug("formatting request")
            request = self._format_request(messages, tool_specs, system_prompt_content, tool_choice)
            logger.debug("request=<%s>", request)

            logger.debug("invoking model")
            streaming = self.config.get("streaming", True)

            logger.debug("got response from model")
            if streaming:
                response = self.client.converse_stream(**request)
                # Track tool use events to fix stopReason for streaming responses
                has_tool_use = False
                for chunk in response["stream"]:
                    if (
                        "metadata" in chunk
                        and "trace" in chunk["metadata"]
                        and "guardrail" in chunk["metadata"]["trace"]
                    ):
                        guardrail_data = chunk["metadata"]["trace"]["guardrail"]
                        if self._has_blocked_guardrail(guardrail_data):
                            for event in self._generate_redaction_events():
                                callback(event)

                    # Track if we see tool use events
                    if "contentBlockStart" in chunk and chunk["contentBlockStart"].get("start", {}).get("toolUse"):
                        has_tool_use = True

                    # Fix stopReason for streaming responses that contain tool use
                    if (
                        has_tool_use
                        and "messageStop" in chunk
                        and (message_stop := chunk["messageStop"]).get("stopReason") == "end_turn"
                    ):
                        # Create corrected chunk with tool_use stopReason
                        modified_chunk = chunk.copy()
                        modified_chunk["messageStop"] = message_stop.copy()
                        modified_chunk["messageStop"]["stopReason"] = "tool_use"
                        logger.warning("Override stop reason from end_turn to tool_use")
                        callback(modified_chunk)
                    else:
                        callback(chunk)

            else:
                response = self.client.converse(**request)
                for event in self._convert_non_streaming_to_streaming(response):
                    callback(event)

                if (
                    "trace" in response
                    and "guardrail" in response["trace"]
                    and self._has_blocked_guardrail(response["trace"]["guardrail"])
                ):
                    for event in self._generate_redaction_events():
                        callback(event)

        except ClientError as e:
            error_message = str(e)

            if (
                e.response["Error"]["Code"] == "ThrottlingException"
                or e.response["Error"]["Code"] == "throttlingException"
            ):
                raise ModelThrottledException(error_message) from e

            if any(overflow_message in error_message for overflow_message in BEDROCK_CONTEXT_WINDOW_OVERFLOW_MESSAGES):
                logger.warning("bedrock threw context window overflow error")
                raise ContextWindowOverflowException(e) from e

            region = self.client.meta.region_name

            # Aid in debugging by adding more information
            add_exception_note(e, f"└ Bedrock region: {region}")
            add_exception_note(e, f"└ Model id: {self.config.get('model_id')}")

            if (
                e.response["Error"]["Code"] == "AccessDeniedException"
                and "You don't have access to the model" in error_message
            ):
                add_exception_note(
                    e,
                    "└ For more information see "
                    "https://strandsagents.com/latest/user-guide/concepts/model-providers/amazon-bedrock/#model-access-issue",
                )

            if (
                e.response["Error"]["Code"] == "ValidationException"
                and "with on-demand throughput isn’t supported" in error_message
            ):
                add_exception_note(
                    e,
                    "└ For more information see "
                    "https://strandsagents.com/latest/user-guide/concepts/model-providers/amazon-bedrock/#on-demand-throughput-isnt-supported",
                )

            raise e

        finally:
            callback()
            logger.debug("finished streaming response from model")

    def _convert_non_streaming_to_streaming(self, response: dict[str, Any]) -> Iterable[StreamEvent]:
        """Convert a non-streaming response to the streaming format.

        Args:
            response: The non-streaming response from the Bedrock model.

        Returns:
            An iterable of response events in the streaming format.
        """
        # Yield messageStart event
        yield {"messageStart": {"role": response["output"]["message"]["role"]}}

        # Process content blocks
        for content in cast(list[ContentBlock], response["output"]["message"]["content"]):
            # Yield contentBlockStart event if needed
            if "toolUse" in content:
                yield {
                    "contentBlockStart": {
                        "start": {
                            "toolUse": {
                                "toolUseId": content["toolUse"]["toolUseId"],
                                "name": content["toolUse"]["name"],
                            }
                        },
                    }
                }

                # For tool use, we need to yield the input as a delta
                input_value = json.dumps(content["toolUse"]["input"])

                yield {"contentBlockDelta": {"delta": {"toolUse": {"input": input_value}}}}
            elif "text" in content:
                # Then yield the text as a delta
                yield {
                    "contentBlockDelta": {
                        "delta": {"text": content["text"]},
                    }
                }
            elif "reasoningContent" in content:
                # Then yield the reasoning content as a delta
                yield {
                    "contentBlockDelta": {
                        "delta": {"reasoningContent": {"text": content["reasoningContent"]["reasoningText"]["text"]}}
                    }
                }

                if "signature" in content["reasoningContent"]["reasoningText"]:
                    yield {
                        "contentBlockDelta": {
                            "delta": {
                                "reasoningContent": {
                                    "signature": content["reasoningContent"]["reasoningText"]["signature"]
                                }
                            }
                        }
                    }
            elif "citationsContent" in content:
                # For non-streaming citations, emit text and metadata deltas in sequence
                # to match streaming behavior where they flow naturally
                if "content" in content["citationsContent"]:
                    text_content = "".join([content["text"] for content in content["citationsContent"]["content"]])
                    yield {
                        "contentBlockDelta": {"delta": {"text": text_content}},
                    }

                for citation in content["citationsContent"]["citations"]:
                    # Then emit citation metadata (for structure)

                    citation_metadata: CitationsDelta = {
                        "title": citation["title"],
                        "location": citation["location"],
                        "sourceContent": citation["sourceContent"],
                    }
                    yield {"contentBlockDelta": {"delta": {"citation": citation_metadata}}}

            # Yield contentBlockStop event
            yield {"contentBlockStop": {}}

        # Yield messageStop event
        # Fix stopReason for models that return end_turn when they should return tool_use on non-streaming side
        current_stop_reason = response["stopReason"]
        if current_stop_reason == "end_turn":
            message_content = response["output"]["message"]["content"]
            if any("toolUse" in content for content in message_content):
                current_stop_reason = "tool_use"
                logger.warning("Override stop reason from end_turn to tool_use")

        yield {
            "messageStop": {
                "stopReason": current_stop_reason,
                "additionalModelResponseFields": response.get("additionalModelResponseFields"),
            }
        }

        # Yield metadata event
        if "usage" in response or "metrics" in response or "trace" in response:
            metadata: StreamEvent = {"metadata": {}}
            if "usage" in response:
                metadata["metadata"]["usage"] = response["usage"]
            if "metrics" in response:
                metadata["metadata"]["metrics"] = response["metrics"]
            if "trace" in response:
                metadata["metadata"]["trace"] = response["trace"]
            yield metadata

    def _find_detected_and_blocked_policy(self, input: Any) -> bool:
        """Recursively checks if the assessment contains a detected and blocked guardrail.

        Args:
            input: The assessment to check.

        Returns:
            True if the input contains a detected and blocked guardrail, False otherwise.

        """
        # Check if input is a dictionary
        if isinstance(input, dict):
            # Check if current dictionary has action: BLOCKED and detected: true
            if input.get("action") == "BLOCKED" and input.get("detected") and isinstance(input.get("detected"), bool):
                return True

            # Otherwise, recursively check all values in the dictionary
            return self._find_detected_and_blocked_policy(input.values())

        elif isinstance(input, (list, ValuesView)):
            # Handle case where input is a list or dict_values
            return any(self._find_detected_and_blocked_policy(item) for item in input)
        # Otherwise return False
        return False

    @override
    async def structured_output(
        self,
        output_model: type[T],
        prompt: Messages,
        system_prompt: str | None = None,
        **kwargs: Any,
    ) -> AsyncGenerator[dict[str, T | Any], None]:
        """Get structured output from the model.

        Args:
            output_model: The output model to use for the agent.
            prompt: The prompt messages to use for the agent.
            system_prompt: System prompt to provide context to the model.
            **kwargs: Additional keyword arguments for future extensibility.

        Yields:
            Model events with the last being the structured output.
        """
        tool_spec = convert_pydantic_to_tool_spec(output_model)

        response = self.stream(
            messages=prompt,
            tool_specs=[tool_spec],
            system_prompt=system_prompt,
            tool_choice=cast(ToolChoice, {"any": {}}),
            **kwargs,
        )
        async for event in streaming.process_stream(response):
            yield event

        stop_reason, messages, _, _ = event["stop"]

        if stop_reason != "tool_use":
            raise ValueError(f'Model returned stop_reason: {stop_reason} instead of "tool_use".')

        content = messages["content"]
        output_response: dict[str, Any] | None = None
        for block in content:
            # if the tool use name doesn't match the tool spec name, skip, and if the block is not a tool use, skip.
            # if the tool use name never matches, raise an error.
            if block.get("toolUse") and block["toolUse"]["name"] == tool_spec["name"]:
                output_response = block["toolUse"]["input"]
            else:
                continue

        if output_response is None:
            raise ValueError("No valid tool use or tool use input was found in the Bedrock response.")

        yield {"output": output_model(**output_response)}

    @staticmethod
    def _get_default_model_with_warning(region_name: str, model_config: BedrockConfig | None = None) -> str:
        """Get the default Bedrock modelId based on region.

        If the region is not **known** to support inference then we show a helpful warning
        that complements the exception that Bedrock will throw.
        If the customer provided a model_id in their config or they overrode the `DEFAULT_BEDROCK_MODEL_ID`
        then we should not process further.

        Args:
            region_name (str): region for bedrock model
            model_config (Optional[dict[str, Any]]): Model Config that caller passes in on init
        """
        if DEFAULT_BEDROCK_MODEL_ID != _DEFAULT_BEDROCK_MODEL_ID.format("us"):
            return DEFAULT_BEDROCK_MODEL_ID

        model_config = model_config or {}
        if model_config.get("model_id"):
            return model_config["model_id"]

        prefix_inference_map = {"ap": "apac"}  # some inference endpoints can be a bit different than the region prefix

        prefix = "-".join(region_name.split("-")[:-2]).lower()  # handles `us-east-1` or `us-gov-east-1`
        if prefix not in {"us", "eu", "ap", "us-gov"}:
            warnings.warn(
                f"""
            ================== WARNING ==================

                This region {region_name} does not support
                our default inference endpoint: {_DEFAULT_BEDROCK_MODEL_ID.format(prefix)}.
                Update the agent to pass in a 'model_id' like so:
                ```
                Agent(..., model='valid_model_id', ...)
                ```
                Documentation: https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles-support.html

            ==================================================
            """,
                stacklevel=2,
            )

        return _DEFAULT_BEDROCK_MODEL_ID.format(prefix_inference_map.get(prefix, prefix))
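The recursive guardrail scan implemented by `_find_detected_and_blocked_policy` above walks arbitrarily nested assessment structures looking for an entry with `action: "BLOCKED"` and a boolean `detected: true`. A minimal standalone sketch of that logic, applied to hypothetical assessment payloads (the field names mirror the method, not a real Bedrock trace):

```python
from collections.abc import ValuesView
from typing import Any


def find_detected_and_blocked_policy(node: Any) -> bool:
    """Recursively check whether any nested policy was detected and blocked."""
    if isinstance(node, dict):
        # A policy entry counts only when action is BLOCKED and detected is a true boolean.
        if node.get("action") == "BLOCKED" and node.get("detected") is True:
            return True
        # Otherwise, keep scanning every value in the dictionary.
        return find_detected_and_blocked_policy(node.values())
    if isinstance(node, (list, ValuesView)):
        return any(find_detected_and_blocked_policy(item) for item in node)
    return False


# Hypothetical assessment payloads for illustration.
blocked = {"contentPolicy": {"filters": [{"action": "BLOCKED", "detected": True}]}}
clean = {"contentPolicy": {"filters": [{"action": "NONE", "detected": False}]}}

print(find_detected_and_blocked_policy(blocked))  # True
print(find_detected_and_blocked_policy(clean))    # False
```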

BedrockConfig

Bases: TypedDict

Configuration options for Bedrock models.

Attributes:

Name Type Description
additional_args dict[str, Any] | None

Any additional arguments to include in the request

additional_request_fields dict[str, Any] | None

Additional fields to include in the Bedrock request

additional_response_field_paths list[str] | None

Additional response field paths to extract

cache_prompt str | None

Cache point type for the system prompt (deprecated, use cache_config)

cache_config CacheConfig | None

Configuration for prompt caching. Use CacheConfig(strategy="auto") for automatic caching.

cache_tools str | None

Cache point type for tools

guardrail_id str | None

ID of the guardrail to apply

guardrail_trace Literal['enabled', 'disabled', 'enabled_full'] | None

Guardrail trace mode. Defaults to enabled.

guardrail_version str | None

Version of the guardrail to apply

guardrail_stream_processing_mode Literal['sync', 'async'] | None

The guardrail processing mode

guardrail_redact_input bool | None

Flag to redact input if a guardrail is triggered. Defaults to True.

guardrail_redact_input_message str | None

If a Bedrock Input guardrail triggers, replace the input with this message.

guardrail_redact_output bool | None

Flag to redact output if guardrail is triggered. Defaults to False.

guardrail_redact_output_message str | None

If a Bedrock Output guardrail triggers, replace output with this message.

guardrail_latest_message bool | None

Flag to send only the latest user message to guardrails. Defaults to False.

max_tokens int | None

Maximum number of tokens to generate in the response

model_id str

The Bedrock model ID (e.g., "us.anthropic.claude-sonnet-4-20250514-v1:0")

include_tool_result_status Literal['auto'] | bool | None

Flag to include status field in tool results. True includes status, False removes status, "auto" determines based on model_id. Defaults to "auto".

stop_sequences list[str] | None

List of sequences that will stop generation when encountered

streaming bool | None

Flag to enable/disable streaming. Defaults to True.

temperature float | None

Controls randomness in generation (higher = more random)

top_p float | None

Controls diversity via nucleus sampling (alternative to temperature)

Source code in strands/models/bedrock.py
class BedrockConfig(TypedDict, total=False):
    """Configuration options for Bedrock models.

    Attributes:
        additional_args: Any additional arguments to include in the request
        additional_request_fields: Additional fields to include in the Bedrock request
        additional_response_field_paths: Additional response field paths to extract
        cache_prompt: Cache point type for the system prompt (deprecated, use cache_config)
        cache_config: Configuration for prompt caching. Use CacheConfig(strategy="auto") for automatic caching.
        cache_tools: Cache point type for tools
        guardrail_id: ID of the guardrail to apply
        guardrail_trace: Guardrail trace mode. Defaults to enabled.
        guardrail_version: Version of the guardrail to apply
        guardrail_stream_processing_mode: The guardrail processing mode
        guardrail_redact_input: Flag to redact input if a guardrail is triggered. Defaults to True.
        guardrail_redact_input_message: If a Bedrock Input guardrail triggers, replace the input with this message.
        guardrail_redact_output: Flag to redact output if guardrail is triggered. Defaults to False.
        guardrail_redact_output_message: If a Bedrock Output guardrail triggers, replace output with this message.
        guardrail_latest_message: Flag to send only the latest user message to guardrails.
            Defaults to False.
        max_tokens: Maximum number of tokens to generate in the response
        model_id: The Bedrock model ID (e.g., "us.anthropic.claude-sonnet-4-20250514-v1:0")
        include_tool_result_status: Flag to include status field in tool results.
            True includes status, False removes status, "auto" determines based on model_id. Defaults to "auto".
        stop_sequences: List of sequences that will stop generation when encountered
        streaming: Flag to enable/disable streaming. Defaults to True.
        temperature: Controls randomness in generation (higher = more random)
        top_p: Controls diversity via nucleus sampling (alternative to temperature)
    """

    additional_args: dict[str, Any] | None
    additional_request_fields: dict[str, Any] | None
    additional_response_field_paths: list[str] | None
    cache_prompt: str | None
    cache_config: CacheConfig | None
    cache_tools: str | None
    guardrail_id: str | None
    guardrail_trace: Literal["enabled", "disabled", "enabled_full"] | None
    guardrail_stream_processing_mode: Literal["sync", "async"] | None
    guardrail_version: str | None
    guardrail_redact_input: bool | None
    guardrail_redact_input_message: str | None
    guardrail_redact_output: bool | None
    guardrail_redact_output_message: str | None
    guardrail_latest_message: bool | None
    max_tokens: int | None
    model_id: str
    include_tool_result_status: Literal["auto"] | bool | None
    stop_sequences: list[str] | None
    streaming: bool | None
    temperature: float | None
    top_p: float | None

__init__(*, boto_session=None, boto_client_config=None, region_name=None, endpoint_url=None, **model_config)

Initialize provider instance.

Parameters:

Name Type Description Default
boto_session Session | None

Boto Session to use when calling the Bedrock Model.

None
boto_client_config Config | None

Configuration to use when creating the Bedrock-Runtime Boto Client.

None
region_name str | None

AWS region to use for the Bedrock service. Defaults to the AWS_REGION environment variable if set, or "us-west-2" if not set.

None
endpoint_url str | None

Custom endpoint URL for VPC endpoints (PrivateLink)

None
**model_config Unpack[BedrockConfig]

Configuration options for the Bedrock model.

{}
Source code in strands/models/bedrock.py
def __init__(
    self,
    *,
    boto_session: boto3.Session | None = None,
    boto_client_config: BotocoreConfig | None = None,
    region_name: str | None = None,
    endpoint_url: str | None = None,
    **model_config: Unpack[BedrockConfig],
):
    """Initialize provider instance.

    Args:
        boto_session: Boto Session to use when calling the Bedrock Model.
        boto_client_config: Configuration to use when creating the Bedrock-Runtime Boto Client.
        region_name: AWS region to use for the Bedrock service.
            Defaults to the AWS_REGION environment variable if set, or "us-west-2" if not set.
        endpoint_url: Custom endpoint URL for VPC endpoints (PrivateLink)
        **model_config: Configuration options for the Bedrock model.
    """
    if region_name and boto_session:
        raise ValueError("Cannot specify both `region_name` and `boto_session`.")

    session = boto_session or boto3.Session()
    resolved_region = region_name or session.region_name or os.environ.get("AWS_REGION") or DEFAULT_BEDROCK_REGION
    self.config = BedrockModel.BedrockConfig(
        model_id=BedrockModel._get_default_model_with_warning(resolved_region, model_config),
        include_tool_result_status="auto",
    )
    self.update_config(**model_config)

    logger.debug("config=<%s> | initializing", self.config)

    # Add strands-agents to the request user agent
    if boto_client_config:
        existing_user_agent = getattr(boto_client_config, "user_agent_extra", None)

        # Append 'strands-agents' to existing user_agent_extra or set it if not present
        if existing_user_agent:
            new_user_agent = f"{existing_user_agent} strands-agents"
        else:
            new_user_agent = "strands-agents"

        client_config = boto_client_config.merge(BotocoreConfig(user_agent_extra=new_user_agent))
    else:
        client_config = BotocoreConfig(user_agent_extra="strands-agents", read_timeout=DEFAULT_READ_TIMEOUT)

    self.client = session.client(
        service_name="bedrock-runtime",
        config=client_config,
        endpoint_url=endpoint_url,
        region_name=resolved_region,
    )

    logger.debug("region=<%s> | bedrock client created", self.client.meta.region_name)

get_config()

Get the current Bedrock Model configuration.

Returns:

Type Description
BedrockConfig

The Bedrock model configuration.

Source code in strands/models/bedrock.py
@override
def get_config(self) -> BedrockConfig:
    """Get the current Bedrock Model configuration.

    Returns:
        The Bedrock model configuration.
    """
    return self.config
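The `_get_default_model_with_warning` helper shown earlier derives an inference-profile prefix from the region name by dropping the last two region segments, then remaps prefixes whose inference endpoint differs from the raw prefix (currently only `ap` → `apac`). A standalone sketch of that derivation:

```python
# Some inference endpoints differ from the raw region prefix.
PREFIX_INFERENCE_MAP = {"ap": "apac"}


def inference_prefix(region_name: str) -> str:
    """Derive the inference-profile prefix from an AWS region name."""
    # Dropping the last two segments handles both `us-east-1` and `us-gov-east-1`.
    prefix = "-".join(region_name.split("-")[:-2]).lower()
    return PREFIX_INFERENCE_MAP.get(prefix, prefix)


print(inference_prefix("us-east-1"))       # us
print(inference_prefix("us-gov-east-1"))   # us-gov
print(inference_prefix("ap-southeast-2"))  # apac
```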

stream(messages, tool_specs=None, system_prompt=None, *, tool_choice=None, system_prompt_content=None, **kwargs) async

Stream conversation with the Bedrock model.

This method calls either the Bedrock converse_stream API or the converse API based on the streaming parameter in the configuration.

Parameters:

Name Type Description Default
messages Messages

List of message objects to be processed by the model.

required
tool_specs list[ToolSpec] | None

List of tool specifications to make available to the model.

None
system_prompt str | None

System prompt to provide context to the model.

None
tool_choice ToolChoice | None

Selection strategy for tool invocation.

None
system_prompt_content list[SystemContentBlock] | None

System prompt content blocks to provide context to the model.

None
**kwargs Any

Additional keyword arguments for future extensibility.

{}

Yields:

Type Description
AsyncGenerator[StreamEvent, None]

Model events.

Raises:

Type Description
ContextWindowOverflowException

If the input exceeds the model's context window.

ModelThrottledException

If the model service is throttling requests.

Source code in strands/models/bedrock.py
@override
async def stream(
    self,
    messages: Messages,
    tool_specs: list[ToolSpec] | None = None,
    system_prompt: str | None = None,
    *,
    tool_choice: ToolChoice | None = None,
    system_prompt_content: list[SystemContentBlock] | None = None,
    **kwargs: Any,
) -> AsyncGenerator[StreamEvent, None]:
    """Stream conversation with the Bedrock model.

    This method calls either the Bedrock converse_stream API or the converse API
    based on the streaming parameter in the configuration.

    Args:
        messages: List of message objects to be processed by the model.
        tool_specs: List of tool specifications to make available to the model.
        system_prompt: System prompt to provide context to the model.
        tool_choice: Selection strategy for tool invocation.
        system_prompt_content: System prompt content blocks to provide context to the model.
        **kwargs: Additional keyword arguments for future extensibility.

    Yields:
        Model events.

    Raises:
        ContextWindowOverflowException: If the input exceeds the model's context window.
        ModelThrottledException: If the model service is throttling requests.
    """

    def callback(event: StreamEvent | None = None) -> None:
        loop.call_soon_threadsafe(queue.put_nowait, event)
        if event is None:
            return

    loop = asyncio.get_event_loop()
    queue: asyncio.Queue[StreamEvent | None] = asyncio.Queue()

    # Handle backward compatibility: if system_prompt is provided but system_prompt_content is None
    if system_prompt and system_prompt_content is None:
        system_prompt_content = [{"text": system_prompt}]

    thread = asyncio.to_thread(self._stream, callback, messages, tool_specs, system_prompt_content, tool_choice)
    task = asyncio.create_task(thread)

    while True:
        event = await queue.get()
        if event is None:
            break

        yield event

    await task
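The bridge above (a blocking, callback-driven call running in a worker thread, relayed to the event loop through an asyncio.Queue with a None sentinel) can be reproduced in isolation. A minimal sketch, where `blocking_producer` is an illustrative stand-in for the SDK's internal `_stream` call:

```python
import asyncio


async def stream_from_thread(blocking_producer):
    """Bridge a blocking, callback-based producer into an async generator.

    blocking_producer(callback) must invoke callback(event) for each event
    and callback(None) once finished, mirroring the sentinel protocol used
    by BedrockModel.stream.
    """
    loop = asyncio.get_running_loop()
    queue: asyncio.Queue = asyncio.Queue()

    def callback(event=None):
        # Thread-safe hand-off from the worker thread to the event loop.
        loop.call_soon_threadsafe(queue.put_nowait, event)

    task = asyncio.create_task(asyncio.to_thread(blocking_producer, callback))
    while True:
        event = await queue.get()
        if event is None:  # sentinel: the producer is done
            break
        yield event
    await task  # surface any exception raised in the worker thread


async def main():
    def producer(cb):
        for i in range(3):
            cb({"chunk": i})
        cb(None)

    return [event async for event in stream_from_thread(producer)]


print(asyncio.run(main()))  # [{'chunk': 0}, {'chunk': 1}, {'chunk': 2}]
```

The None sentinel is what lets the async side know the worker thread has finished without polling the task.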

structured_output(output_model, prompt, system_prompt=None, **kwargs) async

Get structured output from the model.

Parameters:

Name Type Description Default
output_model type[T]

The output model to use for the agent.

required
prompt Messages

The prompt messages to use for the agent.

required
system_prompt str | None

System prompt to provide context to the model.

None
**kwargs Any

Additional keyword arguments for future extensibility.

{}

Yields:

Type Description
AsyncGenerator[dict[str, T | Any], None]

Model events with the last being the structured output.

Source code in strands/models/bedrock.py
@override
async def structured_output(
    self,
    output_model: type[T],
    prompt: Messages,
    system_prompt: str | None = None,
    **kwargs: Any,
) -> AsyncGenerator[dict[str, T | Any], None]:
    """Get structured output from the model.

    Args:
        output_model: The output model to use for the agent.
        prompt: The prompt messages to use for the agent.
        system_prompt: System prompt to provide context to the model.
        **kwargs: Additional keyword arguments for future extensibility.

    Yields:
        Model events with the last being the structured output.
    """
    tool_spec = convert_pydantic_to_tool_spec(output_model)

    response = self.stream(
        messages=prompt,
        tool_specs=[tool_spec],
        system_prompt=system_prompt,
        tool_choice=cast(ToolChoice, {"any": {}}),
        **kwargs,
    )
    async for event in streaming.process_stream(response):
        yield event

    stop_reason, messages, _, _ = event["stop"]

    if stop_reason != "tool_use":
        raise ValueError(f'Model returned stop_reason: {stop_reason} instead of "tool_use".')

    content = messages["content"]
    output_response: dict[str, Any] | None = None
    for block in content:
        # Accept only toolUse blocks whose name matches the tool spec; skip everything else.
        # If no block ever matches, output_response stays None and an error is raised below.
        if block.get("toolUse") and block["toolUse"]["name"] == tool_spec["name"]:
            output_response = block["toolUse"]["input"]

    if output_response is None:
        raise ValueError("No valid tool use or tool use input was found in the Bedrock response.")

    yield {"output": output_model(**output_response)}
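The block-selection loop at the end of structured_output can be exercised on its own. A self-contained sketch of the same logic, using plain dicts in place of real Bedrock content blocks:

```python
def extract_tool_input(content: list[dict], tool_name: str) -> dict:
    """Return the input of the last toolUse block whose name matches tool_name.

    Mirrors the selection loop in structured_output: blocks that are not tool
    uses, or whose name does not match, are skipped.
    """
    output = None
    for block in content:
        if block.get("toolUse") and block["toolUse"]["name"] == tool_name:
            output = block["toolUse"]["input"]
    if output is None:
        raise ValueError("No valid tool use or tool use input was found.")
    return output


content = [
    {"text": "Calling the tool now."},
    {"toolUse": {"name": "Person", "input": {"name": "Ada", "age": 36}}},
]
print(extract_tool_input(content, "Person"))  # {'name': 'Ada', 'age': 36}
```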

update_config(**model_config)

Update the Bedrock Model configuration with the provided arguments.

Parameters:

Name Type Description Default
**model_config Unpack[BedrockConfig]

Configuration overrides.

{}
Source code in strands/models/bedrock.py
@override
def update_config(self, **model_config: Unpack[BedrockConfig]) -> None:  # type: ignore
    """Update the Bedrock Model configuration with the provided arguments.

    Args:
        **model_config: Configuration overrides.
    """
    validate_config_keys(model_config, self.BedrockConfig)
    self.config.update(model_config)

BeforeInvocationEvent dataclass

Bases: HookEvent

Event triggered at the beginning of a new agent request.

This event is fired before the agent begins processing a new user request, before any model inference or tool execution occurs. Hook providers can use this event to perform request-level setup, logging, or validation.

This event is triggered at the beginning of the following API calls:
  • Agent.__call__
  • Agent.stream_async
  • Agent.structured_output

Attributes:

Name Type Description
invocation_state dict[str, Any]

State and configuration passed through the agent invocation. This can include shared context for multi-agent coordination, request tracking, and dynamic configuration.

messages Messages | None

The input messages for this invocation. Can be modified by hooks to redact or transform content before processing.

Source code in strands/hooks/events.py
@dataclass
class BeforeInvocationEvent(HookEvent):
    """Event triggered at the beginning of a new agent request.

    This event is fired before the agent begins processing a new user request,
    before any model inference or tool execution occurs. Hook providers can
    use this event to perform request-level setup, logging, or validation.

    This event is triggered at the beginning of the following API calls:
      - Agent.__call__
      - Agent.stream_async
      - Agent.structured_output

    Attributes:
        invocation_state: State and configuration passed through the agent invocation.
            This can include shared context for multi-agent coordination, request tracking,
            and dynamic configuration.
        messages: The input messages for this invocation. Can be modified by hooks
            to redact or transform content before processing.
    """

    invocation_state: dict[str, Any] = field(default_factory=dict)
    messages: Messages | None = None

    def _can_write(self, name: str) -> bool:
        return name == "messages"

ConcurrencyException

Bases: Exception

Exception raised when concurrent invocations are attempted on an agent instance.

Agent instances maintain internal state that cannot be safely accessed concurrently. This exception is raised when an invocation is attempted while another invocation is already in progress on the same agent instance.

Source code in strands/types/exceptions.py
class ConcurrencyException(Exception):
    """Exception raised when concurrent invocations are attempted on an agent instance.

    Agent instances maintain internal state that cannot be safely accessed concurrently.
    This exception is raised when an invocation is attempted while another invocation
    is already in progress on the same agent instance.
    """

    pass
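A toy sketch of the single-flight guard this exception implies (illustrative only; the SDK's actual concurrency check may be implemented differently):

```python
import asyncio


class ConcurrencyError(Exception):
    """Toy stand-in for ConcurrencyException."""


class ToyAgent:
    def __init__(self):
        self._busy = False

    async def invoke(self, delay: float) -> str:
        if self._busy:
            raise ConcurrencyError("invocation already in progress")
        self._busy = True
        try:
            await asyncio.sleep(delay)
            return "done"
        finally:
            self._busy = False


async def main():
    agent = ToyAgent()
    first = asyncio.create_task(agent.invoke(0.05))
    await asyncio.sleep(0.01)  # let the first invocation start
    try:
        await agent.invoke(0)
    except ConcurrencyError:
        second = "rejected"
    return await first, second


print(asyncio.run(main()))  # ('done', 'rejected')
```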

ConcurrentToolExecutor

Bases: ToolExecutor

Concurrent tool executor.

Source code in strands/tools/executors/concurrent.py
class ConcurrentToolExecutor(ToolExecutor):
    """Concurrent tool executor."""

    @override
    async def _execute(
        self,
        agent: "Agent",
        tool_uses: list[ToolUse],
        tool_results: list[ToolResult],
        cycle_trace: Trace,
        cycle_span: Any,
        invocation_state: dict[str, Any],
        structured_output_context: "StructuredOutputContext | None" = None,
    ) -> AsyncGenerator[TypedEvent, None]:
        """Execute tools concurrently.

        Args:
            agent: The agent for which tools are being executed.
            tool_uses: Metadata and inputs for the tools to be executed.
            tool_results: List of tool results from each tool execution.
            cycle_trace: Trace object for the current event loop cycle.
            cycle_span: Span object for tracing the cycle.
            invocation_state: Context for the tool invocation.
            structured_output_context: Context for structured output handling.

        Yields:
            Events from the tool execution stream.
        """
        task_queue: asyncio.Queue[tuple[int, Any]] = asyncio.Queue()
        task_events = [asyncio.Event() for _ in tool_uses]
        stop_event = object()

        tasks = [
            asyncio.create_task(
                self._task(
                    agent,
                    tool_use,
                    tool_results,
                    cycle_trace,
                    cycle_span,
                    invocation_state,
                    task_id,
                    task_queue,
                    task_events[task_id],
                    stop_event,
                    structured_output_context,
                )
            )
            for task_id, tool_use in enumerate(tool_uses)
        ]

        task_count = len(tasks)
        while task_count:
            task_id, event = await task_queue.get()
            if event is stop_event:
                task_count -= 1
                continue

            yield event
            task_events[task_id].set()

    async def _task(
        self,
        agent: "Agent",
        tool_use: ToolUse,
        tool_results: list[ToolResult],
        cycle_trace: Trace,
        cycle_span: Any,
        invocation_state: dict[str, Any],
        task_id: int,
        task_queue: asyncio.Queue,
        task_event: asyncio.Event,
        stop_event: object,
        structured_output_context: "StructuredOutputContext | None",
    ) -> None:
        """Execute a single tool and put results in the task queue.

        Args:
            agent: The agent executing the tool.
            tool_use: Tool use metadata and inputs.
            tool_results: List of tool results from each tool execution.
            cycle_trace: Trace object for the current event loop cycle.
            cycle_span: Span object for tracing the cycle.
            invocation_state: Context for tool execution.
            task_id: Unique identifier for this task.
            task_queue: Queue to put tool events into.
            task_event: Event to signal when task can continue.
            stop_event: Sentinel object to signal task completion.
            structured_output_context: Context for structured output handling.
        """
        try:
            events = ToolExecutor._stream_with_trace(
                agent, tool_use, tool_results, cycle_trace, cycle_span, invocation_state, structured_output_context
            )
            async for event in events:
                task_queue.put_nowait((task_id, event))
                await task_event.wait()
                task_event.clear()

        finally:
            task_queue.put_nowait((task_id, stop_event))
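The queue-and-event handshake above, where each task pushes one event and then waits for the consumer to acknowledge it before producing the next, can be reproduced in miniature outside the SDK:

```python
import asyncio


async def producer(task_id, n, queue, resume, stop):
    try:
        for i in range(n):
            queue.put_nowait((task_id, f"t{task_id}-e{i}"))
            await resume.wait()  # pause until the consumer has handled the event
            resume.clear()
    finally:
        queue.put_nowait((task_id, stop))  # per-task completion sentinel


async def run(counts):
    queue: asyncio.Queue = asyncio.Queue()
    resumes = [asyncio.Event() for _ in counts]
    stop = object()
    tasks = [
        asyncio.create_task(producer(i, n, queue, resumes[i], stop))
        for i, n in enumerate(counts)
    ]
    seen = []
    remaining = len(tasks)
    while remaining:
        task_id, event = await queue.get()
        if event is stop:
            remaining -= 1
            continue
        seen.append(event)
        resumes[task_id].set()  # acknowledge, letting that task continue
    await asyncio.gather(*tasks)
    return seen


events = asyncio.run(run([2, 1]))
print(sorted(events))  # ['t0-e0', 't0-e1', 't1-e0']
```

Interleaving between tasks is nondeterministic, but the handshake guarantees each task emits at most one unacknowledged event at a time.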

ContentBlock

Bases: TypedDict

A block of content for a message that you pass to, or receive from, a model.

Attributes:

Name Type Description
cachePoint CachePoint

A cache point configuration to optimize conversation history.

document DocumentContent

A document to include in the message.

guardContent GuardContent

Contains the content to assess with the guardrail.

image ImageContent

Image to include in the message.

reasoningContent ReasoningContentBlock

Contains content regarding the reasoning that is carried out by the model.

text str

Text to include in the message.

toolResult ToolResult

The result for a tool request that a model makes.

toolUse ToolUse

Information about a tool use request from a model.

video VideoContent

Video to include in the message.

citationsContent CitationsContentBlock

Contains the citations for a document.

Source code in strands/types/content.py
class ContentBlock(TypedDict, total=False):
    """A block of content for a message that you pass to, or receive from, a model.

    Attributes:
        cachePoint: A cache point configuration to optimize conversation history.
        document: A document to include in the message.
        guardContent: Contains the content to assess with the guardrail.
        image: Image to include in the message.
        reasoningContent: Contains content regarding the reasoning that is carried out by the model.
        text: Text to include in the message.
        toolResult: The result for a tool request that a model makes.
        toolUse: Information about a tool use request from a model.
        video: Video to include in the message.
        citationsContent: Contains the citations for a document.
    """

    cachePoint: CachePoint
    document: DocumentContent
    guardContent: GuardContent
    image: ImageContent
    reasoningContent: ReasoningContentBlock
    text: str
    toolResult: ToolResult
    toolUse: ToolUse
    video: VideoContent
    citationsContent: CitationsContentBlock
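Since ContentBlock is declared with total=False, a block is an ordinary dict carrying whichever field applies. A user message mixing text and an image might look like this (the image bytes are placeholder data, and the nested image structure follows Bedrock Converse conventions):

```python
message = {
    "role": "user",
    "content": [
        {"text": "What is in this picture?"},
        {"image": {"format": "png", "source": {"bytes": b"\x89PNG..."}}},
    ],
}

# Each block typically carries exactly one of the ContentBlock keys.
kinds = [next(iter(block)) for block in message["content"]]
print(kinds)  # ['text', 'image']
```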

ContextWindowOverflowException

Bases: Exception

Exception raised when the context window is exceeded.

This exception is raised when the input to a model exceeds the maximum context window size that the model can handle. This typically occurs when the combined length of the conversation history, system prompt, and current message is too large for the model to process.

Source code in strands/types/exceptions.py
class ContextWindowOverflowException(Exception):
    """Exception raised when the context window is exceeded.

    This exception is raised when the input to a model exceeds the maximum context window size that the model can
    handle. This typically occurs when the combined length of the conversation history, system prompt, and current
    message is too large for the model to process.
    """

    pass

ConversationManager

Bases: ABC, HookProvider

Abstract base class for managing conversation history.

This class provides an interface for implementing conversation management strategies to control the size of message arrays/conversation histories, helping to:

  • Manage memory usage
  • Control context length
  • Maintain relevant conversation state

ConversationManager implements the HookProvider protocol, allowing derived classes to register hooks for agent lifecycle events. Derived classes that override register_hooks must call the base implementation to ensure proper hook registration.

Example
class MyConversationManager(ConversationManager):
    def register_hooks(self, registry: HookRegistry, **kwargs: Any) -> None:
        super().register_hooks(registry, **kwargs)
        # Register additional hooks here
Source code in strands/agent/conversation_manager/conversation_manager.py
class ConversationManager(ABC, HookProvider):
    """Abstract base class for managing conversation history.

    This class provides an interface for implementing conversation management strategies to control the size of message
    arrays/conversation histories, helping to:

    - Manage memory usage
    - Control context length
    - Maintain relevant conversation state

    ConversationManager implements the HookProvider protocol, allowing derived classes to register hooks for agent
    lifecycle events. Derived classes that override register_hooks must call the base implementation to ensure proper
    hook registration.

    Example:
        ```python
        class MyConversationManager(ConversationManager):
            def register_hooks(self, registry: HookRegistry, **kwargs: Any) -> None:
                super().register_hooks(registry, **kwargs)
                # Register additional hooks here
        ```
    """

    def __init__(self) -> None:
        """Initialize the ConversationManager.

        Attributes:
          removed_message_count: The messages that have been removed from the agent's messages array.
              These represent messages provided by the user or LLM that have been removed, not messages
              included by the conversation manager through something like summarization.
        """
        self.removed_message_count = 0

    def register_hooks(self, registry: HookRegistry, **kwargs: Any) -> None:
        """Register hooks for agent lifecycle events.

        Derived classes that override this method must call the base implementation to ensure proper hook
        registration chain.

        Args:
            registry: The hook registry to register callbacks with.
            **kwargs: Additional keyword arguments for future extensibility.

        Example:
            ```python
            def register_hooks(self, registry: HookRegistry, **kwargs: Any) -> None:
                super().register_hooks(registry, **kwargs)
                registry.add_callback(SomeEvent, self.on_some_event)
            ```
        """
        pass

    def restore_from_session(self, state: dict[str, Any]) -> list[Message] | None:
        """Restore the Conversation Manager's state from a session.

        Args:
            state: Previous state of the conversation manager
        Returns:
            Optional list of messages to prepend to the agent's messages. By default returns None.
        """
        if state.get("__name__") != self.__class__.__name__:
            raise ValueError("Invalid conversation manager state.")
        self.removed_message_count = state["removed_message_count"]
        return None

    def get_state(self) -> dict[str, Any]:
        """Get the current state of a Conversation Manager as a JSON-serializable dictionary."""
        return {
            "__name__": self.__class__.__name__,
            "removed_message_count": self.removed_message_count,
        }

    @abstractmethod
    def apply_management(self, agent: "Agent", **kwargs: Any) -> None:
        """Applies management strategy to the provided agent.

        Processes the conversation history to maintain appropriate size by modifying the messages list in-place.
        Implementations should handle message pruning, summarization, or other size management techniques to keep the
        conversation context within desired bounds.

        Args:
            agent: The agent whose conversation history will be managed.
                This list is modified in-place.
            **kwargs: Additional keyword arguments for future extensibility.
        """
        pass

    @abstractmethod
    def reduce_context(self, agent: "Agent", e: Exception | None = None, **kwargs: Any) -> None:
        """Called when the model's context window is exceeded.

        This method should implement the specific strategy for reducing the window size when a context overflow occurs.
        It is typically called after a ContextWindowOverflowException is caught.

        Implementations might use strategies such as:

        - Removing the N oldest messages
        - Summarizing older context
        - Applying importance-based filtering
        - Maintaining critical conversation markers

        Args:
            agent: The agent whose conversation history will be reduced.
                This list is modified in-place.
            e: The exception that triggered the context reduction, if any.
            **kwargs: Additional keyword arguments for future extensibility.
        """
        pass
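As an illustration of what apply_management implementations do, here is the core of a sliding-window trim, written as a standalone function over plain message dicts rather than as a ConversationManager subclass:

```python
def trim_to_window(messages: list[dict], window_size: int) -> int:
    """Drop the oldest messages in-place until at most window_size remain.

    Returns the number of messages removed, which a ConversationManager
    implementation would add to its removed_message_count.
    """
    removed = max(0, len(messages) - window_size)
    del messages[:removed]
    return removed


history = [{"role": "user", "content": [{"text": f"msg {i}"}]} for i in range(5)]
removed = trim_to_window(history, 3)
print(removed, [m["content"][0]["text"] for m in history])  # 2 ['msg 2', 'msg 3', 'msg 4']
```

A real implementation must also take care not to split tool-use/tool-result message pairs when choosing a cut point.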

__init__()

Initialize the ConversationManager.

Attributes:

Name Type Description
removed_message_count

The messages that have been removed from the agent's messages array. These represent messages provided by the user or LLM that have been removed, not messages included by the conversation manager through something like summarization.

Source code in strands/agent/conversation_manager/conversation_manager.py
def __init__(self) -> None:
    """Initialize the ConversationManager.

    Attributes:
      removed_message_count: The messages that have been removed from the agent's messages array.
          These represent messages provided by the user or LLM that have been removed, not messages
          included by the conversation manager through something like summarization.
    """
    self.removed_message_count = 0

apply_management(agent, **kwargs) abstractmethod

Applies management strategy to the provided agent.

Processes the conversation history to maintain appropriate size by modifying the messages list in-place. Implementations should handle message pruning, summarization, or other size management techniques to keep the conversation context within desired bounds.

Parameters:

Name Type Description Default
agent Agent

The agent whose conversation history will be managed. This list is modified in-place.

required
**kwargs Any

Additional keyword arguments for future extensibility.

{}
Source code in strands/agent/conversation_manager/conversation_manager.py
@abstractmethod
def apply_management(self, agent: "Agent", **kwargs: Any) -> None:
    """Applies management strategy to the provided agent.

    Processes the conversation history to maintain appropriate size by modifying the messages list in-place.
    Implementations should handle message pruning, summarization, or other size management techniques to keep the
    conversation context within desired bounds.

    Args:
        agent: The agent whose conversation history will be managed.
            This list is modified in-place.
        **kwargs: Additional keyword arguments for future extensibility.
    """
    pass

get_state()

Get the current state of a Conversation Manager as a JSON-serializable dictionary.

Source code in strands/agent/conversation_manager/conversation_manager.py
def get_state(self) -> dict[str, Any]:
    """Get the current state of a Conversation Manager as a JSON-serializable dictionary."""
    return {
        "__name__": self.__class__.__name__,
        "removed_message_count": self.removed_message_count,
    }

reduce_context(agent, e=None, **kwargs) abstractmethod

Called when the model's context window is exceeded.

This method should implement the specific strategy for reducing the window size when a context overflow occurs. It is typically called after a ContextWindowOverflowException is caught.

Implementations might use strategies such as:

  • Removing the N oldest messages
  • Summarizing older context
  • Applying importance-based filtering
  • Maintaining critical conversation markers

Parameters:

Name Type Description Default
agent Agent

The agent whose conversation history will be reduced. This list is modified in-place.

required
e Exception | None

The exception that triggered the context reduction, if any.

None
**kwargs Any

Additional keyword arguments for future extensibility.

{}
Source code in strands/agent/conversation_manager/conversation_manager.py
@abstractmethod
def reduce_context(self, agent: "Agent", e: Exception | None = None, **kwargs: Any) -> None:
    """Called when the model's context window is exceeded.

    This method should implement the specific strategy for reducing the window size when a context overflow occurs.
    It is typically called after a ContextWindowOverflowException is caught.

    Implementations might use strategies such as:

    - Removing the N oldest messages
    - Summarizing older context
    - Applying importance-based filtering
    - Maintaining critical conversation markers

    Args:
        agent: The agent whose conversation history will be reduced.
            This list is modified in-place.
        e: The exception that triggered the context reduction, if any.
        **kwargs: Additional keyword arguments for future extensibility.
    """
    pass

register_hooks(registry, **kwargs)

Register hooks for agent lifecycle events.

Derived classes that override this method must call the base implementation to ensure proper hook registration chain.

Parameters:

Name Type Description Default
registry HookRegistry

The hook registry to register callbacks with.

required
**kwargs Any

Additional keyword arguments for future extensibility.

{}
Example
def register_hooks(self, registry: HookRegistry, **kwargs: Any) -> None:
    super().register_hooks(registry, **kwargs)
    registry.add_callback(SomeEvent, self.on_some_event)
Source code in strands/agent/conversation_manager/conversation_manager.py
def register_hooks(self, registry: HookRegistry, **kwargs: Any) -> None:
    """Register hooks for agent lifecycle events.

    Derived classes that override this method must call the base implementation to ensure proper hook
    registration chain.

    Args:
        registry: The hook registry to register callbacks with.
        **kwargs: Additional keyword arguments for future extensibility.

    Example:
        ```python
        def register_hooks(self, registry: HookRegistry, **kwargs: Any) -> None:
            super().register_hooks(registry, **kwargs)
            registry.add_callback(SomeEvent, self.on_some_event)
        ```
    """
    pass

restore_from_session(state)

Restore the Conversation Manager's state from a session.

Parameters:

Name Type Description Default
state dict[str, Any]

Previous state of the conversation manager

required

Returns: Optional list of messages to prepend to the agent's messages. By default returns None.

Source code in strands/agent/conversation_manager/conversation_manager.py
def restore_from_session(self, state: dict[str, Any]) -> list[Message] | None:
    """Restore the Conversation Manager's state from a session.

    Args:
        state: Previous state of the conversation manager
    Returns:
        Optional list of messages to prepend to the agent's messages. By default returns None.
    """
    if state.get("__name__") != self.__class__.__name__:
        raise ValueError("Invalid conversation manager state.")
    self.removed_message_count = state["removed_message_count"]
    return None
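get_state and restore_from_session form a serialization round trip, with the __name__ check guarding against restoring state saved by a different manager class. A standalone sketch of that contract:

```python
class ToyManager:
    """Minimal stand-in following the get_state/restore_from_session contract."""

    def __init__(self):
        self.removed_message_count = 0

    def get_state(self) -> dict:
        return {
            "__name__": type(self).__name__,
            "removed_message_count": self.removed_message_count,
        }

    def restore_from_session(self, state: dict) -> None:
        if state.get("__name__") != type(self).__name__:
            raise ValueError("Invalid conversation manager state.")
        self.removed_message_count = state["removed_message_count"]


a = ToyManager()
a.removed_message_count = 4
state = a.get_state()  # JSON-serializable, e.g. persisted with a session
b = ToyManager()
b.restore_from_session(state)
print(b.removed_message_count)  # 4
```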

EventLoopMetrics dataclass

Aggregated metrics for an event loop's execution.

Attributes:

Name Type Description
cycle_count int

Number of event loop cycles executed.

tool_metrics dict[str, ToolMetrics]

Metrics for each tool used, keyed by tool name.

cycle_durations list[float]

List of durations for each cycle in seconds.

agent_invocations list[AgentInvocation]

Agent invocation metrics containing cycles and usage data.

traces list[Trace]

List of execution traces.

accumulated_usage Usage

Accumulated token usage across all model invocations (across all requests).

accumulated_metrics Metrics

Accumulated performance metrics across all model invocations.

Source code in strands/telemetry/metrics.py
@dataclass
class EventLoopMetrics:
    """Aggregated metrics for an event loop's execution.

    Attributes:
        cycle_count: Number of event loop cycles executed.
        tool_metrics: Metrics for each tool used, keyed by tool name.
        cycle_durations: List of durations for each cycle in seconds.
        agent_invocations: Agent invocation metrics containing cycles and usage data.
        traces: List of execution traces.
        accumulated_usage: Accumulated token usage across all model invocations (across all requests).
        accumulated_metrics: Accumulated performance metrics across all model invocations.
    """

    cycle_count: int = 0
    tool_metrics: dict[str, ToolMetrics] = field(default_factory=dict)
    cycle_durations: list[float] = field(default_factory=list)
    agent_invocations: list[AgentInvocation] = field(default_factory=list)
    traces: list[Trace] = field(default_factory=list)
    accumulated_usage: Usage = field(default_factory=lambda: Usage(inputTokens=0, outputTokens=0, totalTokens=0))
    accumulated_metrics: Metrics = field(default_factory=lambda: Metrics(latencyMs=0))

    @property
    def _metrics_client(self) -> "MetricsClient":
        """Get the singleton MetricsClient instance."""
        return MetricsClient()

    @property
    def latest_agent_invocation(self) -> AgentInvocation | None:
        """Get the most recent agent invocation.

        Returns:
            The most recent AgentInvocation, or None if no invocations exist.
        """
        return self.agent_invocations[-1] if self.agent_invocations else None

    def start_cycle(
        self,
        attributes: dict[str, Any],
    ) -> tuple[float, Trace]:
        """Start a new event loop cycle and create a trace for it.

        Args:
            attributes: attributes of the metrics, including event_loop_cycle_id.

        Returns:
            A tuple containing the start time and the cycle trace object.
        """
        self._metrics_client.event_loop_cycle_count.add(1, attributes=attributes)
        self._metrics_client.event_loop_start_cycle.add(1, attributes=attributes)
        self.cycle_count += 1
        start_time = time.time()
        cycle_trace = Trace(f"Cycle {self.cycle_count}", start_time=start_time)
        self.traces.append(cycle_trace)

        self.agent_invocations[-1].cycles.append(
            EventLoopCycleMetric(
                event_loop_cycle_id=attributes["event_loop_cycle_id"],
                usage=Usage(inputTokens=0, outputTokens=0, totalTokens=0),
            )
        )

        return start_time, cycle_trace

    def end_cycle(self, start_time: float, cycle_trace: Trace, attributes: dict[str, Any] | None = None) -> None:
        """End the current event loop cycle and record its duration.

        Args:
            start_time: The timestamp when the cycle started.
            cycle_trace: The trace object for this cycle.
            attributes: attributes of the metrics.
        """
        self._metrics_client.event_loop_end_cycle.add(1, attributes)
        end_time = time.time()
        duration = end_time - start_time
        self._metrics_client.event_loop_cycle_duration.record(duration, attributes)
        self.cycle_durations.append(duration)
        cycle_trace.end(end_time)

    def add_tool_usage(
        self,
        tool: ToolUse,
        duration: float,
        tool_trace: Trace,
        success: bool,
        message: Message,
    ) -> None:
        """Record metrics for a tool invocation.

        Args:
            tool: The tool that was used.
            duration: How long the tool call took in seconds.
            tool_trace: The trace object for this tool call.
            success: Whether the tool call was successful.
            message: The message associated with the tool call.
        """
        tool_name = tool.get("name", "unknown_tool")
        tool_use_id = tool.get("toolUseId", "unknown")

        tool_trace.metadata.update(
            {
                "toolUseId": tool_use_id,
                "tool_name": tool_name,
            }
        )
        tool_trace.raw_name = f"{tool_name} - {tool_use_id}"
        tool_trace.add_message(message)

        self.tool_metrics.setdefault(tool_name, ToolMetrics(tool)).add_call(
            tool,
            duration,
            success,
            self._metrics_client,
            attributes={
                "tool_name": tool_name,
                "tool_use_id": tool_use_id,
            },
        )
        tool_trace.end()

    def _accumulate_usage(self, target: Usage, source: Usage) -> None:
        """Helper method to accumulate usage from source to target.

        Args:
            target: The Usage object to accumulate into.
            source: The Usage object to accumulate from.
        """
        target["inputTokens"] += source["inputTokens"]
        target["outputTokens"] += source["outputTokens"]
        target["totalTokens"] += source["totalTokens"]

        if "cacheReadInputTokens" in source:
            target["cacheReadInputTokens"] = target.get("cacheReadInputTokens", 0) + source["cacheReadInputTokens"]

        if "cacheWriteInputTokens" in source:
            target["cacheWriteInputTokens"] = target.get("cacheWriteInputTokens", 0) + source["cacheWriteInputTokens"]

    def update_usage(self, usage: Usage) -> None:
        """Update the accumulated token usage with new usage data.

        Args:
            usage: The usage data to add to the accumulated totals.
        """
        # Record metrics to OpenTelemetry
        self._metrics_client.event_loop_input_tokens.record(usage["inputTokens"])
        self._metrics_client.event_loop_output_tokens.record(usage["outputTokens"])

        # Handle optional cached token metrics for OpenTelemetry
        if "cacheReadInputTokens" in usage:
            self._metrics_client.event_loop_cache_read_input_tokens.record(usage["cacheReadInputTokens"])
        if "cacheWriteInputTokens" in usage:
            self._metrics_client.event_loop_cache_write_input_tokens.record(usage["cacheWriteInputTokens"])

        self._accumulate_usage(self.accumulated_usage, usage)
        self._accumulate_usage(self.agent_invocations[-1].usage, usage)

        if self.agent_invocations[-1].cycles:
            current_cycle = self.agent_invocations[-1].cycles[-1]
            self._accumulate_usage(current_cycle.usage, usage)

    def reset_usage_metrics(self) -> None:
        """Start a new agent invocation by creating a new AgentInvocation.

        This should be called at the start of a new request to begin tracking
        a new agent invocation with fresh usage and cycle data.
        """
        self.agent_invocations.append(AgentInvocation())

    def update_metrics(self, metrics: Metrics) -> None:
        """Update the accumulated performance metrics with new metrics data.

        Args:
            metrics: The metrics data to add to the accumulated totals.
        """
        self._metrics_client.event_loop_latency.record(metrics["latencyMs"])
        if metrics.get("timeToFirstByteMs") is not None:
            self._metrics_client.model_time_to_first_token.record(metrics["timeToFirstByteMs"])
        self.accumulated_metrics["latencyMs"] += metrics["latencyMs"]

    def get_summary(self) -> dict[str, Any]:
        """Generate a comprehensive summary of all collected metrics.

        Returns:
            A dictionary containing summarized metrics data.
            This includes cycle statistics, tool usage, traces, and accumulated usage information.
        """
        summary = {
            "total_cycles": self.cycle_count,
            "total_duration": sum(self.cycle_durations),
            "average_cycle_time": (sum(self.cycle_durations) / self.cycle_count if self.cycle_count > 0 else 0),
            "tool_usage": {
                tool_name: {
                    "tool_info": {
                        "tool_use_id": metrics.tool.get("toolUseId", "N/A"),
                        "name": metrics.tool.get("name", "unknown"),
                        "input_params": metrics.tool.get("input", {}),
                    },
                    "execution_stats": {
                        "call_count": metrics.call_count,
                        "success_count": metrics.success_count,
                        "error_count": metrics.error_count,
                        "total_time": metrics.total_time,
                        "average_time": (metrics.total_time / metrics.call_count if metrics.call_count > 0 else 0),
                        "success_rate": (metrics.success_count / metrics.call_count if metrics.call_count > 0 else 0),
                    },
                }
                for tool_name, metrics in self.tool_metrics.items()
            },
            "traces": [trace.to_dict() for trace in self.traces],
            "accumulated_usage": self.accumulated_usage,
            "accumulated_metrics": self.accumulated_metrics,
            "agent_invocations": [
                {
                    "usage": invocation.usage,
                    "cycles": [
                        {"event_loop_cycle_id": cycle.event_loop_cycle_id, "usage": cycle.usage}
                        for cycle in invocation.cycles
                    ],
                }
                for invocation in self.agent_invocations
            ],
        }
        return summary
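
The `_accumulate_usage` helper above merges optional cache-token counts only when the source record actually reports them. A minimal standalone sketch of that merge rule, using plain dicts in place of the SDK's `Usage` type:

```python
def accumulate_usage(target: dict, source: dict) -> None:
    """Add token counts from source into target, mirroring the merge rule above."""
    for key in ("inputTokens", "outputTokens", "totalTokens"):
        target[key] += source[key]
    # Cache token counts are optional; create them in the target on first sight.
    for key in ("cacheReadInputTokens", "cacheWriteInputTokens"):
        if key in source:
            target[key] = target.get(key, 0) + source[key]

total = {"inputTokens": 0, "outputTokens": 0, "totalTokens": 0}
accumulate_usage(total, {"inputTokens": 10, "outputTokens": 5, "totalTokens": 15})
accumulate_usage(total, {"inputTokens": 4, "outputTokens": 2, "totalTokens": 6,
                         "cacheReadInputTokens": 3})
# total now holds inputTokens=14, outputTokens=7, totalTokens=21, cacheReadInputTokens=3
print(total)
```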

latest_agent_invocation property

Get the most recent agent invocation.

Returns:

| Type | Description |
|------|-------------|
| `AgentInvocation \| None` | The most recent AgentInvocation, or None if no invocations exist. |

add_tool_usage(tool, duration, tool_trace, success, message)

Record metrics for a tool invocation.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `tool` | `ToolUse` | The tool that was used. | *required* |
| `duration` | `float` | How long the tool call took in seconds. | *required* |
| `tool_trace` | `Trace` | The trace object for this tool call. | *required* |
| `success` | `bool` | Whether the tool call was successful. | *required* |
| `message` | `Message` | The message associated with the tool call. | *required* |
Source code in strands/telemetry/metrics.py
def add_tool_usage(
    self,
    tool: ToolUse,
    duration: float,
    tool_trace: Trace,
    success: bool,
    message: Message,
) -> None:
    """Record metrics for a tool invocation.

    Args:
        tool: The tool that was used.
        duration: How long the tool call took in seconds.
        tool_trace: The trace object for this tool call.
        success: Whether the tool call was successful.
        message: The message associated with the tool call.
    """
    tool_name = tool.get("name", "unknown_tool")
    tool_use_id = tool.get("toolUseId", "unknown")

    tool_trace.metadata.update(
        {
            "toolUseId": tool_use_id,
            "tool_name": tool_name,
        }
    )
    tool_trace.raw_name = f"{tool_name} - {tool_use_id}"
    tool_trace.add_message(message)

    self.tool_metrics.setdefault(tool_name, ToolMetrics(tool)).add_call(
        tool,
        duration,
        success,
        self._metrics_client,
        attributes={
            "tool_name": tool_name,
            "tool_use_id": tool_use_id,
        },
    )
    tool_trace.end()

end_cycle(start_time, cycle_trace, attributes=None)

End the current event loop cycle and record its duration.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `start_time` | `float` | The timestamp when the cycle started. | *required* |
| `cycle_trace` | `Trace` | The trace object for this cycle. | *required* |
| `attributes` | `dict[str, Any] \| None` | Attributes of the metrics. | `None` |
Source code in strands/telemetry/metrics.py
def end_cycle(self, start_time: float, cycle_trace: Trace, attributes: dict[str, Any] | None = None) -> None:
    """End the current event loop cycle and record its duration.

    Args:
        start_time: The timestamp when the cycle started.
        cycle_trace: The trace object for this cycle.
        attributes: attributes of the metrics.
    """
    self._metrics_client.event_loop_end_cycle.add(1, attributes)
    end_time = time.time()
    duration = end_time - start_time
    self._metrics_client.event_loop_cycle_duration.record(duration, attributes)
    self.cycle_durations.append(duration)
    cycle_trace.end(end_time)
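
Together, `start_cycle` and `end_cycle` implement a simple stopwatch pattern: capture `time.time()` when the cycle begins, subtract when it ends, and append the elapsed duration. An illustrative standalone sketch, without the OpenTelemetry counters:

```python
import time

cycle_durations: list[float] = []

def start_cycle() -> float:
    """Record and return the cycle's start timestamp."""
    return time.time()

def end_cycle(start_time: float) -> float:
    """Compute the cycle duration and append it to the running list."""
    duration = time.time() - start_time
    cycle_durations.append(duration)
    return duration

t0 = start_cycle()
time.sleep(0.01)  # stand-in for one event loop cycle's work
print(f"cycle took {end_cycle(t0):.3f}s")
```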

get_summary()

Generate a comprehensive summary of all collected metrics.

Returns:

| Type | Description |
|------|-------------|
| `dict[str, Any]` | A dictionary containing summarized metrics data, including cycle statistics, tool usage, traces, and accumulated usage information. |

Source code in strands/telemetry/metrics.py
def get_summary(self) -> dict[str, Any]:
    """Generate a comprehensive summary of all collected metrics.

    Returns:
        A dictionary containing summarized metrics data.
        This includes cycle statistics, tool usage, traces, and accumulated usage information.
    """
    summary = {
        "total_cycles": self.cycle_count,
        "total_duration": sum(self.cycle_durations),
        "average_cycle_time": (sum(self.cycle_durations) / self.cycle_count if self.cycle_count > 0 else 0),
        "tool_usage": {
            tool_name: {
                "tool_info": {
                    "tool_use_id": metrics.tool.get("toolUseId", "N/A"),
                    "name": metrics.tool.get("name", "unknown"),
                    "input_params": metrics.tool.get("input", {}),
                },
                "execution_stats": {
                    "call_count": metrics.call_count,
                    "success_count": metrics.success_count,
                    "error_count": metrics.error_count,
                    "total_time": metrics.total_time,
                    "average_time": (metrics.total_time / metrics.call_count if metrics.call_count > 0 else 0),
                    "success_rate": (metrics.success_count / metrics.call_count if metrics.call_count > 0 else 0),
                },
            }
            for tool_name, metrics in self.tool_metrics.items()
        },
        "traces": [trace.to_dict() for trace in self.traces],
        "accumulated_usage": self.accumulated_usage,
        "accumulated_metrics": self.accumulated_metrics,
        "agent_invocations": [
            {
                "usage": invocation.usage,
                "cycles": [
                    {"event_loop_cycle_id": cycle.event_loop_cycle_id, "usage": cycle.usage}
                    for cycle in invocation.cycles
                ],
            }
            for invocation in self.agent_invocations
        ],
    }
    return summary
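
The guarded divisions in `get_summary` avoid a `ZeroDivisionError` when no cycles or tool calls have been recorded yet. The same pattern in isolation, with illustrative toy values:

```python
def safe_ratio(numerator: float, denominator: int) -> float:
    """Return numerator/denominator, or 0 when nothing has been recorded yet."""
    return numerator / denominator if denominator > 0 else 0

cycle_durations = [0.5, 1.5]
summary = {
    "total_cycles": len(cycle_durations),
    "total_duration": sum(cycle_durations),
    "average_cycle_time": safe_ratio(sum(cycle_durations), len(cycle_durations)),
    "success_rate": safe_ratio(3, 4),  # e.g. 3 successes out of 4 tool calls
}
print(summary["average_cycle_time"])  # 1.0
print(safe_ratio(0, 0))               # 0, not a ZeroDivisionError
```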

reset_usage_metrics()

Start a new agent invocation by creating a new AgentInvocation.

This should be called at the start of a new request to begin tracking a new agent invocation with fresh usage and cycle data.

Source code in strands/telemetry/metrics.py
def reset_usage_metrics(self) -> None:
    """Start a new agent invocation by creating a new AgentInvocation.

    This should be called at the start of a new request to begin tracking
    a new agent invocation with fresh usage and cycle data.
    """
    self.agent_invocations.append(AgentInvocation())

start_cycle(attributes)

Start a new event loop cycle and create a trace for it.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `attributes` | `dict[str, Any]` | Attributes of the metrics, including `event_loop_cycle_id`. | *required* |

Returns:

| Type | Description |
|------|-------------|
| `tuple[float, Trace]` | A tuple containing the start time and the cycle trace object. |

Source code in strands/telemetry/metrics.py
def start_cycle(
    self,
    attributes: dict[str, Any],
) -> tuple[float, Trace]:
    """Start a new event loop cycle and create a trace for it.

    Args:
        attributes: attributes of the metrics, including event_loop_cycle_id.

    Returns:
        A tuple containing the start time and the cycle trace object.
    """
    self._metrics_client.event_loop_cycle_count.add(1, attributes=attributes)
    self._metrics_client.event_loop_start_cycle.add(1, attributes=attributes)
    self.cycle_count += 1
    start_time = time.time()
    cycle_trace = Trace(f"Cycle {self.cycle_count}", start_time=start_time)
    self.traces.append(cycle_trace)

    self.agent_invocations[-1].cycles.append(
        EventLoopCycleMetric(
            event_loop_cycle_id=attributes["event_loop_cycle_id"],
            usage=Usage(inputTokens=0, outputTokens=0, totalTokens=0),
        )
    )

    return start_time, cycle_trace

update_metrics(metrics)

Update the accumulated performance metrics with new metrics data.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `metrics` | `Metrics` | The metrics data to add to the accumulated totals. | *required* |
Source code in strands/telemetry/metrics.py
def update_metrics(self, metrics: Metrics) -> None:
    """Update the accumulated performance metrics with new metrics data.

    Args:
        metrics: The metrics data to add to the accumulated totals.
    """
    self._metrics_client.event_loop_latency.record(metrics["latencyMs"])
    if metrics.get("timeToFirstByteMs") is not None:
        self._metrics_client.model_time_to_first_token.record(metrics["timeToFirstByteMs"])
    self.accumulated_metrics["latencyMs"] += metrics["latencyMs"]

update_usage(usage)

Update the accumulated token usage with new usage data.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `usage` | `Usage` | The usage data to add to the accumulated totals. | *required* |
Source code in strands/telemetry/metrics.py
def update_usage(self, usage: Usage) -> None:
    """Update the accumulated token usage with new usage data.

    Args:
        usage: The usage data to add to the accumulated totals.
    """
    # Record metrics to OpenTelemetry
    self._metrics_client.event_loop_input_tokens.record(usage["inputTokens"])
    self._metrics_client.event_loop_output_tokens.record(usage["outputTokens"])

    # Handle optional cached token metrics for OpenTelemetry
    if "cacheReadInputTokens" in usage:
        self._metrics_client.event_loop_cache_read_input_tokens.record(usage["cacheReadInputTokens"])
    if "cacheWriteInputTokens" in usage:
        self._metrics_client.event_loop_cache_write_input_tokens.record(usage["cacheWriteInputTokens"])

    self._accumulate_usage(self.accumulated_usage, usage)
    self._accumulate_usage(self.agent_invocations[-1].usage, usage)

    if self.agent_invocations[-1].cycles:
        current_cycle = self.agent_invocations[-1].cycles[-1]
        self._accumulate_usage(current_cycle.usage, usage)
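
Note that `update_usage` fans each usage record out to three accumulators at once: the lifetime total, the current invocation, and (when one exists) the current cycle. Sketched with plain dicts standing in for the SDK types:

```python
def new_usage() -> dict:
    return {"inputTokens": 0, "outputTokens": 0, "totalTokens": 0}

accumulated = new_usage()  # lifetime totals across all requests
invocations = [{"usage": new_usage(), "cycles": [{"usage": new_usage()}]}]

def update_usage(usage: dict) -> None:
    # The same record is added at every level of granularity.
    targets = [accumulated, invocations[-1]["usage"]]
    if invocations[-1]["cycles"]:
        targets.append(invocations[-1]["cycles"][-1]["usage"])
    for target in targets:
        for key in ("inputTokens", "outputTokens", "totalTokens"):
            target[key] += usage[key]

update_usage({"inputTokens": 7, "outputTokens": 3, "totalTokens": 10})
print(accumulated["totalTokens"])                               # 10
print(invocations[-1]["cycles"][-1]["usage"]["totalTokens"])    # 10
```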

EventLoopStopEvent

Bases: TypedEvent

Event emitted when the agent execution completes normally.

Source code in strands/types/_events.py
class EventLoopStopEvent(TypedEvent):
    """Event emitted when the agent execution completes normally."""

    def __init__(
        self,
        stop_reason: StopReason,
        message: Message,
        metrics: "EventLoopMetrics",
        request_state: Any,
        interrupts: Sequence[Interrupt] | None = None,
        structured_output: BaseModel | None = None,
    ) -> None:
        """Initialize with the final execution results.

        Args:
            stop_reason: Why the agent execution stopped
            message: Final message from the model
            metrics: Execution metrics and performance data
            request_state: Final state of the agent execution
            interrupts: Interrupts raised by user during agent execution.
            structured_output: Optional structured output result
        """
        super().__init__({"stop": (stop_reason, message, metrics, request_state, interrupts, structured_output)})

    @property
    @override
    def is_callback_event(self) -> bool:
        return False

__init__(stop_reason, message, metrics, request_state, interrupts=None, structured_output=None)

Initialize with the final execution results.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `stop_reason` | `StopReason` | Why the agent execution stopped. | *required* |
| `message` | `Message` | Final message from the model. | *required* |
| `metrics` | `EventLoopMetrics` | Execution metrics and performance data. | *required* |
| `request_state` | `Any` | Final state of the agent execution. | *required* |
| `interrupts` | `Sequence[Interrupt] \| None` | Interrupts raised by the user during agent execution. | `None` |
| `structured_output` | `BaseModel \| None` | Optional structured output result. | `None` |
Source code in strands/types/_events.py
def __init__(
    self,
    stop_reason: StopReason,
    message: Message,
    metrics: "EventLoopMetrics",
    request_state: Any,
    interrupts: Sequence[Interrupt] | None = None,
    structured_output: BaseModel | None = None,
) -> None:
    """Initialize with the final execution results.

    Args:
        stop_reason: Why the agent execution stopped
        message: Final message from the model
        metrics: Execution metrics and performance data
        request_state: Final state of the agent execution
        interrupts: Interrupts raised by user during agent execution.
        structured_output: Optional structured output result
    """
    super().__init__({"stop": (stop_reason, message, metrics, request_state, interrupts, structured_output)})
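
As the constructor shows, the event stores its results as a six-element tuple under the `"stop"` key, so consumers typically unpack it positionally. A hypothetical sketch of that consumption pattern, using a plain dict shaped like the event's internal payload:

```python
# Stand-in payload shaped like EventLoopStopEvent's internal dict; the values
# here are illustrative, not produced by a real agent run.
event = {"stop": ("end_turn", {"role": "assistant", "content": []}, None, {}, None, None)}

stop_reason, message, metrics, request_state, interrupts, structured_output = event["stop"]
print(stop_reason)      # end_turn
print(message["role"])  # assistant
```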

HookProvider

Bases: Protocol

Protocol for objects that provide hook callbacks to an agent.

Hook providers offer a composable way to extend agent functionality by subscribing to various events in the agent lifecycle. This protocol enables building reusable components that can hook into agent events.

Example:

```python
class MyHookProvider(HookProvider):
    def register_hooks(self, registry: HookRegistry) -> None:
        registry.add_callback(StartRequestEvent, self.on_request_start)
        registry.add_callback(EndRequestEvent, self.on_request_end)

agent = Agent(hooks=[MyHookProvider()])
```
Source code in strands/hooks/registry.py
@runtime_checkable
class HookProvider(Protocol):
    """Protocol for objects that provide hook callbacks to an agent.

    Hook providers offer a composable way to extend agent functionality by
    subscribing to various events in the agent lifecycle. This protocol enables
    building reusable components that can hook into agent events.

    Example:
        ```python
        class MyHookProvider(HookProvider):
            def register_hooks(self, registry: HookRegistry) -> None:
                registry.add_callback(StartRequestEvent, self.on_request_start)
                registry.add_callback(EndRequestEvent, self.on_request_end)

        agent = Agent(hooks=[MyHookProvider()])
        ```
    """

    def register_hooks(self, registry: "HookRegistry", **kwargs: Any) -> None:
        """Register callback functions for specific event types.

        Args:
            registry: The hook registry to register callbacks with.
            **kwargs: Additional keyword arguments for future extensibility.
        """
        ...

register_hooks(registry, **kwargs)

Register callback functions for specific event types.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `registry` | `HookRegistry` | The hook registry to register callbacks with. | *required* |
| `**kwargs` | `Any` | Additional keyword arguments for future extensibility. | `{}` |
Source code in strands/hooks/registry.py
def register_hooks(self, registry: "HookRegistry", **kwargs: Any) -> None:
    """Register callback functions for specific event types.

    Args:
        registry: The hook registry to register callbacks with.
        **kwargs: Additional keyword arguments for future extensibility.
    """
    ...

HookRegistry

Registry for managing hook callbacks associated with event types.

The HookRegistry maintains a mapping of event types to callback functions and provides methods for registering callbacks and invoking them when events occur.

The registry handles callback ordering, including reverse ordering for cleanup events, and provides type-safe event dispatching.

Source code in strands/hooks/registry.py
class HookRegistry:
    """Registry for managing hook callbacks associated with event types.

    The HookRegistry maintains a mapping of event types to callback functions
    and provides methods for registering callbacks and invoking them when
    events occur.

    The registry handles callback ordering, including reverse ordering for
    cleanup events, and provides type-safe event dispatching.
    """

    def __init__(self) -> None:
        """Initialize an empty hook registry."""
        self._registered_callbacks: dict[type, list[HookCallback]] = {}

    def add_callback(self, event_type: type[TEvent], callback: HookCallback[TEvent]) -> None:
        """Register a callback function for a specific event type.

        Args:
            event_type: The class type of events this callback should handle.
            callback: The callback function to invoke when events of this type occur.

        Example:
            ```python
            def my_handler(event: StartRequestEvent):
                print("Request started")

            registry.add_callback(StartRequestEvent, my_handler)
            ```
        """
        # Related issue: https://github.com/strands-agents/sdk-python/issues/330
        if event_type.__name__ == "AgentInitializedEvent" and inspect.iscoroutinefunction(callback):
            raise ValueError("AgentInitializedEvent can only be registered with a synchronous callback")

        callbacks = self._registered_callbacks.setdefault(event_type, [])
        callbacks.append(callback)

    def add_hook(self, hook: HookProvider) -> None:
        """Register all callbacks from a hook provider.

        This method allows bulk registration of callbacks by delegating to
        the hook provider's register_hooks method. This is the preferred
        way to register multiple related callbacks.

        Args:
            hook: The hook provider containing callbacks to register.

        Example:
            ```python
            class MyHooks(HookProvider):
                def register_hooks(self, registry: HookRegistry):
                    registry.add_callback(StartRequestEvent, self.on_start)
                    registry.add_callback(EndRequestEvent, self.on_end)

            registry.add_hook(MyHooks())
            ```
        """
        hook.register_hooks(self)

    async def invoke_callbacks_async(self, event: TInvokeEvent) -> tuple[TInvokeEvent, list[Interrupt]]:
        """Invoke all registered callbacks for the given event.

        This method finds all callbacks registered for the event's type and
        invokes them in the appropriate order. For events with should_reverse_callbacks=True,
        callbacks are invoked in reverse registration order. Any exceptions raised by callback
        functions will propagate to the caller.

        Additionally, this method aggregates interrupts raised by the user to instantiate human-in-the-loop workflows.

        Args:
            event: The event to dispatch to registered callbacks.

        Returns:
            The event dispatched to registered callbacks and any interrupts raised by the user.

        Raises:
            ValueError: If interrupt name is used more than once.

        Example:
            ```python
            event = StartRequestEvent(agent=my_agent)
            await registry.invoke_callbacks_async(event)
            ```
        """
        interrupts: dict[str, Interrupt] = {}

        for callback in self.get_callbacks_for(event):
            try:
                if inspect.iscoroutinefunction(callback):
                    await callback(event)
                else:
                    callback(event)

            except InterruptException as exception:
                interrupt = exception.interrupt
                if interrupt.name in interrupts:
                    message = f"interrupt_name=<{interrupt.name}> | interrupt name used more than once"
                    logger.error(message)
                    raise ValueError(message) from exception

                # Each callback is allowed to raise their own interrupt.
                interrupts[interrupt.name] = interrupt

        return event, list(interrupts.values())

    def invoke_callbacks(self, event: TInvokeEvent) -> tuple[TInvokeEvent, list[Interrupt]]:
        """Invoke all registered callbacks for the given event.

        This method finds all callbacks registered for the event's type and
        invokes them in the appropriate order. For events with should_reverse_callbacks=True,
        callbacks are invoked in reverse registration order. Any exceptions raised by callback
        functions will propagate to the caller.

        Additionally, this method aggregates interrupts raised by the user to instantiate human-in-the-loop workflows.

        Args:
            event: The event to dispatch to registered callbacks.

        Returns:
            The event dispatched to registered callbacks and any interrupts raised by the user.

        Raises:
            RuntimeError: If at least one callback is async.
            ValueError: If interrupt name is used more than once.

        Example:
            ```python
            event = StartRequestEvent(agent=my_agent)
            registry.invoke_callbacks(event)
            ```
        """
        callbacks = list(self.get_callbacks_for(event))
        interrupts: dict[str, Interrupt] = {}

        if any(inspect.iscoroutinefunction(callback) for callback in callbacks):
            raise RuntimeError(f"event=<{event}> | use invoke_callbacks_async to invoke async callback")

        for callback in callbacks:
            try:
                callback(event)
            except InterruptException as exception:
                interrupt = exception.interrupt
                if interrupt.name in interrupts:
                    message = f"interrupt_name=<{interrupt.name}> | interrupt name used more than once"
                    logger.error(message)
                    raise ValueError(message) from exception

                # Each callback is allowed to raise their own interrupt.
                interrupts[interrupt.name] = interrupt

        return event, list(interrupts.values())

    def has_callbacks(self) -> bool:
        """Check if the registry has any registered callbacks.

        Returns:
            True if there are any registered callbacks, False otherwise.

        Example:
            ```python
            if registry.has_callbacks():
                print("Registry has callbacks registered")
            ```
        """
        return bool(self._registered_callbacks)

    def get_callbacks_for(self, event: TEvent) -> Generator[HookCallback[TEvent], None, None]:
        """Get callbacks registered for the given event in the appropriate order.

        This method returns callbacks in registration order for normal events,
        or reverse registration order for events that have should_reverse_callbacks=True.
        This enables proper cleanup ordering for teardown events.

        Args:
            event: The event to get callbacks for.

        Yields:
            Callback functions registered for this event type, in the appropriate order.

        Example:
            ```python
            event = EndRequestEvent(agent=my_agent)
            for callback in registry.get_callbacks_for(event):
                callback(event)
            ```
        """
        event_type = type(event)

        callbacks = self._registered_callbacks.get(event_type, [])
        if event.should_reverse_callbacks:
            yield from reversed(callbacks)
        else:
            yield from callbacks
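
Callback ordering is the registry's key behavior: normal events fire in registration order, while cleanup-style events (`should_reverse_callbacks=True`) fire in reverse, so the last component set up is the first torn down. A minimal standalone mimic of that dispatch rule (the `MiniRegistry` and `EndRequestEvent` names here are illustrative, not part of the SDK):

```python
class MiniRegistry:
    def __init__(self):
        self._callbacks = {}  # event type -> callbacks, in registration order

    def add_callback(self, event_type, callback):
        self._callbacks.setdefault(event_type, []).append(callback)

    def invoke(self, event):
        callbacks = self._callbacks.get(type(event), [])
        # Cleanup events run in reverse registration order.
        if getattr(event, "should_reverse_callbacks", False):
            callbacks = list(reversed(callbacks))
        for callback in callbacks:
            callback(event)

class EndRequestEvent:
    should_reverse_callbacks = True  # teardown event: reverse ordering

order = []
registry = MiniRegistry()
registry.add_callback(EndRequestEvent, lambda e: order.append("first-registered"))
registry.add_callback(EndRequestEvent, lambda e: order.append("second-registered"))
registry.invoke(EndRequestEvent())
print(order)  # ['second-registered', 'first-registered']
```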

__init__()

Initialize an empty hook registry.

Source code in strands/hooks/registry.py
def __init__(self) -> None:
    """Initialize an empty hook registry."""
    self._registered_callbacks: dict[type, list[HookCallback]] = {}

add_callback(event_type, callback)

Register a callback function for a specific event type.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `event_type` | `type[TEvent]` | The class type of events this callback should handle. | *required* |
| `callback` | `HookCallback[TEvent]` | The callback function to invoke when events of this type occur. | *required* |
Example:

```python
def my_handler(event: StartRequestEvent):
    print("Request started")

registry.add_callback(StartRequestEvent, my_handler)
```
Source code in strands/hooks/registry.py
def add_callback(self, event_type: type[TEvent], callback: HookCallback[TEvent]) -> None:
    """Register a callback function for a specific event type.

    Args:
        event_type: The class type of events this callback should handle.
        callback: The callback function to invoke when events of this type occur.

    Example:
        ```python
        def my_handler(event: StartRequestEvent):
            print("Request started")

        registry.add_callback(StartRequestEvent, my_handler)
        ```
    """
    # Related issue: https://github.com/strands-agents/sdk-python/issues/330
    if event_type.__name__ == "AgentInitializedEvent" and inspect.iscoroutinefunction(callback):
        raise ValueError("AgentInitializedEvent can only be registered with a synchronous callback")

    callbacks = self._registered_callbacks.setdefault(event_type, [])
    callbacks.append(callback)
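
The async-callback restriction in the source above can be illustrated with a small standalone check (`check_callback` is a hypothetical helper for illustration, not an SDK function):

```python
import inspect

def check_callback(event_type_name: str, callback) -> None:
    # Mirrors the guard in the source above: init-time events cannot
    # await coroutine callbacks, so they must be synchronous.
    if event_type_name == "AgentInitializedEvent" and inspect.iscoroutinefunction(callback):
        raise ValueError("AgentInitializedEvent can only be registered with a synchronous callback")

async def async_handler(event):
    ...

def sync_handler(event):
    ...

check_callback("AgentInitializedEvent", sync_handler)  # accepted: synchronous
```

Passing `async_handler` instead raises `ValueError`, matching the guard linked to issue #330.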

add_hook(hook)

Register all callbacks from a hook provider.

This method allows bulk registration of callbacks by delegating to the hook provider's register_hooks method. This is the preferred way to register multiple related callbacks.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `hook` | `HookProvider` | The hook provider containing callbacks to register. | *required* |
Example

```python
class MyHooks(HookProvider):
    def register_hooks(self, registry: HookRegistry):
        registry.add_callback(StartRequestEvent, self.on_start)
        registry.add_callback(EndRequestEvent, self.on_end)

registry.add_hook(MyHooks())
```
Source code in strands/hooks/registry.py
def add_hook(self, hook: HookProvider) -> None:
    """Register all callbacks from a hook provider.

    This method allows bulk registration of callbacks by delegating to
    the hook provider's register_hooks method. This is the preferred
    way to register multiple related callbacks.

    Args:
        hook: The hook provider containing callbacks to register.

    Example:
        ```python
        class MyHooks(HookProvider):
            def register_hooks(self, registry: HookRegistry):
                registry.add_callback(StartRequestEvent, self.on_start)
                registry.add_callback(EndRequestEvent, self.on_end)

        registry.add_hook(MyHooks())
        ```
    """
    hook.register_hooks(self)

get_callbacks_for(event)

Get callbacks registered for the given event in the appropriate order.

This method returns callbacks in registration order for normal events, or reverse registration order for events that have should_reverse_callbacks=True. This enables proper cleanup ordering for teardown events.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `event` | `TEvent` | The event to get callbacks for. | *required* |

Yields:

| Type | Description |
|------|-------------|
| `HookCallback[TEvent]` | Callback functions registered for this event type, in the appropriate order. |

Example

```python
event = EndRequestEvent(agent=my_agent)
for callback in registry.get_callbacks_for(event):
    callback(event)
```
Source code in strands/hooks/registry.py
def get_callbacks_for(self, event: TEvent) -> Generator[HookCallback[TEvent], None, None]:
    """Get callbacks registered for the given event in the appropriate order.

    This method returns callbacks in registration order for normal events,
    or reverse registration order for events that have should_reverse_callbacks=True.
    This enables proper cleanup ordering for teardown events.

    Args:
        event: The event to get callbacks for.

    Yields:
        Callback functions registered for this event type, in the appropriate order.

    Example:
        ```python
        event = EndRequestEvent(agent=my_agent)
        for callback in registry.get_callbacks_for(event):
            callback(event)
        ```
    """
    event_type = type(event)

    callbacks = self._registered_callbacks.get(event_type, [])
    if event.should_reverse_callbacks:
        yield from reversed(callbacks)
    else:
        yield from callbacks

has_callbacks()

Check if the registry has any registered callbacks.

Returns:

| Type | Description |
|------|-------------|
| `bool` | True if there are any registered callbacks, False otherwise. |

Example

```python
if registry.has_callbacks():
    print("Registry has callbacks registered")
```
Source code in strands/hooks/registry.py
def has_callbacks(self) -> bool:
    """Check if the registry has any registered callbacks.

    Returns:
        True if there are any registered callbacks, False otherwise.

    Example:
        ```python
        if registry.has_callbacks():
            print("Registry has callbacks registered")
        ```
    """
    return bool(self._registered_callbacks)

invoke_callbacks(event)

Invoke all registered callbacks for the given event.

This method finds all callbacks registered for the event's type and invokes them in the appropriate order. For events with should_reverse_callbacks=True, callbacks are invoked in reverse registration order. Any exceptions raised by callback functions will propagate to the caller.

Additionally, this method aggregates interrupts raised by the user to instantiate human-in-the-loop workflows.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `event` | `TInvokeEvent` | The event to dispatch to registered callbacks. | *required* |

Returns:

| Type | Description |
|------|-------------|
| `tuple[TInvokeEvent, list[Interrupt]]` | The event dispatched to registered callbacks and any interrupts raised by the user. |

Raises:

| Type | Description |
|------|-------------|
| `RuntimeError` | If at least one callback is async. |
| `ValueError` | If an interrupt name is used more than once. |

Example

```python
event = StartRequestEvent(agent=my_agent)
registry.invoke_callbacks(event)
```
Source code in strands/hooks/registry.py
def invoke_callbacks(self, event: TInvokeEvent) -> tuple[TInvokeEvent, list[Interrupt]]:
    """Invoke all registered callbacks for the given event.

    This method finds all callbacks registered for the event's type and
    invokes them in the appropriate order. For events with should_reverse_callbacks=True,
    callbacks are invoked in reverse registration order. Any exceptions raised by callback
    functions will propagate to the caller.

    Additionally, this method aggregates interrupts raised by the user to instantiate human-in-the-loop workflows.

    Args:
        event: The event to dispatch to registered callbacks.

    Returns:
        The event dispatched to registered callbacks and any interrupts raised by the user.

    Raises:
        RuntimeError: If at least one callback is async.
        ValueError: If interrupt name is used more than once.

    Example:
        ```python
        event = StartRequestEvent(agent=my_agent)
        registry.invoke_callbacks(event)
        ```
    """
    callbacks = list(self.get_callbacks_for(event))
    interrupts: dict[str, Interrupt] = {}

    if any(inspect.iscoroutinefunction(callback) for callback in callbacks):
        raise RuntimeError(f"event=<{event}> | use invoke_callbacks_async to invoke async callback")

    for callback in callbacks:
        try:
            callback(event)
        except InterruptException as exception:
            interrupt = exception.interrupt
            if interrupt.name in interrupts:
                message = f"interrupt_name=<{interrupt.name}> | interrupt name used more than once"
                logger.error(message)
                raise ValueError(message) from exception

            # Each callback is allowed to raise their own interrupt.
            interrupts[interrupt.name] = interrupt

    return event, list(interrupts.values())
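
The duplicate-name rule in the loop above can be sketched on its own: each callback may raise one interrupt, and reusing a name across callbacks is rejected. `aggregate_interrupts` below is an illustrative helper, not an SDK function.

```python
def aggregate_interrupts(raised_names):
    # Illustrative: collect one interrupt per unique name, reject duplicates,
    # mirroring the interrupt-aggregation loop in the source above.
    interrupts = {}
    for name in raised_names:
        if name in interrupts:
            raise ValueError(f"interrupt_name=<{name}> | interrupt name used more than once")
        interrupts[name] = name
    return list(interrupts.values())

print(aggregate_interrupts(["approve_tool", "confirm_budget"]))
```

Two callbacks raising interrupts with distinct names both survive aggregation; a repeated name surfaces as a `ValueError`, since the registry cannot tell the two interrupts apart when resuming a human-in-the-loop workflow.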

invoke_callbacks_async(event) async

Invoke all registered callbacks for the given event.

This method finds all callbacks registered for the event's type and invokes them in the appropriate order. For events with should_reverse_callbacks=True, callbacks are invoked in reverse registration order. Any exceptions raised by callback functions will propagate to the caller.

Additionally, this method aggregates interrupts raised by the user to instantiate human-in-the-loop workflows.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `event` | `TInvokeEvent` | The event to dispatch to registered callbacks. | *required* |

Returns:

| Type | Description |
|------|-------------|
| `tuple[TInvokeEvent, list[Interrupt]]` | The event dispatched to registered callbacks and any interrupts raised by the user. |

Raises:

| Type | Description |
|------|-------------|
| `ValueError` | If an interrupt name is used more than once. |

Example

```python
event = StartRequestEvent(agent=my_agent)
await registry.invoke_callbacks_async(event)
```
Source code in strands/hooks/registry.py
async def invoke_callbacks_async(self, event: TInvokeEvent) -> tuple[TInvokeEvent, list[Interrupt]]:
    """Invoke all registered callbacks for the given event.

    This method finds all callbacks registered for the event's type and
    invokes them in the appropriate order. For events with should_reverse_callbacks=True,
    callbacks are invoked in reverse registration order. Any exceptions raised by callback
    functions will propagate to the caller.

    Additionally, this method aggregates interrupts raised by the user to instantiate human-in-the-loop workflows.

    Args:
        event: The event to dispatch to registered callbacks.

    Returns:
        The event dispatched to registered callbacks and any interrupts raised by the user.

    Raises:
        ValueError: If interrupt name is used more than once.

    Example:
        ```python
        event = StartRequestEvent(agent=my_agent)
        await registry.invoke_callbacks_async(event)
        ```
    """
    interrupts: dict[str, Interrupt] = {}

    for callback in self.get_callbacks_for(event):
        try:
            if inspect.iscoroutinefunction(callback):
                await callback(event)
            else:
                callback(event)

        except InterruptException as exception:
            interrupt = exception.interrupt
            if interrupt.name in interrupts:
                message = f"interrupt_name=<{interrupt.name}> | interrupt name used more than once"
                logger.error(message)
                raise ValueError(message) from exception

            # Each callback is allowed to raise their own interrupt.
            interrupts[interrupt.name] = interrupt

    return event, list(interrupts.values())
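
The mixed sync/async dispatch in the loop above can be reduced to a few lines. This standalone sketch (not the SDK's implementation) awaits coroutine callbacks and calls synchronous ones directly:

```python
import asyncio
import inspect

async def dispatch(callbacks, event):
    # Sketch of the async path: await coroutine callbacks, call sync ones directly.
    for cb in callbacks:
        if inspect.iscoroutinefunction(cb):
            await cb(event)
        else:
            cb(event)

calls = []

def on_event_sync(event):
    calls.append(("sync", event))

async def on_event_async(event):
    calls.append(("async", event))

asyncio.run(dispatch([on_event_sync, on_event_async], "start"))
```

Both handlers run in registration order regardless of whether they are coroutines, which is why only the async entry point can accept async callbacks.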

InitEventLoopEvent

Bases: TypedEvent

Event emitted at the very beginning of agent execution.

This event is fired before any processing begins and provides access to the initial invocation state.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `invocation_state` | | The invocation state passed into the request | *required* |
Source code in strands/types/_events.py
class InitEventLoopEvent(TypedEvent):
    """Event emitted at the very beginning of agent execution.

    This event is fired before any processing begins and provides access to the
    initial invocation state.

    Args:
            invocation_state: The invocation state passed into the request
    """

    def __init__(self) -> None:
        """Initialize the event loop initialization event."""
        super().__init__({"init_event_loop": True})

    @override
    def prepare(self, invocation_state: dict) -> None:
        self.update(invocation_state)

__init__()

Initialize the event loop initialization event.

Source code in strands/types/_events.py
def __init__(self) -> None:
    """Initialize the event loop initialization event."""
    super().__init__({"init_event_loop": True})

Message

Bases: TypedDict

A message in a conversation with the agent.

Attributes:

| Name | Type | Description |
|------|------|-------------|
| `content` | `list[ContentBlock]` | The message content. |
| `role` | `Role` | The role of the message sender. |

Source code in strands/types/content.py
class Message(TypedDict):
    """A message in a conversation with the agent.

    Attributes:
        content: The message content.
        role: The role of the message sender.
    """

    content: list[ContentBlock]
    role: Role
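
Since `Message` is a `TypedDict`, a plain dict of the right shape satisfies it. A minimal sketch, assuming a text content block (the exact `ContentBlock` fields beyond `text` are not shown here):

```python
# A plain dict matching the Message shape: a role plus a list of content blocks.
message = {
    "role": "user",
    "content": [{"text": "Analyze this data"}],
}
```

A conversation (`Messages`) is then just a list of such dicts, alternating `"user"` and `"assistant"` roles.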

MessageAddedEvent dataclass

Bases: HookEvent

Event triggered when a message is added to the agent's conversation.

This event is fired whenever the agent adds a new message to its internal message history, including user messages, assistant responses, and tool results. Hook providers can use this event for logging, monitoring, or implementing custom message processing logic.

Note: This event is only triggered for messages added by the framework itself, not for messages manually added by tools or external code.

Attributes:

| Name | Type | Description |
|------|------|-------------|
| `message` | `Message` | The message that was added to the conversation history. |

Source code in strands/hooks/events.py
@dataclass
class MessageAddedEvent(HookEvent):
    """Event triggered when a message is added to the agent's conversation.

    This event is fired whenever the agent adds a new message to its internal
    message history, including user messages, assistant responses, and tool
    results. Hook providers can use this event for logging, monitoring, or
    implementing custom message processing logic.

    Note: This event is only triggered for messages added by the framework
    itself, not for messages manually added by tools or external code.

    Attributes:
        message: The message that was added to the conversation history.
    """

    message: Message

Model

Bases: ABC

Abstract base class for Agent model providers.

This class defines the interface for all model implementations in the Strands Agents SDK. It provides a standardized way to configure and process requests for different AI model providers.

Source code in strands/models/model.py
class Model(abc.ABC):
    """Abstract base class for Agent model providers.

    This class defines the interface for all model implementations in the Strands Agents SDK. It provides a
    standardized way to configure and process requests for different AI model providers.
    """

    @abc.abstractmethod
    # pragma: no cover
    def update_config(self, **model_config: Any) -> None:
        """Update the model configuration with the provided arguments.

        Args:
            **model_config: Configuration overrides.
        """
        pass

    @abc.abstractmethod
    # pragma: no cover
    def get_config(self) -> Any:
        """Return the model configuration.

        Returns:
            The model's configuration.
        """
        pass

    @abc.abstractmethod
    # pragma: no cover
    def structured_output(
        self, output_model: type[T], prompt: Messages, system_prompt: str | None = None, **kwargs: Any
    ) -> AsyncGenerator[dict[str, T | Any], None]:
        """Get structured output from the model.

        Args:
            output_model: The output model to use for the agent.
            prompt: The prompt messages to use for the agent.
            system_prompt: System prompt to provide context to the model.
            **kwargs: Additional keyword arguments for future extensibility.

        Yields:
            Model events with the last being the structured output.

        Raises:
            ValidationException: The response format from the model does not match the output_model
        """
        pass

    @abc.abstractmethod
    # pragma: no cover
    def stream(
        self,
        messages: Messages,
        tool_specs: list[ToolSpec] | None = None,
        system_prompt: str | None = None,
        *,
        tool_choice: ToolChoice | None = None,
        system_prompt_content: list[SystemContentBlock] | None = None,
        invocation_state: dict[str, Any] | None = None,
        **kwargs: Any,
    ) -> AsyncIterable[StreamEvent]:
        """Stream conversation with the model.

        This method handles the full lifecycle of conversing with the model:

        1. Format the messages, tool specs, and configuration into a streaming request
        2. Send the request to the model
        3. Yield the formatted message chunks

        Args:
            messages: List of message objects to be processed by the model.
            tool_specs: List of tool specifications to make available to the model.
            system_prompt: System prompt to provide context to the model.
            tool_choice: Selection strategy for tool invocation.
            system_prompt_content: System prompt content blocks for advanced features like caching.
            invocation_state: Caller-provided state/context that was passed to the agent when it was invoked.
            **kwargs: Additional keyword arguments for future extensibility.

        Yields:
            Formatted message chunks from the model.

        Raises:
            ModelThrottledException: When the model service is throttling requests from the client.
        """
        pass
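
As a rough illustration of the streaming contract, a provider implementation is an async generator that yields chunk dicts. `EchoModel` below is invented for illustration and omits tool specs, system prompts, and the full `StreamEvent` schema that a real provider must emit:

```python
import asyncio

class EchoModel:
    """Toy 'provider' that streams the last user message back word by word."""

    async def stream(self, messages, **kwargs):
        # A real implementation would format a request, call the provider's
        # API, and translate its wire events into StreamEvent chunks.
        text = messages[-1]["content"][0]["text"]
        for word in text.split():
            yield {"contentBlockDelta": {"delta": {"text": word + " "}}}

async def collect():
    model = EchoModel()
    chunks = []
    async for chunk in model.stream([{"role": "user", "content": [{"text": "hello world"}]}]):
        chunks.append(chunk["contentBlockDelta"]["delta"]["text"])
    return chunks

print(asyncio.run(collect()))
```

The event loop consumes these chunks incrementally, which is what lets callback handlers print partial output before the full response is assembled.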

get_config() abstractmethod

Return the model configuration.

Returns:

| Type | Description |
|------|-------------|
| `Any` | The model's configuration. |

Source code in strands/models/model.py
@abc.abstractmethod
# pragma: no cover
def get_config(self) -> Any:
    """Return the model configuration.

    Returns:
        The model's configuration.
    """
    pass

stream(messages, tool_specs=None, system_prompt=None, *, tool_choice=None, system_prompt_content=None, invocation_state=None, **kwargs) abstractmethod

Stream conversation with the model.

This method handles the full lifecycle of conversing with the model:

  1. Format the messages, tool specs, and configuration into a streaming request
  2. Send the request to the model
  3. Yield the formatted message chunks

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `messages` | `Messages` | List of message objects to be processed by the model. | *required* |
| `tool_specs` | `list[ToolSpec] \| None` | List of tool specifications to make available to the model. | `None` |
| `system_prompt` | `str \| None` | System prompt to provide context to the model. | `None` |
| `tool_choice` | `ToolChoice \| None` | Selection strategy for tool invocation. | `None` |
| `system_prompt_content` | `list[SystemContentBlock] \| None` | System prompt content blocks for advanced features like caching. | `None` |
| `invocation_state` | `dict[str, Any] \| None` | Caller-provided state/context that was passed to the agent when it was invoked. | `None` |
| `**kwargs` | `Any` | Additional keyword arguments for future extensibility. | `{}` |

Yields:

| Type | Description |
|------|-------------|
| `AsyncIterable[StreamEvent]` | Formatted message chunks from the model. |

Raises:

| Type | Description |
|------|-------------|
| `ModelThrottledException` | When the model service is throttling requests from the client. |

Source code in strands/models/model.py
@abc.abstractmethod
# pragma: no cover
def stream(
    self,
    messages: Messages,
    tool_specs: list[ToolSpec] | None = None,
    system_prompt: str | None = None,
    *,
    tool_choice: ToolChoice | None = None,
    system_prompt_content: list[SystemContentBlock] | None = None,
    invocation_state: dict[str, Any] | None = None,
    **kwargs: Any,
) -> AsyncIterable[StreamEvent]:
    """Stream conversation with the model.

    This method handles the full lifecycle of conversing with the model:

    1. Format the messages, tool specs, and configuration into a streaming request
    2. Send the request to the model
    3. Yield the formatted message chunks

    Args:
        messages: List of message objects to be processed by the model.
        tool_specs: List of tool specifications to make available to the model.
        system_prompt: System prompt to provide context to the model.
        tool_choice: Selection strategy for tool invocation.
        system_prompt_content: System prompt content blocks for advanced features like caching.
        invocation_state: Caller-provided state/context that was passed to the agent when it was invoked.
        **kwargs: Additional keyword arguments for future extensibility.

    Yields:
        Formatted message chunks from the model.

    Raises:
        ModelThrottledException: When the model service is throttling requests from the client.
    """
    pass

structured_output(output_model, prompt, system_prompt=None, **kwargs) abstractmethod

Get structured output from the model.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `output_model` | `type[T]` | The output model to use for the agent. | *required* |
| `prompt` | `Messages` | The prompt messages to use for the agent. | *required* |
| `system_prompt` | `str \| None` | System prompt to provide context to the model. | `None` |
| `**kwargs` | `Any` | Additional keyword arguments for future extensibility. | `{}` |

Yields:

| Type | Description |
|------|-------------|
| `AsyncGenerator[dict[str, T \| Any], None]` | Model events with the last being the structured output. |

Raises:

| Type | Description |
|------|-------------|
| `ValidationException` | The response format from the model does not match the output_model. |

Source code in strands/models/model.py
@abc.abstractmethod
# pragma: no cover
def structured_output(
    self, output_model: type[T], prompt: Messages, system_prompt: str | None = None, **kwargs: Any
) -> AsyncGenerator[dict[str, T | Any], None]:
    """Get structured output from the model.

    Args:
        output_model: The output model to use for the agent.
        prompt: The prompt messages to use for the agent.
        system_prompt: System prompt to provide context to the model.
        **kwargs: Additional keyword arguments for future extensibility.

    Yields:
        Model events with the last being the structured output.

    Raises:
        ValidationException: The response format from the model does not match the output_model
    """
    pass

update_config(**model_config) abstractmethod

Update the model configuration with the provided arguments.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `**model_config` | `Any` | Configuration overrides. | `{}` |
Source code in strands/models/model.py
@abc.abstractmethod
# pragma: no cover
def update_config(self, **model_config: Any) -> None:
    """Update the model configuration with the provided arguments.

    Args:
        **model_config: Configuration overrides.
    """
    pass

ModelRetryStrategy

Bases: HookProvider

Default retry strategy for model throttling with exponential backoff.

Retries model calls on ModelThrottledException using exponential backoff. The delay doubles after each attempt: `initial_delay`, `initial_delay*2`, `initial_delay*4`, etc., capped at `max_delay`. State resets after successful calls.

With defaults (initial_delay=4, max_delay=240, max_attempts=6), delays are: 4s → 8s → 16s → 32s → 64s (5 retries before giving up on the 6th attempt).
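
The quoted schedule follows directly from the backoff formula used in the source below, `min(initial_delay * 2**attempt, max_delay)`. A standalone sketch that reproduces the numbers:

```python
def calculate_delay(attempt: int, initial_delay: int = 4, max_delay: int = 240) -> int:
    # Exponential backoff capped at max_delay, with attempt 0-indexed
    # as in ModelRetryStrategy._calculate_delay.
    return min(initial_delay * (2 ** attempt), max_delay)

delays = [calculate_delay(a) for a in range(5)]
print(delays)  # the five retry delays with default settings
```

With defaults this yields 4, 8, 16, 32, and 64 seconds; the `max_delay` cap of 240s would only engage if `max_attempts` were raised above its default of 6.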

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `max_attempts` | `int` | Total model attempts before re-raising the exception. | `6` |
| `initial_delay` | `int` | Base delay in seconds; used for first two retries, then doubles. | `4` |
| `max_delay` | `int` | Upper bound in seconds for the exponential backoff. | `240` |
Source code in strands/event_loop/_retry.py
class ModelRetryStrategy(HookProvider):
    """Default retry strategy for model throttling with exponential backoff.

    Retries model calls on ModelThrottledException using exponential backoff.
    Delay doubles after each attempt: initial_delay, initial_delay*2, initial_delay*4,
    etc., capped at max_delay. State resets after successful calls.

    With defaults (initial_delay=4, max_delay=240, max_attempts=6), delays are:
    4s → 8s → 16s → 32s → 64s (5 retries before giving up on the 6th attempt).

    Args:
        max_attempts: Total model attempts before re-raising the exception.
        initial_delay: Base delay in seconds; used for first two retries, then doubles.
        max_delay: Upper bound in seconds for the exponential backoff.
    """

    def __init__(
        self,
        *,
        max_attempts: int = 6,
        initial_delay: int = 4,
        max_delay: int = 240,
    ):
        """Initialize the retry strategy.

        Args:
            max_attempts: Total model attempts before re-raising the exception. Defaults to 6.
            initial_delay: Base delay in seconds; used for first two retries, then doubles.
                Defaults to 4.
            max_delay: Upper bound in seconds for the exponential backoff. Defaults to 240.
        """
        self._max_attempts = max_attempts
        self._initial_delay = initial_delay
        self._max_delay = max_delay
        self._current_attempt = 0
        self._backwards_compatible_event_to_yield: TypedEvent | None = None

    def register_hooks(self, registry: HookRegistry, **kwargs: Any) -> None:
        """Register callbacks for AfterModelCallEvent and AfterInvocationEvent.

        Args:
            registry: The hook registry to register callbacks with.
            **kwargs: Additional keyword arguments for future extensibility.
        """
        registry.add_callback(AfterModelCallEvent, self._handle_after_model_call)
        registry.add_callback(AfterInvocationEvent, self._handle_after_invocation)

    def _calculate_delay(self, attempt: int) -> int:
        """Calculate retry delay using exponential backoff.

        Args:
            attempt: The attempt number (0-indexed) to calculate delay for.

        Returns:
            Delay in seconds for the given attempt.
        """
        delay: int = self._initial_delay * (2**attempt)
        return min(delay, self._max_delay)

    def _reset_retry_state(self) -> None:
        """Reset retry state to initial values."""
        self._current_attempt = 0

    async def _handle_after_invocation(self, event: AfterInvocationEvent) -> None:
        """Reset retry state after invocation completes.

        Args:
            event: The AfterInvocationEvent signaling invocation completion.
        """
        self._reset_retry_state()

    async def _handle_after_model_call(self, event: AfterModelCallEvent) -> None:
        """Handle model call completion and determine if retry is needed.

        This callback is invoked after each model call. If the call failed with
        a ModelThrottledException and we haven't exceeded max_attempts, it sets
        event.retry to True and sleeps for the current delay before returning.

        On successful calls, it resets the retry state to prepare for future calls.

        Args:
            event: The AfterModelCallEvent containing call results or exception.
        """
        delay = self._calculate_delay(self._current_attempt)

        self._backwards_compatible_event_to_yield = None

        # If already retrying, skip processing (another hook may have triggered retry)
        if event.retry:
            return

        # If model call succeeded, reset retry state
        if event.stop_response is not None:
            logger.debug(
                "stop_reason=<%s> | model call succeeded, resetting retry state",
                event.stop_response.stop_reason,
            )
            self._reset_retry_state()
            return

        # Check if we have an exception and reset state if no exception
        if event.exception is None:
            self._reset_retry_state()
            return

        # Only retry on ModelThrottledException
        if not isinstance(event.exception, ModelThrottledException):
            return

        # Increment attempt counter first
        self._current_attempt += 1

        # Check if we've exceeded max attempts
        if self._current_attempt >= self._max_attempts:
            logger.debug(
                "current_attempt=<%d>, max_attempts=<%d> | max retry attempts reached, not retrying",
                self._current_attempt,
                self._max_attempts,
            )
            return

        self._backwards_compatible_event_to_yield = EventLoopThrottleEvent(delay=delay)

        # Retry the model call
        logger.debug(
            "retry_delay_seconds=<%s>, max_attempts=<%s>, current_attempt=<%s> "
            "| throttling exception encountered | delaying before next retry",
            delay,
            self._max_attempts,
            self._current_attempt,
        )

        # Sleep for current delay
        await asyncio.sleep(delay)

        # Set retry flag and track that this strategy triggered it
        event.retry = True

__init__(*, max_attempts=6, initial_delay=4, max_delay=240)

Initialize the retry strategy.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `max_attempts` | `int` | Total model attempts before re-raising the exception. | `6` |
| `initial_delay` | `int` | Base delay in seconds; used for first two retries, then doubles. | `4` |
| `max_delay` | `int` | Upper bound in seconds for the exponential backoff. | `240` |
Source code in strands/event_loop/_retry.py
def __init__(
    self,
    *,
    max_attempts: int = 6,
    initial_delay: int = 4,
    max_delay: int = 240,
):
    """Initialize the retry strategy.

    Args:
        max_attempts: Total model attempts before re-raising the exception. Defaults to 6.
        initial_delay: Base delay in seconds; used for first two retries, then doubles.
            Defaults to 4.
        max_delay: Upper bound in seconds for the exponential backoff. Defaults to 240.
    """
    self._max_attempts = max_attempts
    self._initial_delay = initial_delay
    self._max_delay = max_delay
    self._current_attempt = 0
    self._backwards_compatible_event_to_yield: TypedEvent | None = None

register_hooks(registry, **kwargs)

Register callbacks for AfterModelCallEvent and AfterInvocationEvent.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `registry` | `HookRegistry` | The hook registry to register callbacks with. | *required* |
| `**kwargs` | `Any` | Additional keyword arguments for future extensibility. | `{}` |
Source code in strands/event_loop/_retry.py
def register_hooks(self, registry: HookRegistry, **kwargs: Any) -> None:
    """Register callbacks for AfterModelCallEvent and AfterInvocationEvent.

    Args:
        registry: The hook registry to register callbacks with.
        **kwargs: Additional keyword arguments for future extensibility.
    """
    registry.add_callback(AfterModelCallEvent, self._handle_after_model_call)
    registry.add_callback(AfterInvocationEvent, self._handle_after_invocation)

ModelStreamChunkEvent

Bases: TypedEvent

Event emitted during model response streaming for each raw chunk.

Source code in strands/types/_events.py
class ModelStreamChunkEvent(TypedEvent):
    """Event emitted during model response streaming for each raw chunk."""

    def __init__(self, chunk: StreamEvent) -> None:
        """Initialize with streaming delta data from the model.

        Args:
            chunk: Incremental streaming data from the model response
        """
        super().__init__({"event": chunk})

    @property
    def chunk(self) -> StreamEvent:
        return cast(StreamEvent, self.get("event"))

__init__(chunk)

Initialize with streaming delta data from the model.

Parameters:

- chunk (StreamEvent): Incremental streaming data from the model response. Required.
Source code in strands/types/_events.py
def __init__(self, chunk: StreamEvent) -> None:
    """Initialize with streaming delta data from the model.

    Args:
        chunk: Incremental streaming data from the model response
    """
    super().__init__({"event": chunk})

PrintingCallbackHandler

Handler for streaming text output and tool invocations to stdout.

Source code in strands/handlers/callback_handler.py
class PrintingCallbackHandler:
    """Handler for streaming text output and tool invocations to stdout."""

    def __init__(self, verbose_tool_use: bool = True) -> None:
        """Initialize handler.

        Args:
            verbose_tool_use: Print out verbose information about tool calls.
        """
        self.tool_count = 0
        self._verbose_tool_use = verbose_tool_use

    def __call__(self, **kwargs: Any) -> None:
        """Stream text output and tool invocations to stdout.

        Args:
            **kwargs: Callback event data including:
                - reasoningText (Optional[str]): Reasoning text to print if provided.
                - data (str): Text content to stream.
                - complete (bool): Whether this is the final chunk of a response.
                - event (dict): ModelStreamChunkEvent.
        """
        reasoningText = kwargs.get("reasoningText", False)
        data = kwargs.get("data", "")
        complete = kwargs.get("complete", False)
        tool_use = kwargs.get("event", {}).get("contentBlockStart", {}).get("start", {}).get("toolUse")

        if reasoningText:
            print(reasoningText, end="")

        if data:
            print(data, end="" if not complete else "\n")

        if tool_use:
            self.tool_count += 1
            if self._verbose_tool_use:
                tool_name = tool_use["name"]
                print(f"\nTool #{self.tool_count}: {tool_name}")

        if complete and data:
            print("\n")

__call__(**kwargs)

Stream text output and tool invocations to stdout.

Parameters:

- **kwargs (Any): Callback event data including:
    - reasoningText (Optional[str]): Reasoning text to print if provided.
    - data (str): Text content to stream.
    - complete (bool): Whether this is the final chunk of a response.
    - event (dict): ModelStreamChunkEvent.
  Default: {}.
Source code in strands/handlers/callback_handler.py
def __call__(self, **kwargs: Any) -> None:
    """Stream text output and tool invocations to stdout.

    Args:
        **kwargs: Callback event data including:
            - reasoningText (Optional[str]): Reasoning text to print if provided.
            - data (str): Text content to stream.
            - complete (bool): Whether this is the final chunk of a response.
            - event (dict): ModelStreamChunkEvent.
    """
    reasoningText = kwargs.get("reasoningText", False)
    data = kwargs.get("data", "")
    complete = kwargs.get("complete", False)
    tool_use = kwargs.get("event", {}).get("contentBlockStart", {}).get("start", {}).get("toolUse")

    if reasoningText:
        print(reasoningText, end="")

    if data:
        print(data, end="" if not complete else "\n")

    if tool_use:
        self.tool_count += 1
        if self._verbose_tool_use:
            tool_name = tool_use["name"]
            print(f"\nTool #{self.tool_count}: {tool_name}")

    if complete and data:
        print("\n")

__init__(verbose_tool_use=True)

Initialize handler.

Parameters:

- verbose_tool_use (bool): Print out verbose information about tool calls. Default: True.
Source code in strands/handlers/callback_handler.py
def __init__(self, verbose_tool_use: bool = True) -> None:
    """Initialize handler.

    Args:
        verbose_tool_use: Print out verbose information about tool calls.
    """
    self.tool_count = 0
    self._verbose_tool_use = verbose_tool_use
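The handler pulls tool-use metadata out of deeply nested chunk events with chained `dict.get` calls. A small standalone illustration of that extraction (the event shape follows the `contentBlockStart` structure referenced above):

```python
# A streamed chunk event carrying the start of a tool invocation.
event = {"contentBlockStart": {"start": {"toolUse": {"toolUseId": "t-1", "name": "calculator"}}}}

# Chained .get with {} defaults avoids a KeyError at any missing level.
tool_use = event.get("contentBlockStart", {}).get("start", {}).get("toolUse")
assert tool_use["name"] == "calculator"

# A chunk without a tool use simply yields None instead of raising.
other = {"contentBlockDelta": {"delta": {"text": "hello"}}}
assert other.get("contentBlockStart", {}).get("start", {}).get("toolUse") is None
```

This is why the handler can be called with arbitrary event shapes: any chunk that lacks the nested keys falls through to `None` and the tool-printing branch is skipped.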

SessionManager

Bases: HookProvider, ABC

Abstract interface for managing sessions.

A session manager is in charge of persisting an agent's conversation and state across interactions. Changes made to the agent's conversation, state, or other attributes should be persisted immediately after they occur. The methods introduced in this class are called at important lifecycle events for an agent, and the resulting changes should be persisted in the session.

Source code in strands/session/session_manager.py
class SessionManager(HookProvider, ABC):
    """Abstract interface for managing sessions.

    A session manager is in charge of persisting the conversation and state of an agent across its interaction.
    Changes made to the agents conversation, state, or other attributes should be persisted immediately after
    they are changed. The different methods introduced in this class are called at important lifecycle events
    for an agent, and should be persisted in the session.
    """

    def register_hooks(self, registry: HookRegistry, **kwargs: Any) -> None:
        """Register hooks for persisting the agent to the session."""
        # After the normal Agent initialization behavior, call the session initialize function to restore the agent
        registry.add_callback(AgentInitializedEvent, lambda event: self.initialize(event.agent))

        # For each message appended to the Agent's messages, store that message in the session
        registry.add_callback(MessageAddedEvent, lambda event: self.append_message(event.message, event.agent))

        # Sync the agent into the session for each message in case the agent state was updated
        registry.add_callback(MessageAddedEvent, lambda event: self.sync_agent(event.agent))

        # After an agent was invoked, sync it with the session to capture any conversation manager state updates
        registry.add_callback(AfterInvocationEvent, lambda event: self.sync_agent(event.agent))

        registry.add_callback(MultiAgentInitializedEvent, lambda event: self.initialize_multi_agent(event.source))
        registry.add_callback(AfterNodeCallEvent, lambda event: self.sync_multi_agent(event.source))
        registry.add_callback(AfterMultiAgentInvocationEvent, lambda event: self.sync_multi_agent(event.source))

        # Register BidiAgent hooks
        registry.add_callback(BidiAgentInitializedEvent, lambda event: self.initialize_bidi_agent(event.agent))
        registry.add_callback(BidiMessageAddedEvent, lambda event: self.append_bidi_message(event.message, event.agent))
        registry.add_callback(BidiMessageAddedEvent, lambda event: self.sync_bidi_agent(event.agent))
        registry.add_callback(BidiAfterInvocationEvent, lambda event: self.sync_bidi_agent(event.agent))

    @abstractmethod
    def redact_latest_message(self, redact_message: Message, agent: "Agent", **kwargs: Any) -> None:
        """Redact the message most recently appended to the agent in the session.

        Args:
            redact_message: New message to use that contains the redact content
            agent: Agent to apply the message redaction to
            **kwargs: Additional keyword arguments for future extensibility.
        """

    @abstractmethod
    def append_message(self, message: Message, agent: "Agent", **kwargs: Any) -> None:
        """Append a message to the agent's session.

        Args:
            message: Message to add to the agent in the session
            agent: Agent to append the message to
            **kwargs: Additional keyword arguments for future extensibility.
        """

    @abstractmethod
    def sync_agent(self, agent: "Agent", **kwargs: Any) -> None:
        """Serialize and sync the agent with the session storage.

        Args:
            agent: Agent who should be synchronized with the session storage
            **kwargs: Additional keyword arguments for future extensibility.
        """

    @abstractmethod
    def initialize(self, agent: "Agent", **kwargs: Any) -> None:
        """Initialize an agent with a session.

        Args:
            agent: Agent to initialize
            **kwargs: Additional keyword arguments for future extensibility.
        """

    def sync_multi_agent(self, source: "MultiAgentBase", **kwargs: Any) -> None:
        """Serialize and sync multi-agent with the session storage.

        Args:
            source: Multi-agent source object to persist
            **kwargs: Additional keyword arguments for future extensibility.
        """
        raise NotImplementedError(
            f"{self.__class__.__name__} does not support multi-agent persistence "
            "(sync_multi_agent). Provide an implementation or use a "
            "SessionManager with session_type=SessionType.MULTI_AGENT."
        )

    def initialize_multi_agent(self, source: "MultiAgentBase", **kwargs: Any) -> None:
        """Read multi-agent state from persistent storage.

        Args:
            **kwargs: Additional keyword arguments for future extensibility.
            source: Multi-agent state to initialize.

        Returns:
            Multi-agent state dictionary or empty dict if not found.

        """
        raise NotImplementedError(
            f"{self.__class__.__name__} does not support multi-agent persistence "
            "(initialize_multi_agent). Provide an implementation or use a "
            "SessionManager with session_type=SessionType.MULTI_AGENT."
        )

    def initialize_bidi_agent(self, agent: "BidiAgent", **kwargs: Any) -> None:
        """Initialize a bidirectional agent with a session.

        Args:
            agent: BidiAgent to initialize
            **kwargs: Additional keyword arguments for future extensibility.
        """
        raise NotImplementedError(
            f"{self.__class__.__name__} does not support bidirectional agent persistence "
            "(initialize_bidi_agent). Provide an implementation or use a "
            "SessionManager with bidirectional agent support."
        )

    def append_bidi_message(self, message: Message, agent: "BidiAgent", **kwargs: Any) -> None:
        """Append a message to the bidirectional agent's session.

        Args:
            message: Message to add to the agent in the session
            agent: BidiAgent to append the message to
            **kwargs: Additional keyword arguments for future extensibility.
        """
        raise NotImplementedError(
            f"{self.__class__.__name__} does not support bidirectional agent persistence "
            "(append_bidi_message). Provide an implementation or use a "
            "SessionManager with bidirectional agent support."
        )

    def sync_bidi_agent(self, agent: "BidiAgent", **kwargs: Any) -> None:
        """Serialize and sync the bidirectional agent with the session storage.

        Args:
            agent: BidiAgent who should be synchronized with the session storage
            **kwargs: Additional keyword arguments for future extensibility.
        """
        raise NotImplementedError(
            f"{self.__class__.__name__} does not support bidirectional agent persistence "
            "(sync_bidi_agent). Provide an implementation or use a "
            "SessionManager with bidirectional agent support."
        )

append_bidi_message(message, agent, **kwargs)

Append a message to the bidirectional agent's session.

Parameters:

- message (Message): Message to add to the agent in the session. Required.
- agent (BidiAgent): BidiAgent to append the message to. Required.
- **kwargs (Any): Additional keyword arguments for future extensibility. Default: {}.
Source code in strands/session/session_manager.py
def append_bidi_message(self, message: Message, agent: "BidiAgent", **kwargs: Any) -> None:
    """Append a message to the bidirectional agent's session.

    Args:
        message: Message to add to the agent in the session
        agent: BidiAgent to append the message to
        **kwargs: Additional keyword arguments for future extensibility.
    """
    raise NotImplementedError(
        f"{self.__class__.__name__} does not support bidirectional agent persistence "
        "(append_bidi_message). Provide an implementation or use a "
        "SessionManager with bidirectional agent support."
    )

append_message(message, agent, **kwargs) abstractmethod

Append a message to the agent's session.

Parameters:

- message (Message): Message to add to the agent in the session. Required.
- agent (Agent): Agent to append the message to. Required.
- **kwargs (Any): Additional keyword arguments for future extensibility. Default: {}.
Source code in strands/session/session_manager.py
@abstractmethod
def append_message(self, message: Message, agent: "Agent", **kwargs: Any) -> None:
    """Append a message to the agent's session.

    Args:
        message: Message to add to the agent in the session
        agent: Agent to append the message to
        **kwargs: Additional keyword arguments for future extensibility.
    """

initialize(agent, **kwargs) abstractmethod

Initialize an agent with a session.

Parameters:

- agent (Agent): Agent to initialize. Required.
- **kwargs (Any): Additional keyword arguments for future extensibility. Default: {}.
{}
Source code in strands/session/session_manager.py
@abstractmethod
def initialize(self, agent: "Agent", **kwargs: Any) -> None:
    """Initialize an agent with a session.

    Args:
        agent: Agent to initialize
        **kwargs: Additional keyword arguments for future extensibility.
    """

initialize_bidi_agent(agent, **kwargs)

Initialize a bidirectional agent with a session.

Parameters:

- agent (BidiAgent): BidiAgent to initialize. Required.
- **kwargs (Any): Additional keyword arguments for future extensibility. Default: {}.
Source code in strands/session/session_manager.py
def initialize_bidi_agent(self, agent: "BidiAgent", **kwargs: Any) -> None:
    """Initialize a bidirectional agent with a session.

    Args:
        agent: BidiAgent to initialize
        **kwargs: Additional keyword arguments for future extensibility.
    """
    raise NotImplementedError(
        f"{self.__class__.__name__} does not support bidirectional agent persistence "
        "(initialize_bidi_agent). Provide an implementation or use a "
        "SessionManager with bidirectional agent support."
    )

initialize_multi_agent(source, **kwargs)

Read multi-agent state from persistent storage.

Parameters:

- source (MultiAgentBase): Multi-agent state to initialize. Required.
- **kwargs (Any): Additional keyword arguments for future extensibility. Default: {}.
Source code in strands/session/session_manager.py
def initialize_multi_agent(self, source: "MultiAgentBase", **kwargs: Any) -> None:
    """Read multi-agent state from persistent storage.

    Args:
        **kwargs: Additional keyword arguments for future extensibility.
        source: Multi-agent state to initialize.

    Returns:
        Multi-agent state dictionary or empty dict if not found.

    """
    raise NotImplementedError(
        f"{self.__class__.__name__} does not support multi-agent persistence "
        "(initialize_multi_agent). Provide an implementation or use a "
        "SessionManager with session_type=SessionType.MULTI_AGENT."
    )

redact_latest_message(redact_message, agent, **kwargs) abstractmethod

Redact the message most recently appended to the agent in the session.

Parameters:

- redact_message (Message): New message to use that contains the redacted content. Required.
- agent (Agent): Agent to apply the message redaction to. Required.
- **kwargs (Any): Additional keyword arguments for future extensibility. Default: {}.
{}
Source code in strands/session/session_manager.py
@abstractmethod
def redact_latest_message(self, redact_message: Message, agent: "Agent", **kwargs: Any) -> None:
    """Redact the message most recently appended to the agent in the session.

    Args:
        redact_message: New message to use that contains the redact content
        agent: Agent to apply the message redaction to
        **kwargs: Additional keyword arguments for future extensibility.
    """

register_hooks(registry, **kwargs)

Register hooks for persisting the agent to the session.

Source code in strands/session/session_manager.py
def register_hooks(self, registry: HookRegistry, **kwargs: Any) -> None:
    """Register hooks for persisting the agent to the session."""
    # After the normal Agent initialization behavior, call the session initialize function to restore the agent
    registry.add_callback(AgentInitializedEvent, lambda event: self.initialize(event.agent))

    # For each message appended to the Agent's messages, store that message in the session
    registry.add_callback(MessageAddedEvent, lambda event: self.append_message(event.message, event.agent))

    # Sync the agent into the session for each message in case the agent state was updated
    registry.add_callback(MessageAddedEvent, lambda event: self.sync_agent(event.agent))

    # After an agent was invoked, sync it with the session to capture any conversation manager state updates
    registry.add_callback(AfterInvocationEvent, lambda event: self.sync_agent(event.agent))

    registry.add_callback(MultiAgentInitializedEvent, lambda event: self.initialize_multi_agent(event.source))
    registry.add_callback(AfterNodeCallEvent, lambda event: self.sync_multi_agent(event.source))
    registry.add_callback(AfterMultiAgentInvocationEvent, lambda event: self.sync_multi_agent(event.source))

    # Register BidiAgent hooks
    registry.add_callback(BidiAgentInitializedEvent, lambda event: self.initialize_bidi_agent(event.agent))
    registry.add_callback(BidiMessageAddedEvent, lambda event: self.append_bidi_message(event.message, event.agent))
    registry.add_callback(BidiMessageAddedEvent, lambda event: self.sync_bidi_agent(event.agent))
    registry.add_callback(BidiAfterInvocationEvent, lambda event: self.sync_bidi_agent(event.agent))

sync_agent(agent, **kwargs) abstractmethod

Serialize and sync the agent with the session storage.

Parameters:

- agent (Agent): Agent that should be synchronized with the session storage. Required.
- **kwargs (Any): Additional keyword arguments for future extensibility. Default: {}.
Source code in strands/session/session_manager.py
@abstractmethod
def sync_agent(self, agent: "Agent", **kwargs: Any) -> None:
    """Serialize and sync the agent with the session storage.

    Args:
        agent: Agent who should be synchronized with the session storage
        **kwargs: Additional keyword arguments for future extensibility.
    """

sync_bidi_agent(agent, **kwargs)

Serialize and sync the bidirectional agent with the session storage.

Parameters:

- agent (BidiAgent): BidiAgent that should be synchronized with the session storage. Required.
- **kwargs (Any): Additional keyword arguments for future extensibility. Default: {}.
{}
Source code in strands/session/session_manager.py
def sync_bidi_agent(self, agent: "BidiAgent", **kwargs: Any) -> None:
    """Serialize and sync the bidirectional agent with the session storage.

    Args:
        agent: BidiAgent who should be synchronized with the session storage
        **kwargs: Additional keyword arguments for future extensibility.
    """
    raise NotImplementedError(
        f"{self.__class__.__name__} does not support bidirectional agent persistence "
        "(sync_bidi_agent). Provide an implementation or use a "
        "SessionManager with bidirectional agent support."
    )

sync_multi_agent(source, **kwargs)

Serialize and sync multi-agent with the session storage.

Parameters:

- source (MultiAgentBase): Multi-agent source object to persist. Required.
- **kwargs (Any): Additional keyword arguments for future extensibility. Default: {}.
Source code in strands/session/session_manager.py
def sync_multi_agent(self, source: "MultiAgentBase", **kwargs: Any) -> None:
    """Serialize and sync multi-agent with the session storage.

    Args:
        source: Multi-agent source object to persist
        **kwargs: Additional keyword arguments for future extensibility.
    """
    raise NotImplementedError(
        f"{self.__class__.__name__} does not support multi-agent persistence "
        "(sync_multi_agent). Provide an implementation or use a "
        "SessionManager with session_type=SessionType.MULTI_AGENT."
    )
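A minimal in-memory implementation of the abstract interface above might look like the following sketch. `SessionManagerSketch` is a hypothetical stand-in that mirrors only the four abstract methods (the real base class also registers hooks), and `AgentStub` is a placeholder for an agent with a `messages` list:

```python
from abc import ABC, abstractmethod
from typing import Any


class SessionManagerSketch(ABC):
    """Hypothetical stand-in mirroring the four abstract methods described above."""

    @abstractmethod
    def redact_latest_message(self, redact_message: dict, agent: Any, **kwargs: Any) -> None: ...

    @abstractmethod
    def append_message(self, message: dict, agent: Any, **kwargs: Any) -> None: ...

    @abstractmethod
    def sync_agent(self, agent: Any, **kwargs: Any) -> None: ...

    @abstractmethod
    def initialize(self, agent: Any, **kwargs: Any) -> None: ...


class InMemorySessionManager(SessionManagerSketch):
    """Persists messages to a plain list; real implementations write to durable storage."""

    def __init__(self) -> None:
        self._messages: list[dict] = []

    def initialize(self, agent: Any, **kwargs: Any) -> None:
        # Restore the previously persisted conversation into the agent.
        agent.messages.extend(self._messages)

    def append_message(self, message: dict, agent: Any, **kwargs: Any) -> None:
        self._messages.append(message)

    def redact_latest_message(self, redact_message: dict, agent: Any, **kwargs: Any) -> None:
        # Overwrite the most recently appended message with the redacted version.
        self._messages[-1] = redact_message

    def sync_agent(self, agent: Any, **kwargs: Any) -> None:
        pass  # nothing beyond messages to persist in this sketch
```

The hook registrations shown in `register_hooks` are what wire these methods to the agent lifecycle: `initialize` runs once on agent construction, `append_message` and `sync_agent` run on every message, and `sync_agent` runs again at the end of each invocation.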

SlidingWindowConversationManager

Bases: ConversationManager

Implements a sliding window strategy for managing conversation history.

This class handles the logic of maintaining a conversation window that preserves tool usage pairs and avoids invalid window states.

Supports proactive management during agent loop execution via the per_turn parameter.

Source code in strands/agent/conversation_manager/sliding_window_conversation_manager.py
class SlidingWindowConversationManager(ConversationManager):
    """Implements a sliding window strategy for managing conversation history.

    This class handles the logic of maintaining a conversation window that preserves tool usage pairs and avoids
    invalid window states.

    Supports proactive management during agent loop execution via the per_turn parameter.
    """

    def __init__(self, window_size: int = 40, should_truncate_results: bool = True, *, per_turn: bool | int = False):
        """Initialize the sliding window conversation manager.

        Args:
            window_size: Maximum number of messages to keep in the agent's history.
                Defaults to 40 messages.
            should_truncate_results: Truncate tool results when a message is too large for the model's context window
            per_turn: Controls when to apply message management during agent execution.
                - False (default): Only apply management at the end (default behavior)
                - True: Apply management before every model call
                - int (e.g., 3): Apply management before every N model calls

                When to use per_turn: If your agent performs many tool operations in loops
                (e.g., web browsing with frequent screenshots), enable per_turn to proactively
                manage message history and prevent the agent loop from slowing down. Start with
                per_turn=True and adjust to a specific frequency (e.g., per_turn=5) if needed
                for performance tuning.

        Raises:
            ValueError: If per_turn is 0 or a negative integer.
        """
        super().__init__()

        self.window_size = window_size
        self.should_truncate_results = should_truncate_results
        self.per_turn = per_turn
        self._model_call_count = 0

    def register_hooks(self, registry: "HookRegistry", **kwargs: Any) -> None:
        """Register hook callbacks for per-turn conversation management.

        Args:
            registry: The hook registry to register callbacks with.
            **kwargs: Additional keyword arguments for future extensibility.
        """
        super().register_hooks(registry, **kwargs)

        # Always register the callback - per_turn check happens in the callback
        registry.add_callback(BeforeModelCallEvent, self._on_before_model_call)

    def _on_before_model_call(self, event: BeforeModelCallEvent) -> None:
        """Handle before model call event for per-turn management.

        This callback is invoked before each model call. It tracks the model call count and applies message management
        based on the per_turn configuration.

        Args:
            event: The before model call event containing the agent and model execution details.
        """
        # Check if per_turn is enabled
        if self.per_turn is False:
            return

        self._model_call_count += 1

        # Determine if we should apply management
        should_apply = False
        if self.per_turn is True:
            should_apply = True
        elif isinstance(self.per_turn, int) and self.per_turn > 0:
            should_apply = self._model_call_count % self.per_turn == 0

        if should_apply:
            logger.debug(
                "model_call_count=<%d>, per_turn=<%s> | applying per-turn conversation management",
                self._model_call_count,
                self.per_turn,
            )
            self.apply_management(event.agent)

    def get_state(self) -> dict[str, Any]:
        """Get the current state of the conversation manager.

        Returns:
            Dictionary containing the manager's state, including model call count for per-turn tracking.
        """
        state = super().get_state()
        state["model_call_count"] = self._model_call_count
        return state

    def restore_from_session(self, state: dict[str, Any]) -> list | None:
        """Restore the conversation manager's state from a session.

        Args:
            state: Previous state of the conversation manager

        Returns:
            Optional list of messages to prepend to the agent's messages.
        """
        result = super().restore_from_session(state)
        self._model_call_count = state.get("model_call_count", 0)
        return result

    def apply_management(self, agent: "Agent", **kwargs: Any) -> None:
        """Apply the sliding window to the agent's messages array to maintain a manageable history size.

        This method is called after every event loop cycle to apply a sliding window if the message count
        exceeds the window size.

        Args:
            agent: The agent whose messages will be managed.
                This list is modified in-place.
            **kwargs: Additional keyword arguments for future extensibility.
        """
        messages = agent.messages

        if len(messages) <= self.window_size:
            logger.debug(
                "message_count=<%s>, window_size=<%s> | skipping context reduction", len(messages), self.window_size
            )
            return
        self.reduce_context(agent)

    def reduce_context(self, agent: "Agent", e: Exception | None = None, **kwargs: Any) -> None:
        """Trim the oldest messages to reduce the conversation context size.

        The method handles special cases where trimming the messages leads to:
         - toolResult with no corresponding toolUse
         - toolUse with no corresponding toolResult

        Args:
            agent: The agent whose messages will be reduced.
                This list is modified in-place.
            e: The exception that triggered the context reduction, if any.
            **kwargs: Additional keyword arguments for future extensibility.

        Raises:
            ContextWindowOverflowException: If the context cannot be reduced further.
                Such as when the conversation is already minimal or when tool result messages cannot be properly
                converted.
        """
        messages = agent.messages

        # Try to truncate the tool result first
        last_message_idx_with_tool_results = self._find_last_message_with_tool_results(messages)
        if last_message_idx_with_tool_results is not None and self.should_truncate_results:
            logger.debug(
                "message_index=<%s> | found message with tool results at index", last_message_idx_with_tool_results
            )
            results_truncated = self._truncate_tool_results(messages, last_message_idx_with_tool_results)
            if results_truncated:
                logger.debug("message_index=<%s> | tool results truncated", last_message_idx_with_tool_results)
                return

        # Try to trim index id when tool result cannot be truncated anymore
        # If the number of messages is less than the window_size, then we default to 2, otherwise, trim to window size
        trim_index = 2 if len(messages) <= self.window_size else len(messages) - self.window_size

        # Find the next valid trim_index
        while trim_index < len(messages):
            if (
                # Oldest message cannot be a toolResult because it needs a toolUse preceding it
                any("toolResult" in content for content in messages[trim_index]["content"])
                or (
                    # Oldest message can be a toolUse only if a toolResult immediately follows it.
                    any("toolUse" in content for content in messages[trim_index]["content"])
                    and trim_index + 1 < len(messages)
                    and not any("toolResult" in content for content in messages[trim_index + 1]["content"])
                )
            ):
                trim_index += 1
            else:
                break
        else:
            # If we didn't find a valid trim_index, then we throw
            raise ContextWindowOverflowException("Unable to trim conversation context!") from e

        # trim_index represents the number of messages being removed from the agent's messages array
        self.removed_message_count += trim_index

        # Overwrite message history
        messages[:] = messages[trim_index:]

    def _truncate_tool_results(self, messages: Messages, msg_idx: int) -> bool:
        """Truncate tool results in a message to reduce context size.

        When a message contains tool results that are too large for the model's context window, this function
        replaces the content of those tool results with a simple error message.

        Args:
            messages: The conversation message history.
            msg_idx: Index of the message containing tool results to truncate.

        Returns:
            True if any changes were made to the message, False otherwise.
        """
        if msg_idx >= len(messages) or msg_idx < 0:
            return False

        message = messages[msg_idx]
        changes_made = False
        tool_result_too_large_message = "The tool result was too large!"
        for i, content in enumerate(message.get("content", [])):
            if isinstance(content, dict) and "toolResult" in content:
                tool_result_content_text = next(
                    (item["text"] for item in content["toolResult"]["content"] if "text" in item),
                    "",
                )
                # Skip if this tool result has already been truncated, to avoid overwriting it again
                if (
                    message["content"][i]["toolResult"]["status"] == "error"
                    and tool_result_content_text == tool_result_too_large_message
                ):
                    logger.info("ToolResult has already been updated, skipping overwrite")
                    return False
                # Update status to error with informative message
                message["content"][i]["toolResult"]["status"] = "error"
                message["content"][i]["toolResult"]["content"] = [{"text": tool_result_too_large_message}]
                changes_made = True

        return changes_made

    def _find_last_message_with_tool_results(self, messages: Messages) -> int | None:
        """Find the index of the last message containing tool results.

        This is useful for identifying messages that might need to be truncated to reduce context size.

        Args:
            messages: The conversation message history.

        Returns:
            Index of the last message with tool results, or None if no such message exists.
        """
        # Iterate backwards through all messages (from newest to oldest)
        for idx in range(len(messages) - 1, -1, -1):
            # Check if this message has any content with toolResult
            current_message = messages[idx]
            has_tool_result = False

            for content in current_message.get("content", []):
                if isinstance(content, dict) and "toolResult" in content:
                    has_tool_result = True
                    break

            if has_tool_result:
                return idx

        return None
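The trim-index rule in `reduce_context` above can be sketched standalone. This is an illustrative sketch, not the SDK implementation; messages are assumed to be plain dicts with a `"content"` list of content-block dicts, matching the shapes checked in the source.

```python
def find_valid_trim_index(messages: list[dict], start: int) -> int:
    """Advance past indices that would orphan a toolResult or toolUse."""
    i = start
    while i < len(messages):
        content = messages[i]["content"]
        has_result = any("toolResult" in block for block in content)
        has_use = any("toolUse" in block for block in content)
        # A toolResult cannot be the oldest message (its toolUse would be
        # gone), and a toolUse is only valid if a toolResult follows it.
        dangling_use = has_use and not (
            i + 1 < len(messages)
            and any("toolResult" in block for block in messages[i + 1]["content"])
        )
        if has_result or dangling_use:
            i += 1
        else:
            return i
    raise ValueError("Unable to trim conversation context!")

msgs = [
    {"content": [{"toolResult": {}}]},  # orphaned result: skipped
    {"content": [{"toolUse": {}}]},     # dangling use: skipped
    {"content": [{"text": "hello"}]},   # valid trim point
]
```

Trimming would start at index 2 here, since the first two messages would be left without their matching pair.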

__init__(window_size=40, should_truncate_results=True, *, per_turn=False)

Initialize the sliding window conversation manager.

Parameters:

Name Type Description Default
window_size int

Maximum number of messages to keep in the agent's history. Defaults to 40 messages.

40
should_truncate_results bool

Truncate tool results when a message is too large for the model's context window

True
per_turn bool | int

Controls when to apply message management during agent execution. False (default): only apply management at the end of the invocation. True: apply management before every model call. int (e.g., 3): apply management before every N model calls.

When to use per_turn: If your agent performs many tool operations in loops (e.g., web browsing with frequent screenshots), enable per_turn to proactively manage message history and prevent the agent loop from slowing down. Start with per_turn=True and adjust to a specific frequency (e.g., per_turn=5) if needed for performance tuning.

False

Raises:

Type Description
ValueError

If per_turn is 0 or a negative integer.

Source code in strands/agent/conversation_manager/sliding_window_conversation_manager.py
def __init__(self, window_size: int = 40, should_truncate_results: bool = True, *, per_turn: bool | int = False):
    """Initialize the sliding window conversation manager.

    Args:
        window_size: Maximum number of messages to keep in the agent's history.
            Defaults to 40 messages.
        should_truncate_results: Truncate tool results when a message is too large for the model's context window
        per_turn: Controls when to apply message management during agent execution.
            - False (default): Only apply management at the end of the invocation
            - True: Apply management before every model call
            - int (e.g., 3): Apply management before every N model calls

            When to use per_turn: If your agent performs many tool operations in loops
            (e.g., web browsing with frequent screenshots), enable per_turn to proactively
            manage message history and prevent the agent loop from slowing down. Start with
            per_turn=True and adjust to a specific frequency (e.g., per_turn=5) if needed
            for performance tuning.

    Raises:
        ValueError: If per_turn is 0 or a negative integer.
    """
    super().__init__()

    self.window_size = window_size
    self.should_truncate_results = should_truncate_results
    self.per_turn = per_turn
    self._model_call_count = 0

apply_management(agent, **kwargs)

Apply the sliding window to the agent's messages array to maintain a manageable history size.

This method is called after every event loop cycle to apply a sliding window if the message count exceeds the window size.

Parameters:

Name Type Description Default
agent Agent

The agent whose messages will be managed. This list is modified in-place.

required
**kwargs Any

Additional keyword arguments for future extensibility.

{}
Source code in strands/agent/conversation_manager/sliding_window_conversation_manager.py
def apply_management(self, agent: "Agent", **kwargs: Any) -> None:
    """Apply the sliding window to the agent's messages array to maintain a manageable history size.

    This method is called after every event loop cycle to apply a sliding window if the message count
    exceeds the window size.

    Args:
        agent: The agent whose messages will be managed.
            This list is modified in-place.
        **kwargs: Additional keyword arguments for future extensibility.
    """
    messages = agent.messages

    if len(messages) <= self.window_size:
        logger.debug(
            "message_count=<%s>, window_size=<%s> | skipping context reduction", len(messages), self.window_size
        )
        return
    self.reduce_context(agent)

get_state()

Get the current state of the conversation manager.

Returns:

Type Description
dict[str, Any]

Dictionary containing the manager's state, including model call count for per-turn tracking.

Source code in strands/agent/conversation_manager/sliding_window_conversation_manager.py
def get_state(self) -> dict[str, Any]:
    """Get the current state of the conversation manager.

    Returns:
        Dictionary containing the manager's state, including model call count for per-turn tracking.
    """
    state = super().get_state()
    state["model_call_count"] = self._model_call_count
    return state

reduce_context(agent, e=None, **kwargs)

Trim the oldest messages to reduce the conversation context size.

The method handles special cases where trimming the messages leads to:
  • toolResult with no corresponding toolUse
  • toolUse with no corresponding toolResult

Parameters:

Name Type Description Default
agent Agent

The agent whose messages will be reduced. This list is modified in-place.

required
e Exception | None

The exception that triggered the context reduction, if any.

None
**kwargs Any

Additional keyword arguments for future extensibility.

{}

Raises:

Type Description
ContextWindowOverflowException

If the context cannot be reduced further, such as when the conversation is already minimal or when tool result messages cannot be properly converted.

Source code in strands/agent/conversation_manager/sliding_window_conversation_manager.py
def reduce_context(self, agent: "Agent", e: Exception | None = None, **kwargs: Any) -> None:
    """Trim the oldest messages to reduce the conversation context size.

    The method handles special cases where trimming the messages leads to:
     - toolResult with no corresponding toolUse
     - toolUse with no corresponding toolResult

    Args:
        agent: The agent whose messages will be reduced.
            This list is modified in-place.
        e: The exception that triggered the context reduction, if any.
        **kwargs: Additional keyword arguments for future extensibility.

    Raises:
        ContextWindowOverflowException: If the context cannot be reduced further,
            such as when the conversation is already minimal or when tool result messages cannot be properly
            converted.
    """
    messages = agent.messages

    # Try to truncate the tool result first
    last_message_idx_with_tool_results = self._find_last_message_with_tool_results(messages)
    if last_message_idx_with_tool_results is not None and self.should_truncate_results:
        logger.debug(
            "message_index=<%s> | found message with tool results at index", last_message_idx_with_tool_results
        )
        results_truncated = self._truncate_tool_results(messages, last_message_idx_with_tool_results)
        if results_truncated:
            logger.debug("message_index=<%s> | tool results truncated", last_message_idx_with_tool_results)
            return

    # Trim messages when tool results cannot be truncated any further
    # If the message count is at most the window size, default to trimming 2 messages; otherwise, trim down to the window size
    trim_index = 2 if len(messages) <= self.window_size else len(messages) - self.window_size

    # Find the next valid trim_index
    while trim_index < len(messages):
        if (
            # Oldest message cannot be a toolResult because it needs a toolUse preceding it
            any("toolResult" in content for content in messages[trim_index]["content"])
            or (
                # Oldest message can be a toolUse only if a toolResult immediately follows it.
                any("toolUse" in content for content in messages[trim_index]["content"])
                and trim_index + 1 < len(messages)
                and not any("toolResult" in content for content in messages[trim_index + 1]["content"])
            )
        ):
            trim_index += 1
        else:
            break
    else:
        # If we didn't find a valid trim_index, raise
        raise ContextWindowOverflowException("Unable to trim conversation context!") from e

    # trim_index represents the number of messages being removed from the agent's messages array
    self.removed_message_count += trim_index

    # Overwrite message history
    messages[:] = messages[trim_index:]

register_hooks(registry, **kwargs)

Register hook callbacks for per-turn conversation management.

Parameters:

Name Type Description Default
registry HookRegistry

The hook registry to register callbacks with.

required
**kwargs Any

Additional keyword arguments for future extensibility.

{}
Source code in strands/agent/conversation_manager/sliding_window_conversation_manager.py
def register_hooks(self, registry: "HookRegistry", **kwargs: Any) -> None:
    """Register hook callbacks for per-turn conversation management.

    Args:
        registry: The hook registry to register callbacks with.
        **kwargs: Additional keyword arguments for future extensibility.
    """
    super().register_hooks(registry, **kwargs)

    # Always register the callback - per_turn check happens in the callback
    registry.add_callback(BeforeModelCallEvent, self._on_before_model_call)

restore_from_session(state)

Restore the conversation manager's state from a session.

Parameters:

Name Type Description Default
state dict[str, Any]

Previous state of the conversation manager

required

Returns:

Type Description
list | None

Optional list of messages to prepend to the agent's messages.

Source code in strands/agent/conversation_manager/sliding_window_conversation_manager.py
def restore_from_session(self, state: dict[str, Any]) -> list | None:
    """Restore the conversation manager's state from a session.

    Args:
        state: Previous state of the conversation manager

    Returns:
        Optional list of messages to prepend to the agent's messages.
    """
    result = super().restore_from_session(state)
    self._model_call_count = state.get("model_call_count", 0)
    return result
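A minimal sketch of the documented round trip between `get_state` and `restore_from_session`, using plain functions and dicts in place of the manager (the key name `model_call_count` comes from the source above; everything else is illustrative):

```python
def save_state(model_call_count: int) -> dict:
    # Mirrors state["model_call_count"] = self._model_call_count
    return {"model_call_count": model_call_count}

def restore_state(state: dict) -> int:
    # Mirrors state.get("model_call_count", 0): a missing key resets to 0
    return state.get("model_call_count", 0)
```

Restoring from a state saved by an older version that lacks the key simply resets the per-turn counter to zero.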

StructuredOutputContext

Per-invocation context for structured output execution.

Source code in strands/tools/structured_output/_structured_output_context.py
class StructuredOutputContext:
    """Per-invocation context for structured output execution."""

    def __init__(
        self,
        structured_output_model: type[BaseModel] | None = None,
        structured_output_prompt: str | None = None,
    ):
        """Initialize a new structured output context.

        Args:
            structured_output_model: Optional Pydantic model type for structured output.
            structured_output_prompt: Optional custom prompt message to use when forcing structured output.
                Defaults to "You must format the previous response as structured output."
        """
        self.results: dict[str, BaseModel] = {}
        self.structured_output_model: type[BaseModel] | None = structured_output_model
        self.structured_output_tool: StructuredOutputTool | None = None
        self.forced_mode: bool = False
        self.force_attempted: bool = False
        self.tool_choice: ToolChoice | None = None
        self.stop_loop: bool = False
        self.expected_tool_name: str | None = None
        self.structured_output_prompt: str = structured_output_prompt or DEFAULT_STRUCTURED_OUTPUT_PROMPT

        if structured_output_model:
            self.structured_output_tool = StructuredOutputTool(structured_output_model)
            self.expected_tool_name = self.structured_output_tool.tool_name

    @property
    def is_enabled(self) -> bool:
        """Check if structured output is enabled for this context.

        Returns:
            True if a structured output model is configured, False otherwise.
        """
        return self.structured_output_model is not None

    def store_result(self, tool_use_id: str, result: BaseModel) -> None:
        """Store a validated structured output result.

        Args:
            tool_use_id: Unique identifier for the tool use.
            result: Validated Pydantic model instance.
        """
        self.results[tool_use_id] = result

    def get_result(self, tool_use_id: str) -> BaseModel | None:
        """Retrieve a stored structured output result.

        Args:
            tool_use_id: Unique identifier for the tool use.

        Returns:
            The validated Pydantic model instance, or None if not found.
        """
        return self.results.get(tool_use_id)

    def set_forced_mode(self, tool_choice: dict | None = None) -> None:
        """Mark this context as being in forced structured output mode.

        Args:
            tool_choice: Optional tool choice configuration.
        """
        if not self.is_enabled:
            return
        self.forced_mode = True
        self.force_attempted = True
        self.tool_choice = tool_choice or {"any": {}}

    def has_structured_output_tool(self, tool_uses: list[ToolUse]) -> bool:
        """Check if any tool uses are for the structured output tool.

        Args:
            tool_uses: List of tool use dictionaries to check.

        Returns:
            True if any tool use matches the expected structured output tool name,
            False if no structured output tool is present or expected.
        """
        if not self.expected_tool_name:
            return False
        return any(tool_use.get("name") == self.expected_tool_name for tool_use in tool_uses)

    def get_tool_spec(self) -> ToolSpec | None:
        """Get the tool specification for structured output.

        Returns:
            Tool specification, or None if no structured output model.
        """
        if self.structured_output_tool:
            return self.structured_output_tool.tool_spec
        return None

    def extract_result(self, tool_uses: list[ToolUse]) -> BaseModel | None:
        """Extract and remove structured output result from stored results.

        Args:
            tool_uses: List of tool use dictionaries from the current execution cycle.

        Returns:
            The structured output result if found, or None if no result available.
        """
        if not self.has_structured_output_tool(tool_uses):
            return None

        for tool_use in tool_uses:
            if tool_use.get("name") == self.expected_tool_name:
                tool_use_id = str(tool_use.get("toolUseId", ""))
                result = self.results.pop(tool_use_id, None)
                if result is not None:
                    logger.debug("Extracted structured output for %s", tool_use.get("name"))
                    return result
        return None

    def register_tool(self, registry: "ToolRegistry") -> None:
        """Register the structured output tool with the registry.

        Args:
            registry: The tool registry to register the tool with.
        """
        if self.structured_output_tool and self.structured_output_tool.tool_name not in registry.dynamic_tools:
            registry.register_dynamic_tool(self.structured_output_tool)
            logger.debug("Registered structured output tool: %s", self.structured_output_tool.tool_name)

    def cleanup(self, registry: "ToolRegistry") -> None:
        """Clean up the registered structured output tool from the registry.

        Args:
            registry: The tool registry to clean up the tool from.
        """
        if self.structured_output_tool and self.structured_output_tool.tool_name in registry.dynamic_tools:
            del registry.dynamic_tools[self.structured_output_tool.tool_name]
            logger.debug("Cleaned up structured output tool: %s", self.structured_output_tool.tool_name)

is_enabled property

Check if structured output is enabled for this context.

Returns:

Type Description
bool

True if a structured output model is configured, False otherwise.

__init__(structured_output_model=None, structured_output_prompt=None)

Initialize a new structured output context.

Parameters:

Name Type Description Default
structured_output_model type[BaseModel] | None

Optional Pydantic model type for structured output.

None
structured_output_prompt str | None

Optional custom prompt message to use when forcing structured output. Defaults to "You must format the previous response as structured output."

None
Source code in strands/tools/structured_output/_structured_output_context.py
def __init__(
    self,
    structured_output_model: type[BaseModel] | None = None,
    structured_output_prompt: str | None = None,
):
    """Initialize a new structured output context.

    Args:
        structured_output_model: Optional Pydantic model type for structured output.
        structured_output_prompt: Optional custom prompt message to use when forcing structured output.
            Defaults to "You must format the previous response as structured output."
    """
    self.results: dict[str, BaseModel] = {}
    self.structured_output_model: type[BaseModel] | None = structured_output_model
    self.structured_output_tool: StructuredOutputTool | None = None
    self.forced_mode: bool = False
    self.force_attempted: bool = False
    self.tool_choice: ToolChoice | None = None
    self.stop_loop: bool = False
    self.expected_tool_name: str | None = None
    self.structured_output_prompt: str = structured_output_prompt or DEFAULT_STRUCTURED_OUTPUT_PROMPT

    if structured_output_model:
        self.structured_output_tool = StructuredOutputTool(structured_output_model)
        self.expected_tool_name = self.structured_output_tool.tool_name

cleanup(registry)

Clean up the registered structured output tool from the registry.

Parameters:

Name Type Description Default
registry ToolRegistry

The tool registry to clean up the tool from.

required
Source code in strands/tools/structured_output/_structured_output_context.py
def cleanup(self, registry: "ToolRegistry") -> None:
    """Clean up the registered structured output tool from the registry.

    Args:
        registry: The tool registry to clean up the tool from.
    """
    if self.structured_output_tool and self.structured_output_tool.tool_name in registry.dynamic_tools:
        del registry.dynamic_tools[self.structured_output_tool.tool_name]
        logger.debug("Cleaned up structured output tool: %s", self.structured_output_tool.tool_name)

extract_result(tool_uses)

Extract and remove structured output result from stored results.

Parameters:

Name Type Description Default
tool_uses list[ToolUse]

List of tool use dictionaries from the current execution cycle.

required

Returns:

Type Description
BaseModel | None

The structured output result if found, or None if no result available.

Source code in strands/tools/structured_output/_structured_output_context.py
def extract_result(self, tool_uses: list[ToolUse]) -> BaseModel | None:
    """Extract and remove structured output result from stored results.

    Args:
        tool_uses: List of tool use dictionaries from the current execution cycle.

    Returns:
        The structured output result if found, or None if no result available.
    """
    if not self.has_structured_output_tool(tool_uses):
        return None

    for tool_use in tool_uses:
        if tool_use.get("name") == self.expected_tool_name:
            tool_use_id = str(tool_use.get("toolUseId", ""))
            result = self.results.pop(tool_use_id, None)
            if result is not None:
                logger.debug("Extracted structured output for %s", tool_use.get("name"))
                return result
    return None

get_result(tool_use_id)

Retrieve a stored structured output result.

Parameters:

Name Type Description Default
tool_use_id str

Unique identifier for the tool use.

required

Returns:

Type Description
BaseModel | None

The validated Pydantic model instance, or None if not found.

Source code in strands/tools/structured_output/_structured_output_context.py
def get_result(self, tool_use_id: str) -> BaseModel | None:
    """Retrieve a stored structured output result.

    Args:
        tool_use_id: Unique identifier for the tool use.

    Returns:
        The validated Pydantic model instance, or None if not found.
    """
    return self.results.get(tool_use_id)

get_tool_spec()

Get the tool specification for structured output.

Returns:

Type Description
ToolSpec | None

Tool specification, or None if no structured output model.

Source code in strands/tools/structured_output/_structured_output_context.py
def get_tool_spec(self) -> ToolSpec | None:
    """Get the tool specification for structured output.

    Returns:
        Tool specification, or None if no structured output model.
    """
    if self.structured_output_tool:
        return self.structured_output_tool.tool_spec
    return None

has_structured_output_tool(tool_uses)

Check if any tool uses are for the structured output tool.

Parameters:

Name Type Description Default
tool_uses list[ToolUse]

List of tool use dictionaries to check.

required

Returns:

Type Description
bool

True if any tool use matches the expected structured output tool name, False if no structured output tool is present or expected.

Source code in strands/tools/structured_output/_structured_output_context.py
def has_structured_output_tool(self, tool_uses: list[ToolUse]) -> bool:
    """Check if any tool uses are for the structured output tool.

    Args:
        tool_uses: List of tool use dictionaries to check.

    Returns:
        True if any tool use matches the expected structured output tool name,
        False if no structured output tool is present or expected.
    """
    if not self.expected_tool_name:
        return False
    return any(tool_use.get("name") == self.expected_tool_name for tool_use in tool_uses)

register_tool(registry)

Register the structured output tool with the registry.

Parameters:

Name Type Description Default
registry ToolRegistry

The tool registry to register the tool with.

required
Source code in strands/tools/structured_output/_structured_output_context.py
def register_tool(self, registry: "ToolRegistry") -> None:
    """Register the structured output tool with the registry.

    Args:
        registry: The tool registry to register the tool with.
    """
    if self.structured_output_tool and self.structured_output_tool.tool_name not in registry.dynamic_tools:
        registry.register_dynamic_tool(self.structured_output_tool)
        logger.debug("Registered structured output tool: %s", self.structured_output_tool.tool_name)

set_forced_mode(tool_choice=None)

Mark this context as being in forced structured output mode.

Parameters:

Name Type Description Default
tool_choice dict | None

Optional tool choice configuration.

None
Source code in strands/tools/structured_output/_structured_output_context.py
def set_forced_mode(self, tool_choice: dict | None = None) -> None:
    """Mark this context as being in forced structured output mode.

    Args:
        tool_choice: Optional tool choice configuration.
    """
    if not self.is_enabled:
        return
    self.forced_mode = True
    self.force_attempted = True
    self.tool_choice = tool_choice or {"any": {}}

store_result(tool_use_id, result)

Store a validated structured output result.

Parameters:

Name Type Description Default
tool_use_id str

Unique identifier for the tool use.

required
result BaseModel

Validated Pydantic model instance.

required
Source code in strands/tools/structured_output/_structured_output_context.py
def store_result(self, tool_use_id: str, result: BaseModel) -> None:
    """Store a validated structured output result.

    Args:
        tool_use_id: Unique identifier for the tool use.
        result: Validated Pydantic model instance.
    """
    self.results[tool_use_id] = result

SystemContentBlock

Bases: TypedDict

Contains configurations for instructions to provide the model for how to handle input.

Attributes:

Name Type Description
cachePoint CachePoint

A cache point configuration to optimize conversation history.

text str

A system prompt for the model.

Source code in strands/types/content.py
class SystemContentBlock(TypedDict, total=False):
    """Contains configurations for instructions to provide the model for how to handle input.

    Attributes:
        cachePoint: A cache point configuration to optimize conversation history.
        text: A system prompt for the model.
    """

    cachePoint: CachePoint
    text: str
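Because SystemContentBlock is a `total=False` TypedDict, every key is optional. A small sketch, redefining the types locally so it runs standalone (the `CachePoint` field name here is assumed for illustration; see the SDK's own `CachePoint` type):

```python
from typing import TypedDict

class CachePoint(TypedDict, total=False):
    type: str  # assumed field for illustration

class SystemContentBlock(TypedDict, total=False):
    cachePoint: CachePoint
    text: str

# Keys may be supplied independently or omitted entirely.
text_only: SystemContentBlock = {"text": "You are a helpful assistant."}
with_cache: SystemContentBlock = {
    "text": "Long shared preamble...",
    "cachePoint": {"type": "default"},
}
```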

ToolExecutor

Bases: ABC

Abstract base class for tool executors.

Source code in strands/tools/executors/_executor.py
class ToolExecutor(abc.ABC):
    """Abstract base class for tool executors."""

    @staticmethod
    def _is_agent(agent: "Agent | BidiAgent") -> bool:
        """Check if the agent is an Agent instance, otherwise we assume BidiAgent.

        Note, we use a runtime import to avoid a circular dependency error.
        """
        from ...agent import Agent

        return isinstance(agent, Agent)

    @staticmethod
    async def _invoke_before_tool_call_hook(
        agent: "Agent | BidiAgent",
        tool_func: Any,
        tool_use: ToolUse,
        invocation_state: dict[str, Any],
    ) -> tuple[BeforeToolCallEvent | BidiBeforeToolCallEvent, list[Interrupt]]:
        """Invoke the appropriate before tool call hook based on agent type."""
        kwargs = {
            "selected_tool": tool_func,
            "tool_use": tool_use,
            "invocation_state": invocation_state,
        }
        event = (
            BeforeToolCallEvent(agent=cast("Agent", agent), **kwargs)
            if ToolExecutor._is_agent(agent)
            else BidiBeforeToolCallEvent(agent=cast("BidiAgent", agent), **kwargs)
        )

        return await agent.hooks.invoke_callbacks_async(event)

    @staticmethod
    async def _invoke_after_tool_call_hook(
        agent: "Agent | BidiAgent",
        selected_tool: Any,
        tool_use: ToolUse,
        invocation_state: dict[str, Any],
        result: ToolResult,
        exception: Exception | None = None,
        cancel_message: str | None = None,
    ) -> tuple[AfterToolCallEvent | BidiAfterToolCallEvent, list[Interrupt]]:
        """Invoke the appropriate after tool call hook based on agent type."""
        kwargs = {
            "selected_tool": selected_tool,
            "tool_use": tool_use,
            "invocation_state": invocation_state,
            "result": result,
            "exception": exception,
            "cancel_message": cancel_message,
        }
        event = (
            AfterToolCallEvent(agent=cast("Agent", agent), **kwargs)
            if ToolExecutor._is_agent(agent)
            else BidiAfterToolCallEvent(agent=cast("BidiAgent", agent), **kwargs)
        )

        return await agent.hooks.invoke_callbacks_async(event)

    @staticmethod
    async def _stream(
        agent: "Agent | BidiAgent",
        tool_use: ToolUse,
        tool_results: list[ToolResult],
        invocation_state: dict[str, Any],
        structured_output_context: StructuredOutputContext | None = None,
        **kwargs: Any,
    ) -> AsyncGenerator[TypedEvent, None]:
        """Stream tool events.

        This method adds additional logic to the stream invocation including:

        - Tool lookup and validation
        - Before/after hook execution
        - Tracing and metrics collection
        - Error handling and recovery
        - Interrupt handling for human-in-the-loop workflows

        Args:
            agent: The agent (Agent or BidiAgent) for which the tool is being executed.
            tool_use: Metadata and inputs for the tool to be executed.
            tool_results: List of tool results from each tool execution.
            invocation_state: Context for the tool invocation.
            structured_output_context: Context for structured output management.
            **kwargs: Additional keyword arguments for future extensibility.

        Yields:
            Tool events with the last being the tool result.
        """
        logger.debug("tool_use=<%s> | streaming", tool_use)
        tool_name = tool_use["name"]
        structured_output_context = structured_output_context or StructuredOutputContext()

        tool_info = agent.tool_registry.dynamic_tools.get(tool_name)
        tool_func = tool_info if tool_info is not None else agent.tool_registry.registry.get(tool_name)
        tool_spec = tool_func.tool_spec if tool_func is not None else None

        current_span = trace_api.get_current_span()
        if current_span and tool_spec is not None:
            current_span.set_attribute("gen_ai.tool.description", tool_spec["description"])
            input_schema = tool_spec["inputSchema"]
            if "json" in input_schema:
                current_span.set_attribute("gen_ai.tool.json_schema", serialize(input_schema["json"]))

        invocation_state.update(
            {
                "agent": agent,
                "model": agent.model,
                "messages": agent.messages,
                "system_prompt": agent.system_prompt,
                "tool_config": ToolConfig(  # for backwards compatibility
                    tools=[{"toolSpec": tool_spec} for tool_spec in agent.tool_registry.get_all_tool_specs()],
                    toolChoice=cast(ToolChoice, {"auto": ToolChoiceAuto()}),
                ),
            }
        )

        # Retry loop for tool execution - hooks can set after_event.retry = True to retry
        while True:
            before_event, interrupts = await ToolExecutor._invoke_before_tool_call_hook(
                agent, tool_func, tool_use, invocation_state
            )

            if interrupts:
                yield ToolInterruptEvent(tool_use, interrupts)
                return

            if before_event.cancel_tool:
                cancel_message = (
                    before_event.cancel_tool if isinstance(before_event.cancel_tool, str) else "tool cancelled by user"
                )
                yield ToolCancelEvent(tool_use, cancel_message)

                cancel_result: ToolResult = {
                    "toolUseId": str(tool_use.get("toolUseId")),
                    "status": "error",
                    "content": [{"text": cancel_message}],
                }

                after_event, _ = await ToolExecutor._invoke_after_tool_call_hook(
                    agent, None, tool_use, invocation_state, cancel_result, cancel_message=cancel_message
                )
                yield ToolResultEvent(after_event.result)
                tool_results.append(after_event.result)
                return

            try:
                selected_tool = before_event.selected_tool
                tool_use = before_event.tool_use
                invocation_state = before_event.invocation_state

                if not selected_tool:
                    if tool_func == selected_tool:
                        logger.error(
                            "tool_name=<%s>, available_tools=<%s> | tool not found in registry",
                            tool_name,
                            list(agent.tool_registry.registry.keys()),
                        )
                    else:
                        logger.debug(
                            "tool_name=<%s>, tool_use_id=<%s> | a hook resulted in a non-existing tool call",
                            tool_name,
                            str(tool_use.get("toolUseId")),
                        )

                    result: ToolResult = {
                        "toolUseId": str(tool_use.get("toolUseId")),
                        "status": "error",
                        "content": [{"text": f"Unknown tool: {tool_name}"}],
                    }

                    after_event, _ = await ToolExecutor._invoke_after_tool_call_hook(
                        agent, selected_tool, tool_use, invocation_state, result
                    )
                    # Check if retry requested for unknown tool error
                    # Use getattr because BidiAfterToolCallEvent doesn't have retry attribute
                    if getattr(after_event, "retry", False):
                        logger.debug("tool_name=<%s> | retry requested, retrying tool call", tool_name)
                        continue
                    yield ToolResultEvent(after_event.result)
                    tool_results.append(after_event.result)
                    return
                if structured_output_context.is_enabled:
                    kwargs["structured_output_context"] = structured_output_context
                async for event in selected_tool.stream(tool_use, invocation_state, **kwargs):
                    # Internal optimization: built-in AgentTools yield TypedEvents from .stream(),
                    # so we avoid wrapping them in needless ToolStreamEvents for non-generator
                    # callbacks. In that case, a ToolResultEvent ends the stream and a
                    # ToolStreamEvent is yielded directly; events from non-SDK AgentTools are
                    # wrapped in ToolStreamEvent, with the last event being the result.

                    if isinstance(event, ToolInterruptEvent):
                        yield event
                        return

                    if isinstance(event, ToolResultEvent):
                        # after the loop, `event` must point to the tool_result
                        event = event.tool_result
                        break

                    if isinstance(event, ToolStreamEvent):
                        yield event
                    else:
                        yield ToolStreamEvent(tool_use, event)

                result = cast(ToolResult, event)

                after_event, _ = await ToolExecutor._invoke_after_tool_call_hook(
                    agent, selected_tool, tool_use, invocation_state, result
                )

                # Check if retry requested (getattr for BidiAfterToolCallEvent compatibility)
                if getattr(after_event, "retry", False):
                    logger.debug("tool_name=<%s> | retry requested, retrying tool call", tool_name)
                    continue

                yield ToolResultEvent(after_event.result)
                tool_results.append(after_event.result)
                return

            except Exception as e:
                logger.exception("tool_name=<%s> | failed to process tool", tool_name)
                error_result: ToolResult = {
                    "toolUseId": str(tool_use.get("toolUseId")),
                    "status": "error",
                    "content": [{"text": f"Error: {str(e)}"}],
                }

                after_event, _ = await ToolExecutor._invoke_after_tool_call_hook(
                    agent, selected_tool, tool_use, invocation_state, error_result, exception=e
                )
                # Check if retry requested (getattr for BidiAfterToolCallEvent compatibility)
                if getattr(after_event, "retry", False):
                    logger.debug("tool_name=<%s> | retry requested after exception, retrying tool call", tool_name)
                    continue
                yield ToolResultEvent(after_event.result)
                tool_results.append(after_event.result)
                return

    @staticmethod
    async def _stream_with_trace(
        agent: "Agent",
        tool_use: ToolUse,
        tool_results: list[ToolResult],
        cycle_trace: Trace,
        cycle_span: Any,
        invocation_state: dict[str, Any],
        structured_output_context: StructuredOutputContext | None = None,
        **kwargs: Any,
    ) -> AsyncGenerator[TypedEvent, None]:
        """Execute tool with tracing and metrics collection.

        Args:
            agent: The agent for which the tool is being executed.
            tool_use: Metadata and inputs for the tool to be executed.
            tool_results: List of tool results from each tool execution.
            cycle_trace: Trace object for the current event loop cycle.
            cycle_span: Span object for tracing the cycle.
            invocation_state: Context for the tool invocation.
            structured_output_context: Context for structured output management.
            **kwargs: Additional keyword arguments for future extensibility.

        Yields:
            Tool events with the last being the tool result.
        """
        tool_name = tool_use["name"]
        structured_output_context = structured_output_context or StructuredOutputContext()

        tracer = get_tracer()

        tool_call_span = tracer.start_tool_call_span(
            tool_use, cycle_span, custom_trace_attributes=agent.trace_attributes
        )
        tool_trace = Trace(f"Tool: {tool_name}", parent_id=cycle_trace.id, raw_name=tool_name)
        tool_start_time = time.time()

        with trace_api.use_span(tool_call_span):
            async for event in ToolExecutor._stream(
                agent, tool_use, tool_results, invocation_state, structured_output_context, **kwargs
            ):
                yield event

            if isinstance(event, ToolInterruptEvent):
                tracer.end_tool_call_span(tool_call_span, tool_result=None)
                return

            result_event = cast(ToolResultEvent, event)
            result = result_event.tool_result

            tool_success = result.get("status") == "success"
            tool_duration = time.time() - tool_start_time
            message = Message(role="user", content=[{"toolResult": result}])
            if ToolExecutor._is_agent(agent):
                agent.event_loop_metrics.add_tool_usage(tool_use, tool_duration, tool_trace, tool_success, message)
            cycle_trace.add_child(tool_trace)

            tracer.end_tool_call_span(tool_call_span, result)

    @abc.abstractmethod
    # pragma: no cover
    def _execute(
        self,
        agent: "Agent",
        tool_uses: list[ToolUse],
        tool_results: list[ToolResult],
        cycle_trace: Trace,
        cycle_span: Any,
        invocation_state: dict[str, Any],
        structured_output_context: "StructuredOutputContext | None" = None,
    ) -> AsyncGenerator[TypedEvent, None]:
        """Execute the given tools according to this executor's strategy.

        Args:
            agent: The agent for which tools are being executed.
            tool_uses: Metadata and inputs for the tools to be executed.
            tool_results: List of tool results from each tool execution.
            cycle_trace: Trace object for the current event loop cycle.
            cycle_span: Span object for tracing the cycle.
            invocation_state: Context for the tool invocation.
            structured_output_context: Context for structured output management.

        Yields:
            Events from the tool execution stream.
        """
        pass

ToolProvider

Bases: ABC

Interface for providing tools with lifecycle management.

Provides a way to load a collection of tools and clean them up when done, with lifecycle managed by the agent.

Source code in strands/tools/tool_provider.py
class ToolProvider(ABC):
    """Interface for providing tools with lifecycle management.

    Provides a way to load a collection of tools and clean them up
    when done, with lifecycle managed by the agent.
    """

    @abstractmethod
    async def load_tools(self, **kwargs: Any) -> Sequence["AgentTool"]:
        """Load and return the tools in this provider.

        Args:
            **kwargs: Additional arguments for future compatibility.

        Returns:
            List of tools that are ready to use.
        """
        pass

    @abstractmethod
    def add_consumer(self, consumer_id: Any, **kwargs: Any) -> None:
        """Add a consumer to this tool provider.

        Args:
            consumer_id: Unique identifier for the consumer.
            **kwargs: Additional arguments for future compatibility.
        """
        pass

    @abstractmethod
    def remove_consumer(self, consumer_id: Any, **kwargs: Any) -> None:
        """Remove a consumer from this tool provider.

        This method must be idempotent - calling it multiple times with the same ID
        should have no additional effect after the first call.

        Provider may clean up resources when no consumers remain.

        Args:
            consumer_id: Unique identifier for the consumer.
            **kwargs: Additional arguments for future compatibility.
        """
        pass
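A concrete provider implements the three methods above and typically releases resources once its last consumer is removed. Below is a hedged, self-contained sketch of that lifecycle; `StaticToolProvider` and `FakeTool` are hypothetical stand-ins (FakeTool mimics an AgentTool's `tool_name` attribute) and do not use the real strands classes.

```python
# Minimal ToolProvider-style implementation: tracks consumer ids and
# marks itself closed when the last consumer is removed.
import asyncio
from typing import Any, Sequence


class FakeTool:
    """Hypothetical stand-in for strands' AgentTool."""

    def __init__(self, name: str) -> None:
        self.tool_name = name


class StaticToolProvider:
    def __init__(self, tools: Sequence[FakeTool]) -> None:
        self._tools = list(tools)
        self._consumers: set[Any] = set()
        self.closed = False

    async def load_tools(self, **kwargs: Any) -> Sequence[FakeTool]:
        return self._tools

    def add_consumer(self, consumer_id: Any, **kwargs: Any) -> None:
        self._consumers.add(consumer_id)

    def remove_consumer(self, consumer_id: Any, **kwargs: Any) -> None:
        # Idempotent: discard() is a no-op if the id is already gone.
        self._consumers.discard(consumer_id)
        if not self._consumers:
            self.closed = True  # release resources once nobody is left


provider = StaticToolProvider([FakeTool("echo")])
provider.add_consumer("agent-1")
print([t.tool_name for t in asyncio.run(provider.load_tools())])  # → ['echo']
provider.remove_consumer("agent-1")
provider.remove_consumer("agent-1")  # safe: remove_consumer is idempotent
print(provider.closed)  # → True
```

The agent manages this lifecycle for you: `ToolRegistry.process_tools` calls `add_consumer` with its registry id when a provider is registered.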

add_consumer(consumer_id, **kwargs) abstractmethod

Add a consumer to this tool provider.

Parameters:

Name Type Description Default
consumer_id Any

Unique identifier for the consumer.

required
**kwargs Any

Additional arguments for future compatibility.

{}
Source code in strands/tools/tool_provider.py
@abstractmethod
def add_consumer(self, consumer_id: Any, **kwargs: Any) -> None:
    """Add a consumer to this tool provider.

    Args:
        consumer_id: Unique identifier for the consumer.
        **kwargs: Additional arguments for future compatibility.
    """
    pass

load_tools(**kwargs) abstractmethod async

Load and return the tools in this provider.

Parameters:

Name Type Description Default
**kwargs Any

Additional arguments for future compatibility.

{}

Returns:

Type Description
Sequence[AgentTool]

List of tools that are ready to use.

Source code in strands/tools/tool_provider.py
@abstractmethod
async def load_tools(self, **kwargs: Any) -> Sequence["AgentTool"]:
    """Load and return the tools in this provider.

    Args:
        **kwargs: Additional arguments for future compatibility.

    Returns:
        List of tools that are ready to use.
    """
    pass

remove_consumer(consumer_id, **kwargs) abstractmethod

Remove a consumer from this tool provider.

This method must be idempotent - calling it multiple times with the same ID should have no additional effect after the first call.

Provider may clean up resources when no consumers remain.

Parameters:

Name Type Description Default
consumer_id Any

Unique identifier for the consumer.

required
**kwargs Any

Additional arguments for future compatibility.

{}
Source code in strands/tools/tool_provider.py
@abstractmethod
def remove_consumer(self, consumer_id: Any, **kwargs: Any) -> None:
    """Remove a consumer from this tool provider.

    This method must be idempotent - calling it multiple times with the same ID
    should have no additional effect after the first call.

    Provider may clean up resources when no consumers remain.

    Args:
        consumer_id: Unique identifier for the consumer.
        **kwargs: Additional arguments for future compatibility.
    """
    pass

ToolRegistry

Central registry for all tools available to the agent.

This class manages tool registration, validation, discovery, and invocation.

Source code in strands/tools/registry.py
class ToolRegistry:
    """Central registry for all tools available to the agent.

    This class manages tool registration, validation, discovery, and invocation.
    """

    def __init__(self) -> None:
        """Initialize the tool registry."""
        self.registry: dict[str, AgentTool] = {}
        self.dynamic_tools: dict[str, AgentTool] = {}
        self.tool_config: dict[str, Any] | None = None
        self._tool_providers: list[ToolProvider] = []
        self._registry_id = str(uuid.uuid4())

    def process_tools(self, tools: list[Any]) -> list[str]:
        """Process tools list.

        Process list of tools that can contain local file path string, module import path string,
        imported modules, @tool decorated functions, or instances of AgentTool.

        Args:
            tools: List of tool specifications. Can be:

                1. Local file path to a module based tool: `./path/to/module/tool.py`
                2. Module import path

                    2.1. Path to a module based tool: `strands_tools.file_read`
                    2.2. Path to a module with multiple AgentTool instances (@tool decorated):
                        `tests.fixtures.say_tool`
                    2.3. Path to a module and a specific function: `tests.fixtures.say_tool:say`

                3. A module for a module based tool
                4. Instances of AgentTool (@tool decorated functions)
                5. Dictionaries with name/path keys (deprecated)


        Returns:
            List of tool names that were processed.
        """
        tool_names = []

        def add_tool(tool: Any) -> None:
            try:
                # String based tool
                # Can be a file path, a module path, or a module path with a targeted function. Examples:
                # './path/to/tool.py'
                # 'my.module.tool'
                # 'my.module.tool:tool_name'
                if isinstance(tool, str):
                    tools = load_tool_from_string(tool)
                    for a_tool in tools:
                        a_tool.mark_dynamic()
                        self.register_tool(a_tool)
                        tool_names.append(a_tool.tool_name)

                # Dictionary with name and path
                elif isinstance(tool, dict) and "name" in tool and "path" in tool:
                    tools = load_tool_from_string(tool["path"])

                    tool_found = False
                    for a_tool in tools:
                        if a_tool.tool_name == tool["name"]:
                            a_tool.mark_dynamic()
                            self.register_tool(a_tool)
                            tool_names.append(a_tool.tool_name)
                            tool_found = True

                    if not tool_found:
                        raise ValueError(f'Tool "{tool["name"]}" not found in "{tool["path"]}"')

                # Dictionary with path only
                elif isinstance(tool, dict) and "path" in tool:
                    tools = load_tool_from_string(tool["path"])

                    for a_tool in tools:
                        a_tool.mark_dynamic()
                        self.register_tool(a_tool)
                        tool_names.append(a_tool.tool_name)

                # Imported Python module
                elif hasattr(tool, "__file__") and inspect.ismodule(tool):
                    # Extract the tool name from the module name
                    module_tool_name = tool.__name__.split(".")[-1]

                    tools = load_tools_from_module(tool, module_tool_name)
                    for a_tool in tools:
                        self.register_tool(a_tool)
                        tool_names.append(a_tool.tool_name)

                # Case 5: AgentTools (which also covers @tool)
                elif isinstance(tool, AgentTool):
                    self.register_tool(tool)
                    tool_names.append(tool.tool_name)

                # Case 6: Nested iterable (list, tuple, etc.) - add each sub-tool
                elif isinstance(tool, Iterable) and not isinstance(tool, (str, bytes, bytearray)):
                    for t in tool:
                        add_tool(t)

                # Case 7: ToolProvider
                elif isinstance(tool, ToolProvider):
                    self._tool_providers.append(tool)
                    tool.add_consumer(self._registry_id)

                    async def get_tools() -> Sequence[AgentTool]:
                        return await tool.load_tools()

                    provider_tools = run_async(get_tools)

                    for provider_tool in provider_tools:
                        self.register_tool(provider_tool)
                        tool_names.append(provider_tool.tool_name)
                else:
                    logger.warning("tool=<%s> | unrecognized tool specification", tool)

            except Exception as e:
                exception_str = str(e)
                logger.exception("tool_name=<%s> | failed to load tool", tool)
                raise ValueError(f"Failed to load tool {tool}: {exception_str}") from e

        for tool in tools:
            add_tool(tool)
        return tool_names

    def load_tool_from_filepath(self, tool_name: str, tool_path: str) -> None:
        """DEPRECATED: Load a tool from a file path.

        Args:
            tool_name: Name of the tool.
            tool_path: Path to the tool file.

        Raises:
            FileNotFoundError: If the tool file is not found.
            ValueError: If the tool cannot be loaded.
        """
        warnings.warn(
            "load_tool_from_filepath is deprecated and will be removed in Strands SDK 2.0. "
            "`process_tools` automatically handles loading tools from a filepath.",
            DeprecationWarning,
            stacklevel=2,
        )

        from .loader import ToolLoader

        try:
            tool_path = expanduser(tool_path)
            if not os.path.exists(tool_path):
                raise FileNotFoundError(f"Tool file not found: {tool_path}")

            loaded_tools = ToolLoader.load_tools(tool_path, tool_name)
            for t in loaded_tools:
                t.mark_dynamic()
                # Because we're explicitly registering the tool we don't need an allowlist
                self.register_tool(t)
        except Exception as e:
            exception_str = str(e)
            logger.exception("tool_name=<%s> | failed to load tool", tool_name)
            raise ValueError(f"Failed to load tool {tool_name}: {exception_str}") from e

    def get_all_tools_config(self) -> dict[str, Any]:
        """Dynamically generate tool configuration by combining built-in and dynamic tools.

        Returns:
            Dictionary containing all tool configurations.
        """
        tool_config = {}
        logger.debug("getting tool configurations")

        # Add all registered tools
        for tool_name, tool in self.registry.items():
            # Make a deep copy to avoid modifying the original
            spec = tool.tool_spec.copy()
            try:
                # Normalize the schema before validation
                spec = normalize_tool_spec(spec)
                self.validate_tool_spec(spec)
                tool_config[tool_name] = spec
                logger.debug("tool_name=<%s> | loaded tool config", tool_name)
            except ValueError as e:
                logger.warning("tool_name=<%s> | spec validation failed | %s", tool_name, e)

        # Add any dynamic tools
        for tool_name, tool in self.dynamic_tools.items():
            if tool_name not in tool_config:
                # Make a deep copy to avoid modifying the original
                spec = tool.tool_spec.copy()
                try:
                    # Normalize the schema before validation
                    spec = normalize_tool_spec(spec)
                    self.validate_tool_spec(spec)
                    tool_config[tool_name] = spec
                    logger.debug("tool_name=<%s> | loaded dynamic tool config", tool_name)
                except ValueError as e:
                    logger.warning("tool_name=<%s> | dynamic tool spec validation failed | %s", tool_name, e)

        logger.debug("tool_count=<%s> | tools configured", len(tool_config))
        return tool_config

    # mypy has problems converting between DecoratedFunctionTool <-> AgentTool
    def register_tool(self, tool: AgentTool) -> None:
        """Register a tool function with the given name.

        Args:
            tool: The tool to register.
        """
        logger.debug(
            "tool_name=<%s>, tool_type=<%s>, is_dynamic=<%s> | registering tool",
            tool.tool_name,
            tool.tool_type,
            tool.is_dynamic,
        )

        # Check for duplicate tool names; raise unless the tool supports hot reloading
        if tool.tool_name in self.registry and not tool.supports_hot_reload:
            raise ValueError(
                f"Tool name '{tool.tool_name}' already exists. Cannot register tools with exact same name."
            )

        # Check for normalized name conflicts (- vs _)
        if self.registry.get(tool.tool_name) is None:
            normalized_name = tool.tool_name.replace("-", "_")

            matching_tools = [
                tool_name
                for (tool_name, tool) in self.registry.items()
                if tool_name.replace("-", "_") == normalized_name
            ]

            if matching_tools:
                raise ValueError(
                    f"Tool name '{tool.tool_name}' already exists as '{matching_tools[0]}'."
                    " Cannot add a duplicate tool which differs by a '-' or '_'"
                )

        # Register in main registry
        self.registry[tool.tool_name] = tool

        # Register in dynamic tools if applicable
        if tool.is_dynamic:
            self.dynamic_tools[tool.tool_name] = tool

            if not tool.supports_hot_reload:
                logger.debug("tool_name=<%s>, tool_type=<%s> | skipping hot reloading", tool.tool_name, tool.tool_type)
                return

            logger.debug(
                "tool_name=<%s>, tool_registry=<%s>, dynamic_tools=<%s> | tool registered",
                tool.tool_name,
                list(self.registry.keys()),
                list(self.dynamic_tools.keys()),
            )

    def replace(self, new_tool: AgentTool) -> None:
        """Replace an existing tool with a new implementation.

        This performs a swap of the tool implementation in the registry.
        The replacement takes effect on the next agent invocation.

        Args:
            new_tool: New tool implementation. Its name must match the tool being replaced.

        Raises:
            ValueError: If the tool doesn't exist.
        """
        tool_name = new_tool.tool_name

        if tool_name not in self.registry:
            raise ValueError(f"Cannot replace tool '{tool_name}' - tool does not exist")

        # Update main registry
        self.registry[tool_name] = new_tool

        # Update dynamic_tools to match new tool's dynamic status
        if new_tool.is_dynamic:
            self.dynamic_tools[tool_name] = new_tool
        elif tool_name in self.dynamic_tools:
            del self.dynamic_tools[tool_name]

    def get_tools_dirs(self) -> list[Path]:
        """Get all tool directory paths.

        Returns:
            A list of Path objects for the current working directory's "./tools/" directory.
        """
        # Current working directory's tools directory
        cwd_tools_dir = Path.cwd() / "tools"

        # Return all directories that exist
        tool_dirs = []
        for directory in [cwd_tools_dir]:
            if directory.exists() and directory.is_dir():
                tool_dirs.append(directory)
                logger.debug("tools_dir=<%s> | found tools directory", directory)
            else:
                logger.debug("tools_dir=<%s> | tools directory not found", directory)

        return tool_dirs

    def discover_tool_modules(self) -> dict[str, Path]:
        """Discover available tool modules in all tools directories.

        Returns:
            Dictionary mapping tool names to their full paths.
        """
        tool_modules = {}
        tools_dirs = self.get_tools_dirs()

        for tools_dir in tools_dirs:
            logger.debug("tools_dir=<%s> | scanning", tools_dir)

            # Find Python tools
            for extension in ["*.py"]:
                for item in tools_dir.glob(extension):
                    if item.is_file() and not item.name.startswith("__"):
                        module_name = item.stem
                        # If tool already exists, newer paths take precedence
                        if module_name in tool_modules:
                            logger.debug("tools_dir=<%s>, module_name=<%s> | tool overridden", tools_dir, module_name)
                        tool_modules[module_name] = item

        logger.debug("tool_modules=<%s> | discovered", list(tool_modules.keys()))
        return tool_modules

    def reload_tool(self, tool_name: str) -> None:
        """Reload a specific tool module.

        Args:
            tool_name: Name of the tool to reload.

        Raises:
            FileNotFoundError: If the tool file cannot be found.
            ImportError: If there are issues importing the tool module.
            ValueError: If the tool specification is invalid or required components are missing.
            Exception: For other errors during tool reloading.
        """
        try:
            # Check for tool file
            logger.debug("tool_name=<%s> | searching directories for tool", tool_name)
            tools_dirs = self.get_tools_dirs()
            tool_path = None

            # Search for the tool file in all tool directories
            for tools_dir in tools_dirs:
                temp_path = tools_dir / f"{tool_name}.py"
                if temp_path.exists():
                    tool_path = temp_path
                    break

            if not tool_path:
                raise FileNotFoundError(f"No tool file found for: {tool_name}")

            logger.debug("tool_name=<%s> | reloading tool", tool_name)

            # Add tool directory to path temporarily
            tool_dir = str(tool_path.parent)
            sys.path.insert(0, tool_dir)
            try:
                # Load the module directly using spec
                spec = util.spec_from_file_location(tool_name, str(tool_path))
                if spec is None:
                    raise ImportError(f"Could not load spec for {tool_name}")

                module = util.module_from_spec(spec)
                sys.modules[tool_name] = module

                if spec.loader is None:
                    raise ImportError(f"Could not load {tool_name}")

                spec.loader.exec_module(module)

            finally:
                # Remove the temporary path
                sys.path.remove(tool_dir)

            # Look for function-based tools first
            try:
                function_tools = self._scan_module_for_tools(module)

                if function_tools:
                    for function_tool in function_tools:
                        # Register the function-based tool
                        self.register_tool(function_tool)

                        # Update tool configuration if available
                        if self.tool_config is not None:
                            self._update_tool_config(self.tool_config, {"spec": function_tool.tool_spec})

                    logger.debug("tool_name=<%s> | successfully reloaded function-based tool from module", tool_name)
                    return
            except ImportError:
                logger.debug("function tool loader not available | falling back to traditional tools")

            # Fall back to traditional module-level tools
            if not hasattr(module, "TOOL_SPEC"):
                raise ValueError(
                    f"Tool {tool_name} is missing TOOL_SPEC (neither at module level nor as a decorated function)"
                )

            expected_func_name = tool_name
            if not hasattr(module, expected_func_name):
                raise ValueError(f"Tool {tool_name} is missing {expected_func_name} function")

            tool_function = getattr(module, expected_func_name)
            if not callable(tool_function):
                raise ValueError(f"Tool {tool_name} function is not callable")

            # Validate tool spec
            self.validate_tool_spec(module.TOOL_SPEC)

            new_tool = PythonAgentTool(tool_name, module.TOOL_SPEC, tool_function)

            # Register the tool
            self.register_tool(new_tool)

            # Update tool configuration if available
            if self.tool_config is not None:
                self._update_tool_config(self.tool_config, {"spec": module.TOOL_SPEC})
            logger.debug("tool_name=<%s> | successfully reloaded tool", tool_name)

        except Exception:
            logger.exception("tool_name=<%s> | failed to reload tool", tool_name)
            raise

    def initialize_tools(self, load_tools_from_directory: bool = False) -> None:
        """Initialize all tools by discovering and loading them dynamically from all tool directories.

        Args:
            load_tools_from_directory: Whether to load tools from the discovered tool directories.
        """
        self.tool_config = None

        # Then discover and load other tools
        tool_modules = self.discover_tool_modules()
        successful_loads = 0
        total_tools = len(tool_modules)
        tool_import_errors = {}

        # Process Python tools
        for tool_name, tool_path in tool_modules.items():
            if tool_name in ["__init__"]:
                continue

            if not load_tools_from_directory:
                continue

            try:
                # Add directory to path temporarily
                tool_dir = str(tool_path.parent)
                sys.path.insert(0, tool_dir)
                try:
                    module = import_module(tool_name)
                finally:
                    if tool_dir in sys.path:
                        sys.path.remove(tool_dir)

                # Process Python tool
                if tool_path.suffix == ".py":
                    # Check for decorated function tools first
                    try:
                        function_tools = self._scan_module_for_tools(module)

                        if function_tools:
                            for function_tool in function_tools:
                                self.register_tool(function_tool)
                                successful_loads += 1
                        else:
                            # Fall back to traditional tools
                            # Check for expected tool function
                            expected_func_name = tool_name
                            if hasattr(module, expected_func_name):
                                tool_function = getattr(module, expected_func_name)
                                if not callable(tool_function):
                                    logger.warning(
                                        "tool_name=<%s> | tool function exists but is not callable", tool_name
                                    )
                                    continue

                                # Validate tool spec before registering
                                if not hasattr(module, "TOOL_SPEC"):
                                    logger.warning("tool_name=<%s> | tool is missing TOOL_SPEC | skipping", tool_name)
                                    continue

                                try:
                                    self.validate_tool_spec(module.TOOL_SPEC)
                                except ValueError as e:
                                    logger.warning("tool_name=<%s> | tool spec validation failed | %s", tool_name, e)
                                    continue

                                tool_spec = module.TOOL_SPEC
                                tool = PythonAgentTool(tool_name, tool_spec, tool_function)
                                self.register_tool(tool)
                                successful_loads += 1

                            else:
                                logger.warning("tool_name=<%s> | tool function missing", tool_name)
                    except ImportError:
                        # Function tool loader not available, fall back to traditional tools
                        # Check for expected tool function
                        expected_func_name = tool_name
                        if hasattr(module, expected_func_name):
                            tool_function = getattr(module, expected_func_name)
                            if not callable(tool_function):
                                logger.warning("tool_name=<%s> | tool function exists but is not callable", tool_name)
                                continue

                            # Validate tool spec before registering
                            if not hasattr(module, "TOOL_SPEC"):
                                logger.warning("tool_name=<%s> | tool is missing TOOL_SPEC | skipping", tool_name)
                                continue

                            try:
                                self.validate_tool_spec(module.TOOL_SPEC)
                            except ValueError as e:
                                logger.warning("tool_name=<%s> | tool spec validation failed | %s", tool_name, e)
                                continue

                            tool_spec = module.TOOL_SPEC
                            tool = PythonAgentTool(tool_name, tool_spec, tool_function)
                            self.register_tool(tool)
                            successful_loads += 1

                        else:
                            logger.warning("tool_name=<%s> | tool function missing", tool_name)

            except Exception as e:
                logger.warning("tool_name=<%s> | failed to load tool | %s", tool_name, e)
                tool_import_errors[tool_name] = str(e)

        # Log summary
        logger.debug("tool_count=<%d>, success_count=<%d> | finished loading tools", total_tools, successful_loads)
        if tool_import_errors:
            for tool_name, error in tool_import_errors.items():
                logger.debug("tool_name=<%s> | import error | %s", tool_name, error)

    def get_all_tool_specs(self) -> list[ToolSpec]:
        """Get all the tool specs for all tools in this registry.

        Returns:
            A list of ToolSpecs.
        """
        all_tools = self.get_all_tools_config()
        tools: list[ToolSpec] = [tool_spec for tool_spec in all_tools.values()]
        return tools

    def register_dynamic_tool(self, tool: AgentTool) -> None:
        """Register a tool dynamically for temporary use.

        Args:
            tool: The tool to register dynamically

        Raises:
            ValueError: If a tool with this name already exists
        """
        if tool.tool_name in self.registry or tool.tool_name in self.dynamic_tools:
            raise ValueError(f"Tool '{tool.tool_name}' already exists")

        self.dynamic_tools[tool.tool_name] = tool
        logger.debug("Registered dynamic tool: %s", tool.tool_name)

    def validate_tool_spec(self, tool_spec: ToolSpec) -> None:
        """Validate tool specification against required schema.

        Args:
            tool_spec: Tool specification to validate.

        Raises:
            ValueError: If the specification is invalid.
        """
        required_fields = ["name", "description"]
        missing_fields = [field for field in required_fields if field not in tool_spec]
        if missing_fields:
            raise ValueError(f"Missing required fields in tool spec: {', '.join(missing_fields)}")

        if "json" not in tool_spec["inputSchema"]:
            # Convert direct schema to proper format
            json_schema = normalize_schema(tool_spec["inputSchema"])
            tool_spec["inputSchema"] = {"json": json_schema}
            return

        # Validate json schema fields
        json_schema = tool_spec["inputSchema"]["json"]

        # Ensure schema has required fields
        if "type" not in json_schema:
            json_schema["type"] = "object"
        if "properties" not in json_schema:
            json_schema["properties"] = {}
        if "required" not in json_schema:
            json_schema["required"] = []

        # Validate property definitions
        for prop_name, prop_def in json_schema.get("properties", {}).items():
            if not isinstance(prop_def, dict):
                json_schema["properties"][prop_name] = {
                    "type": "string",
                    "description": f"Property {prop_name}",
                }
                continue

            # It is expected that type and description are already included in referenced $def.
            if "$ref" in prop_def:
                continue

            has_composition = any(kw in prop_def for kw in _COMPOSITION_KEYWORDS)
            if "type" not in prop_def and not has_composition:
                prop_def["type"] = "string"
            if "description" not in prop_def:
                prop_def["description"] = f"Property {prop_name}"

    class NewToolDict(TypedDict):
        """Dictionary type for adding or updating a tool in the configuration.

        Attributes:
            spec: The tool specification that defines the tool's interface and behavior.
        """

        spec: ToolSpec

    def _update_tool_config(self, tool_config: dict[str, Any], new_tool: NewToolDict) -> None:
        """Update tool configuration with a new tool.

        Args:
            tool_config: The current tool configuration dictionary.
            new_tool: The new tool to add/update.

        Raises:
            ValueError: If the new tool spec is invalid.
        """
        if not new_tool.get("spec"):
            raise ValueError("Invalid tool format - missing spec")

        # Validate tool spec before updating
        try:
            self.validate_tool_spec(new_tool["spec"])
        except ValueError as e:
            raise ValueError(f"Tool specification validation failed: {str(e)}") from e

        new_tool_name = new_tool["spec"]["name"]
        existing_tool_idx = None

        # Find if tool already exists
        for idx, tool_entry in enumerate(tool_config["tools"]):
            if tool_entry["toolSpec"]["name"] == new_tool_name:
                existing_tool_idx = idx
                break

        # Update existing tool or add new one
        new_tool_entry = {"toolSpec": new_tool["spec"]}
        if existing_tool_idx is not None:
            tool_config["tools"][existing_tool_idx] = new_tool_entry
            logger.debug("tool_name=<%s> | updated existing tool", new_tool_name)
        else:
            tool_config["tools"].append(new_tool_entry)
            logger.debug("tool_name=<%s> | added new tool", new_tool_name)

    def _scan_module_for_tools(self, module: Any) -> list[AgentTool]:
        """Scan a module for function-based tools.

        Args:
            module: The module to scan.

        Returns:
            List of FunctionTool instances found in the module.
        """
        tools: list[AgentTool] = []

        for name, obj in inspect.getmembers(module):
            if isinstance(obj, DecoratedFunctionTool):
                # Create a function tool with correct name
                try:
                    # Cast as AgentTool for mypy
                    tools.append(cast(AgentTool, obj))
                except Exception as e:
                    logger.warning("tool_name=<%s> | failed to create function tool | %s", name, e)

        return tools

    def cleanup(self, **kwargs: Any) -> None:
        """Synchronously clean up all tool providers in this registry."""
        # Attempt cleanup of all providers even if one fails to minimize resource leakage
        exceptions = []
        for provider in self._tool_providers:
            try:
                provider.remove_consumer(self._registry_id)
                logger.debug("provider=<%s> | removed provider consumer", type(provider).__name__)
            except Exception as e:
                exceptions.append(e)
                logger.error(
                    "provider=<%s>, error=<%s> | failed to remove provider consumer", type(provider).__name__, e
                )

        if exceptions:
            raise exceptions[0]

NewToolDict

Bases: TypedDict

Dictionary type for adding or updating a tool in the configuration.

Attributes:

spec (ToolSpec): The tool specification that defines the tool's interface and behavior.

Source code in strands/tools/registry.py
class NewToolDict(TypedDict):
    """Dictionary type for adding or updating a tool in the configuration.

    Attributes:
        spec: The tool specification that defines the tool's interface and behavior.
    """

    spec: ToolSpec

__init__()

Initialize the tool registry.

Source code in strands/tools/registry.py
def __init__(self) -> None:
    """Initialize the tool registry."""
    self.registry: dict[str, AgentTool] = {}
    self.dynamic_tools: dict[str, AgentTool] = {}
    self.tool_config: dict[str, Any] | None = None
    self._tool_providers: list[ToolProvider] = []
    self._registry_id = str(uuid.uuid4())

cleanup(**kwargs)

Synchronously clean up all tool providers in this registry.

Source code in strands/tools/registry.py
def cleanup(self, **kwargs: Any) -> None:
    """Synchronously clean up all tool providers in this registry."""
    # Attempt cleanup of all providers even if one fails to minimize resource leakage
    exceptions = []
    for provider in self._tool_providers:
        try:
            provider.remove_consumer(self._registry_id)
            logger.debug("provider=<%s> | removed provider consumer", type(provider).__name__)
        except Exception as e:
            exceptions.append(e)
            logger.error(
                "provider=<%s>, error=<%s> | failed to remove provider consumer", type(provider).__name__, e
            )

    if exceptions:
        raise exceptions[0]
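
The collect-then-raise pattern above can be exercised on its own: attempt every teardown step, gather failures, and surface the first one only after all steps have run. Below is a minimal sketch; the Closeable class and its fail flag are hypothetical stand-ins for real tool providers, not part of the registry API.

```python
class Closeable:
    def __init__(self, name: str, fail: bool = False) -> None:
        self.name = name
        self.fail = fail
        self.closed = False

    def close(self) -> None:
        if self.fail:
            raise RuntimeError(f"{self.name} failed to close")
        self.closed = True


def cleanup_all(resources: list[Closeable]) -> None:
    exceptions = []
    for resource in resources:
        try:
            resource.close()
        except Exception as e:
            exceptions.append(e)
    if exceptions:
        # Every resource got a cleanup attempt; surface the first failure.
        raise exceptions[0]


good, bad, also_good = Closeable("a"), Closeable("b", fail=True), Closeable("c")
try:
    cleanup_all([good, bad, also_good])
except RuntimeError as e:
    print(e)             # b failed to close
print(also_good.closed)  # True: cleanup continued past the failure
```

The design choice is that a failing provider never prevents later providers from being cleaned up, which minimizes resource leakage at the cost of reporting only the first error.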

discover_tool_modules()

Discover available tool modules in all tools directories.

Returns:

dict[str, Path]: Dictionary mapping tool names to their full paths.

Source code in strands/tools/registry.py
def discover_tool_modules(self) -> dict[str, Path]:
    """Discover available tool modules in all tools directories.

    Returns:
        Dictionary mapping tool names to their full paths.
    """
    tool_modules = {}
    tools_dirs = self.get_tools_dirs()

    for tools_dir in tools_dirs:
        logger.debug("tools_dir=<%s> | scanning", tools_dir)

        # Find Python tools
        for extension in ["*.py"]:
            for item in tools_dir.glob(extension):
                if item.is_file() and not item.name.startswith("__"):
                    module_name = item.stem
                    # If tool already exists, newer paths take precedence
                    if module_name in tool_modules:
                        logger.debug("tools_dir=<%s>, module_name=<%s> | tool overridden", tools_dir, module_name)
                    tool_modules[module_name] = item

    logger.debug("tool_modules=<%s> | discovered", list(tool_modules.keys()))
    return tool_modules
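
The discovery loop can be sketched in isolation with the standard library: glob each tools directory for `*.py` files, skip dunder files, and let directories later in the list override earlier ones on a name collision. The temporary directories below stand in for real `./tools/` folders.

```python
import tempfile
from pathlib import Path


def discover(tools_dirs: list[Path]) -> dict[str, Path]:
    tool_modules: dict[str, Path] = {}
    for tools_dir in tools_dirs:
        for item in tools_dir.glob("*.py"):
            if item.is_file() and not item.name.startswith("__"):
                # Later directories take precedence on name collisions.
                tool_modules[item.stem] = item
    return tool_modules


with tempfile.TemporaryDirectory() as tmp:
    dir_a, dir_b = Path(tmp) / "a", Path(tmp) / "b"
    dir_a.mkdir()
    dir_b.mkdir()
    (dir_a / "calc.py").write_text("def calc(): ...")
    (dir_a / "__init__.py").write_text("")       # skipped: dunder file
    (dir_b / "calc.py").write_text("def calc(): ...")  # overrides dir_a's calc
    modules = discover([dir_a, dir_b])
    print(sorted(modules))              # ['calc']
    print(modules["calc"].parent.name)  # 'b'
```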

get_all_tool_specs()

Get all the tool specs for all tools in this registry.

Returns:

list[ToolSpec]: A list of ToolSpecs.

Source code in strands/tools/registry.py
def get_all_tool_specs(self) -> list[ToolSpec]:
    """Get all the tool specs for all tools in this registry.

    Returns:
        A list of ToolSpecs.
    """
    all_tools = self.get_all_tools_config()
    tools: list[ToolSpec] = [tool_spec for tool_spec in all_tools.values()]
    return tools

get_all_tools_config()

Dynamically generate tool configuration by combining built-in and dynamic tools.

Returns:

dict[str, Any]: Dictionary containing all tool configurations.

Source code in strands/tools/registry.py
def get_all_tools_config(self) -> dict[str, Any]:
    """Dynamically generate tool configuration by combining built-in and dynamic tools.

    Returns:
        Dictionary containing all tool configurations.
    """
    tool_config = {}
    logger.debug("getting tool configurations")

    # Add all registered tools
    for tool_name, tool in self.registry.items():
        # Make a deep copy to avoid modifying the original
        spec = tool.tool_spec.copy()
        try:
            # Normalize the schema before validation
            spec = normalize_tool_spec(spec)
            self.validate_tool_spec(spec)
            tool_config[tool_name] = spec
            logger.debug("tool_name=<%s> | loaded tool config", tool_name)
        except ValueError as e:
            logger.warning("tool_name=<%s> | spec validation failed | %s", tool_name, e)

    # Add any dynamic tools
    for tool_name, tool in self.dynamic_tools.items():
        if tool_name not in tool_config:
            # Make a deep copy to avoid modifying the original
            spec = tool.tool_spec.copy()
            try:
                # Normalize the schema before validation
                spec = normalize_tool_spec(spec)
                self.validate_tool_spec(spec)
                tool_config[tool_name] = spec
                logger.debug("tool_name=<%s> | loaded dynamic tool config", tool_name)
            except ValueError as e:
                logger.warning("tool_name=<%s> | dynamic tool spec validation failed | %s", tool_name, e)

    logger.debug("tool_count=<%s> | tools configured", len(tool_config))
    return tool_config
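
The merge behavior above can be reduced to a pure-dict sketch: registered tools win over dynamic tools with the same name, and specs that fail validation are skipped rather than aborting the whole pass. The simplified specs and validator below are illustrative stand-ins for real ToolSpec handling and validate_tool_spec.

```python
def build_config(registry: dict, dynamic_tools: dict) -> dict:
    def is_valid(spec: dict) -> bool:
        # Stand-in for the registry's name/description spec validation.
        return "name" in spec and "description" in spec

    tool_config = {}
    for name, spec in registry.items():
        if is_valid(spec):
            tool_config[name] = spec
    for name, spec in dynamic_tools.items():
        # Dynamic tools never shadow an already-registered name.
        if name not in tool_config and is_valid(spec):
            tool_config[name] = spec
    return tool_config


registry = {"search": {"name": "search", "description": "Web search"}}
dynamic = {
    "search": {"name": "search", "description": "Shadowed copy"},
    "broken": {"name": "broken"},  # missing description: skipped
    "calc": {"name": "calc", "description": "Calculator"},
}
config = build_config(registry, dynamic)
print(sorted(config))                   # ['calc', 'search']
print(config["search"]["description"])  # 'Web search'
```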

get_tools_dirs()

Get all tool directory paths.

Returns:

list[Path]: A list of Path objects for the current working directory's "./tools/".

Source code in strands/tools/registry.py
def get_tools_dirs(self) -> list[Path]:
    """Get all tool directory paths.

    Returns:
        A list of Path objects for the current working directory's "./tools/".
    """
    # Current working directory's tools directory
    cwd_tools_dir = Path.cwd() / "tools"

    # Return all directories that exist
    tool_dirs = []
    for directory in [cwd_tools_dir]:
        if directory.exists() and directory.is_dir():
            tool_dirs.append(directory)
            logger.debug("tools_dir=<%s> | found tools directory", directory)
        else:
            logger.debug("tools_dir=<%s> | tools directory not found", directory)

    return tool_dirs

initialize_tools(load_tools_from_directory=False)

Initialize all tools by discovering and loading them dynamically from all tool directories.

Parameters:

load_tools_from_directory (bool, default False): Whether to load tools from the discovered tool directories.
Source code in strands/tools/registry.py
def initialize_tools(self, load_tools_from_directory: bool = False) -> None:
    """Initialize all tools by discovering and loading them dynamically from all tool directories.

    Args:
        load_tools_from_directory: Whether to load tools from the discovered tool directories.
    """
    self.tool_config = None

    # Then discover and load other tools
    tool_modules = self.discover_tool_modules()
    successful_loads = 0
    total_tools = len(tool_modules)
    tool_import_errors = {}

    # Process Python tools
    for tool_name, tool_path in tool_modules.items():
        if tool_name in ["__init__"]:
            continue

        if not load_tools_from_directory:
            continue

        try:
            # Add directory to path temporarily
            tool_dir = str(tool_path.parent)
            sys.path.insert(0, tool_dir)
            try:
                module = import_module(tool_name)
            finally:
                if tool_dir in sys.path:
                    sys.path.remove(tool_dir)

            # Process Python tool
            if tool_path.suffix == ".py":
                # Check for decorated function tools first
                try:
                    function_tools = self._scan_module_for_tools(module)

                    if function_tools:
                        for function_tool in function_tools:
                            self.register_tool(function_tool)
                            successful_loads += 1
                    else:
                        # Fall back to traditional tools
                        # Check for expected tool function
                        expected_func_name = tool_name
                        if hasattr(module, expected_func_name):
                            tool_function = getattr(module, expected_func_name)
                            if not callable(tool_function):
                                logger.warning(
                                    "tool_name=<%s> | tool function exists but is not callable", tool_name
                                )
                                continue

                            # Validate tool spec before registering
                            if not hasattr(module, "TOOL_SPEC"):
                                logger.warning("tool_name=<%s> | tool is missing TOOL_SPEC | skipping", tool_name)
                                continue

                            try:
                                self.validate_tool_spec(module.TOOL_SPEC)
                            except ValueError as e:
                                logger.warning("tool_name=<%s> | tool spec validation failed | %s", tool_name, e)
                                continue

                            tool_spec = module.TOOL_SPEC
                            tool = PythonAgentTool(tool_name, tool_spec, tool_function)
                            self.register_tool(tool)
                            successful_loads += 1

                        else:
                            logger.warning("tool_name=<%s> | tool function missing", tool_name)
                except ImportError:
                    # Function tool loader not available, fall back to traditional tools
                    # Check for expected tool function
                    expected_func_name = tool_name
                    if hasattr(module, expected_func_name):
                        tool_function = getattr(module, expected_func_name)
                        if not callable(tool_function):
                            logger.warning("tool_name=<%s> | tool function exists but is not callable", tool_name)
                            continue

                        # Validate tool spec before registering
                        if not hasattr(module, "TOOL_SPEC"):
                            logger.warning("tool_name=<%s> | tool is missing TOOL_SPEC | skipping", tool_name)
                            continue

                        try:
                            self.validate_tool_spec(module.TOOL_SPEC)
                        except ValueError as e:
                            logger.warning("tool_name=<%s> | tool spec validation failed | %s", tool_name, e)
                            continue

                        tool_spec = module.TOOL_SPEC
                        tool = PythonAgentTool(tool_name, tool_spec, tool_function)
                        self.register_tool(tool)
                        successful_loads += 1

                    else:
                        logger.warning("tool_name=<%s> | tool function missing", tool_name)

        except Exception as e:
            logger.warning("tool_name=<%s> | failed to load tool | %s", tool_name, e)
            tool_import_errors[tool_name] = str(e)

    # Log summary
    logger.debug("tool_count=<%d>, success_count=<%d> | finished loading tools", total_tools, successful_loads)
    if tool_import_errors:
        for tool_name, error in tool_import_errors.items():
            logger.debug("tool_name=<%s> | import error | %s", tool_name, error)
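
The temporary sys.path manipulation used above is a standard pattern worth seeing in isolation: insert the tool directory at the front of sys.path, import the module by name, and remove the entry again in a finally block so it never leaks. The greeter module below is a hypothetical tool written to a temporary directory.

```python
import sys
import tempfile
from importlib import import_module
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    tool_dir = Path(tmp)
    (tool_dir / "greeter.py").write_text("def greeter():\n    return 'hello'\n")

    sys.path.insert(0, str(tool_dir))
    try:
        module = import_module("greeter")
    finally:
        # Never leave the temporary entry behind, even on import failure.
        if str(tool_dir) in sys.path:
            sys.path.remove(str(tool_dir))

    print(module.greeter())           # hello
    print(str(tool_dir) in sys.path)  # False
```

Inserting at index 0 rather than appending ensures the tool directory wins over any same-named module elsewhere on the path for the duration of the import.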

load_tool_from_filepath(tool_name, tool_path)

DEPRECATED: Load a tool from a file path.

Parameters:

tool_name (str, required): Name of the tool.
tool_path (str, required): Path to the tool file.

Raises:

FileNotFoundError: If the tool file is not found.
ValueError: If the tool cannot be loaded.

Source code in strands/tools/registry.py
def load_tool_from_filepath(self, tool_name: str, tool_path: str) -> None:
    """DEPRECATED: Load a tool from a file path.

    Args:
        tool_name: Name of the tool.
        tool_path: Path to the tool file.

    Raises:
        FileNotFoundError: If the tool file is not found.
        ValueError: If the tool cannot be loaded.
    """
    warnings.warn(
        "load_tool_from_filepath is deprecated and will be removed in Strands SDK 2.0. "
        "`process_tools` automatically handles loading tools from a filepath.",
        DeprecationWarning,
        stacklevel=2,
    )

    from .loader import ToolLoader

    try:
        tool_path = expanduser(tool_path)
        if not os.path.exists(tool_path):
            raise FileNotFoundError(f"Tool file not found: {tool_path}")

        loaded_tools = ToolLoader.load_tools(tool_path, tool_name)
        for t in loaded_tools:
            t.mark_dynamic()
            # Because we're explicitly registering the tool we don't need an allowlist
            self.register_tool(t)
    except Exception as e:
        exception_str = str(e)
        logger.exception("tool_name=<%s> | failed to load tool", tool_name)
        raise ValueError(f"Failed to load tool {tool_name}: {exception_str}") from e
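
Loading a module from an explicit file path, which this deprecated method delegates to ToolLoader, can be sketched with importlib alone, using the same spec_from_file_location pattern that reload_tool applies earlier in this module. The adder tool below is a hypothetical example written to a temporary file.

```python
import tempfile
from importlib import util
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    tool_path = Path(tmp) / "adder.py"
    tool_path.write_text("def adder(a, b):\n    return a + b\n")

    # Build a module spec directly from the file path, then execute it.
    spec = util.spec_from_file_location("adder", str(tool_path))
    if spec is None or spec.loader is None:
        raise ImportError(f"Could not load spec for {tool_path}")
    module = util.module_from_spec(spec)
    spec.loader.exec_module(module)

    print(module.adder(2, 3))  # 5
```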

process_tools(tools)

Process tools list.

Process list of tools that can contain local file path string, module import path string, imported modules, @tool decorated functions, or instances of AgentTool.

Parameters:

tools (list[Any], required): List of tool specifications. Can be:

  1. Local file path to a module based tool: ./path/to/module/tool.py
  2. Module import path

     2.1. Path to a module based tool: strands_tools.file_read
     2.2. Path to a module with multiple AgentTool instances (@tool decorated): tests.fixtures.say_tool
     2.3. Path to a module and a specific function: tests.fixtures.say_tool:say

  3. A module for a module based tool
  4. Instances of AgentTool (@tool decorated functions)
  5. Dictionaries with name/path keys (deprecated)

Returns:

list[str]: List of tool names that were processed.

Source code in strands/tools/registry.py
def process_tools(self, tools: list[Any]) -> list[str]:
    """Process tools list.

    Process list of tools that can contain local file path string, module import path string,
    imported modules, @tool decorated functions, or instances of AgentTool.

    Args:
        tools: List of tool specifications. Can be:

            1. Local file path to a module based tool: `./path/to/module/tool.py`
            2. Module import path

                2.1. Path to a module based tool: `strands_tools.file_read`
                2.2. Path to a module with multiple AgentTool instances (@tool decorated):
                    `tests.fixtures.say_tool`
                2.3. Path to a module and a specific function: `tests.fixtures.say_tool:say`

            3. A module for a module based tool
            4. Instances of AgentTool (@tool decorated functions)
            5. Dictionaries with name/path keys (deprecated)


    Returns:
        List of tool names that were processed.
    """
    tool_names = []

    def add_tool(tool: Any) -> None:
        try:
            # String based tool
            # Can be a file path, a module path, or a module path with a targeted function. Examples:
            # './path/to/tool.py'
            # 'my.module.tool'
            # 'my.module.tool:tool_name'
            if isinstance(tool, str):
                tools = load_tool_from_string(tool)
                for a_tool in tools:
                    a_tool.mark_dynamic()
                    self.register_tool(a_tool)
                    tool_names.append(a_tool.tool_name)

            # Dictionary with name and path
            elif isinstance(tool, dict) and "name" in tool and "path" in tool:
                tools = load_tool_from_string(tool["path"])

                tool_found = False
                for a_tool in tools:
                    if a_tool.tool_name == tool["name"]:
                        a_tool.mark_dynamic()
                        self.register_tool(a_tool)
                        tool_names.append(a_tool.tool_name)
                        tool_found = True

                if not tool_found:
                    raise ValueError(f'Tool "{tool["name"]}" not found in "{tool["path"]}"')

            # Dictionary with path only
            elif isinstance(tool, dict) and "path" in tool:
                tools = load_tool_from_string(tool["path"])

                for a_tool in tools:
                    a_tool.mark_dynamic()
                    self.register_tool(a_tool)
                    tool_names.append(a_tool.tool_name)

            # Imported Python module
            elif hasattr(tool, "__file__") and inspect.ismodule(tool):
                # Extract the tool name from the module name
                module_tool_name = tool.__name__.split(".")[-1]

                tools = load_tools_from_module(tool, module_tool_name)
                for a_tool in tools:
                    self.register_tool(a_tool)
                    tool_names.append(a_tool.tool_name)

            # AgentTool instances (which also covers @tool decorated functions)
            elif isinstance(tool, AgentTool):
                self.register_tool(tool)
                tool_names.append(tool.tool_name)

            # Nested iterable (list, tuple, etc.): add each sub-tool recursively
            elif isinstance(tool, Iterable) and not isinstance(tool, (str, bytes, bytearray)):
                for t in tool:
                    add_tool(t)

            # ToolProvider: load its tools and track this registry as a consumer
            elif isinstance(tool, ToolProvider):
                self._tool_providers.append(tool)
                tool.add_consumer(self._registry_id)

                async def get_tools() -> Sequence[AgentTool]:
                    return await tool.load_tools()

                provider_tools = run_async(get_tools)

                for provider_tool in provider_tools:
                    self.register_tool(provider_tool)
                    tool_names.append(provider_tool.tool_name)
            else:
                logger.warning("tool=<%s> | unrecognized tool specification", tool)

        except Exception as e:
            exception_str = str(e)
            logger.exception("tool_name=<%s> | failed to load tool", tool)
            raise ValueError(f"Failed to load tool {tool}: {exception_str}") from e

    for tool in tools:
        add_tool(tool)
    return tool_names
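The branching above can be exercised in isolation. In this sketch, `FakeTool` and the name-derivation logic are simplified stand-ins for `AgentTool` and `load_tool_from_string`, not the real implementations:

```python
from typing import Any


class FakeTool:
    """Stand-in for AgentTool: just carries a name."""

    def __init__(self, name: str) -> None:
        self.tool_name = name


def process_tools(tools: list[Any]) -> list[str]:
    names: list[str] = []

    def add_tool(tool: Any) -> None:
        if isinstance(tool, str):
            # The real registry calls load_tool_from_string(tool) here.
            names.append(tool.rsplit(".", 1)[-1].split(":")[-1])
        elif isinstance(tool, dict) and "path" in tool:
            names.append(tool.get("name", tool["path"].rsplit(".", 1)[-1]))
        elif isinstance(tool, FakeTool):
            names.append(tool.tool_name)
        elif isinstance(tool, (list, tuple)):
            for t in tool:
                add_tool(t)  # recurse into nested iterables

    for tool in tools:
        add_tool(tool)
    return names


print(process_tools(["strands_tools.file_read", FakeTool("say"), [FakeTool("a"), FakeTool("b")]]))
# → ['file_read', 'say', 'a', 'b']
```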

register_dynamic_tool(tool)

Register a tool dynamically for temporary use.

Parameters:

Name Type Description Default
tool AgentTool

The tool to register dynamically

required

Raises:

Type Description
ValueError

If a tool with this name already exists

Source code in strands/tools/registry.py
def register_dynamic_tool(self, tool: AgentTool) -> None:
    """Register a tool dynamically for temporary use.

    Args:
        tool: The tool to register dynamically

    Raises:
        ValueError: If a tool with this name already exists
    """
    if tool.tool_name in self.registry or tool.tool_name in self.dynamic_tools:
        raise ValueError(f"Tool '{tool.tool_name}' already exists")

    self.dynamic_tools[tool.tool_name] = tool
    logger.debug("Registered dynamic tool: %s", tool.tool_name)

register_tool(tool)

Register a tool function with the given name.

Parameters:

Name Type Description Default
tool AgentTool

The tool to register.

required
Source code in strands/tools/registry.py
def register_tool(self, tool: AgentTool) -> None:
    """Register a tool function with the given name.

    Args:
        tool: The tool to register.
    """
    logger.debug(
        "tool_name=<%s>, tool_type=<%s>, is_dynamic=<%s> | registering tool",
        tool.tool_name,
        tool.tool_type,
        tool.is_dynamic,
    )

    # Check for duplicate tool names; raise on an exact duplicate unless the tool supports hot reload
    if tool.tool_name in self.registry and not tool.supports_hot_reload:
        raise ValueError(
            f"Tool name '{tool.tool_name}' already exists. Cannot register tools with exact same name."
        )

    # Check for normalized name conflicts (- vs _)
    if self.registry.get(tool.tool_name) is None:
        normalized_name = tool.tool_name.replace("-", "_")

        matching_tools = [
            tool_name
            for (tool_name, tool) in self.registry.items()
            if tool_name.replace("-", "_") == normalized_name
        ]

        if matching_tools:
            raise ValueError(
                f"Tool name '{tool.tool_name}' already exists as '{matching_tools[0]}'."
                " Cannot add a duplicate tool which differs by a '-' or '_'"
            )

    # Register in main registry
    self.registry[tool.tool_name] = tool

    # Register in dynamic tools if applicable
    if tool.is_dynamic:
        self.dynamic_tools[tool.tool_name] = tool

        if not tool.supports_hot_reload:
            logger.debug("tool_name=<%s>, tool_type=<%s> | skipping hot reloading", tool.tool_name, tool.tool_type)
            return

        logger.debug(
            "tool_name=<%s>, tool_registry=<%s>, dynamic_tools=<%s> | tool registered",
            tool.tool_name,
            list(self.registry.keys()),
            list(self.dynamic_tools.keys()),
        )
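The `-` vs `_` conflict check can be demonstrated on its own; here `registry` is a plain dict standing in for the real ToolRegistry:

```python
registry: dict[str, object] = {}


def register(name: str, tool: object) -> None:
    # Exact duplicate names are rejected outright.
    if name in registry:
        raise ValueError(f"Tool name '{name}' already exists.")
    # Names that differ only by '-' vs '_' are also rejected.
    normalized = name.replace("-", "_")
    clashes = [n for n in registry if n.replace("-", "_") == normalized]
    if clashes:
        raise ValueError(f"Tool name '{name}' already exists as '{clashes[0]}'.")
    registry[name] = tool


register("file-read", object())
try:
    register("file_read", object())  # conflicts with "file-read"
except ValueError as e:
    print(e)
```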

reload_tool(tool_name)

Reload a specific tool module.

Parameters:

Name Type Description Default
tool_name str

Name of the tool to reload.

required

Raises:

Type Description
FileNotFoundError

If the tool file cannot be found.

ImportError

If there are issues importing the tool module.

ValueError

If the tool specification is invalid or required components are missing.

Exception

For other errors during tool reloading.

Source code in strands/tools/registry.py
def reload_tool(self, tool_name: str) -> None:
    """Reload a specific tool module.

    Args:
        tool_name: Name of the tool to reload.

    Raises:
        FileNotFoundError: If the tool file cannot be found.
        ImportError: If there are issues importing the tool module.
        ValueError: If the tool specification is invalid or required components are missing.
        Exception: For other errors during tool reloading.
    """
    try:
        # Check for tool file
        logger.debug("tool_name=<%s> | searching directories for tool", tool_name)
        tools_dirs = self.get_tools_dirs()
        tool_path = None

        # Search for the tool file in all tool directories
        for tools_dir in tools_dirs:
            temp_path = tools_dir / f"{tool_name}.py"
            if temp_path.exists():
                tool_path = temp_path
                break

        if not tool_path:
            raise FileNotFoundError(f"No tool file found for: {tool_name}")

        logger.debug("tool_name=<%s> | reloading tool", tool_name)

        # Add tool directory to path temporarily
        tool_dir = str(tool_path.parent)
        sys.path.insert(0, tool_dir)
        try:
            # Load the module directly using spec
            spec = util.spec_from_file_location(tool_name, str(tool_path))
            if spec is None:
                raise ImportError(f"Could not load spec for {tool_name}")

            module = util.module_from_spec(spec)
            sys.modules[tool_name] = module

            if spec.loader is None:
                raise ImportError(f"Could not load {tool_name}")

            spec.loader.exec_module(module)

        finally:
            # Remove the temporary path
            sys.path.remove(tool_dir)

        # Look for function-based tools first
        try:
            function_tools = self._scan_module_for_tools(module)

            if function_tools:
                for function_tool in function_tools:
                    # Register the function-based tool
                    self.register_tool(function_tool)

                    # Update tool configuration if available
                    if self.tool_config is not None:
                        self._update_tool_config(self.tool_config, {"spec": function_tool.tool_spec})

                logger.debug("tool_name=<%s> | successfully reloaded function-based tool from module", tool_name)
                return
        except ImportError:
            logger.debug("function tool loader not available | falling back to traditional tools")

        # Fall back to traditional module-level tools
        if not hasattr(module, "TOOL_SPEC"):
            raise ValueError(
                f"Tool {tool_name} is missing TOOL_SPEC (neither at module level nor as a decorated function)"
            )

        expected_func_name = tool_name
        if not hasattr(module, expected_func_name):
            raise ValueError(f"Tool {tool_name} is missing {expected_func_name} function")

        tool_function = getattr(module, expected_func_name)
        if not callable(tool_function):
            raise ValueError(f"Tool {tool_name} function is not callable")

        # Validate tool spec
        self.validate_tool_spec(module.TOOL_SPEC)

        new_tool = PythonAgentTool(tool_name, module.TOOL_SPEC, tool_function)

        # Register the tool
        self.register_tool(new_tool)

        # Update tool configuration if available
        if self.tool_config is not None:
            self._update_tool_config(self.tool_config, {"spec": module.TOOL_SPEC})
        logger.debug("tool_name=<%s> | successfully reloaded tool", tool_name)

    except Exception:
        logger.exception("tool_name=<%s> | failed to reload tool", tool_name)
        raise
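The spec-based loading that `reload_tool` performs can be reproduced standalone. This demo writes a tiny module-based tool to a temporary directory and imports it by file path; the file contents are illustrative only:

```python
import sys
import tempfile
from importlib import util
from pathlib import Path

tool_name = "my_tool"
with tempfile.TemporaryDirectory() as tmp:
    tool_path = Path(tmp) / f"{tool_name}.py"
    tool_path.write_text(
        "TOOL_SPEC = {'name': 'my_tool'}\n"
        "def my_tool():\n"
        "    return 'ok'\n"
    )

    # Load the module directly using a spec, as reload_tool does.
    spec = util.spec_from_file_location(tool_name, str(tool_path))
    if spec is None or spec.loader is None:
        raise ImportError(f"Could not load spec for {tool_name}")

    module = util.module_from_spec(spec)
    sys.modules[tool_name] = module
    spec.loader.exec_module(module)

    # The loaded module exposes both TOOL_SPEC and the tool function.
    print(module.TOOL_SPEC["name"], module.my_tool())
```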

replace(new_tool)

Replace an existing tool with a new implementation.

This performs a swap of the tool implementation in the registry. The replacement takes effect on the next agent invocation.

Parameters:

Name Type Description Default
new_tool AgentTool

New tool implementation. Its name must match the tool being replaced.

required

Raises:

Type Description
ValueError

If the tool doesn't exist.

Source code in strands/tools/registry.py
def replace(self, new_tool: AgentTool) -> None:
    """Replace an existing tool with a new implementation.

    This performs a swap of the tool implementation in the registry.
    The replacement takes effect on the next agent invocation.

    Args:
        new_tool: New tool implementation. Its name must match the tool being replaced.

    Raises:
        ValueError: If the tool doesn't exist.
    """
    tool_name = new_tool.tool_name

    if tool_name not in self.registry:
        raise ValueError(f"Cannot replace tool '{tool_name}' - tool does not exist")

    # Update main registry
    self.registry[tool_name] = new_tool

    # Update dynamic_tools to match new tool's dynamic status
    if new_tool.is_dynamic:
        self.dynamic_tools[tool_name] = new_tool
    elif tool_name in self.dynamic_tools:
        del self.dynamic_tools[tool_name]
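The swap semantics, including keeping `dynamic_tools` in sync with the new tool's dynamic status, reduce to a few lines (plain dicts and strings stand in for the registry and tools):

```python
registry = {"echo": "v1"}
dynamic_tools = {"echo": "v1"}


def replace(name: str, new_tool: str, is_dynamic: bool) -> None:
    if name not in registry:
        raise ValueError(f"Cannot replace tool '{name}' - tool does not exist")
    registry[name] = new_tool
    # Mirror the new tool's dynamic status in dynamic_tools.
    if is_dynamic:
        dynamic_tools[name] = new_tool
    elif name in dynamic_tools:
        del dynamic_tools[name]


replace("echo", "v2", is_dynamic=False)
print(registry["echo"], "echo" in dynamic_tools)
# → v2 False
```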

validate_tool_spec(tool_spec)

Validate tool specification against required schema.

Parameters:

Name Type Description Default
tool_spec ToolSpec

Tool specification to validate.

required

Raises:

Type Description
ValueError

If the specification is invalid.

Source code in strands/tools/registry.py
def validate_tool_spec(self, tool_spec: ToolSpec) -> None:
    """Validate tool specification against required schema.

    Args:
        tool_spec: Tool specification to validate.

    Raises:
        ValueError: If the specification is invalid.
    """
    required_fields = ["name", "description"]
    missing_fields = [field for field in required_fields if field not in tool_spec]
    if missing_fields:
        raise ValueError(f"Missing required fields in tool spec: {', '.join(missing_fields)}")

    if "json" not in tool_spec["inputSchema"]:
        # Convert direct schema to proper format
        json_schema = normalize_schema(tool_spec["inputSchema"])
        tool_spec["inputSchema"] = {"json": json_schema}
        return

    # Validate json schema fields
    json_schema = tool_spec["inputSchema"]["json"]

    # Ensure schema has required fields
    if "type" not in json_schema:
        json_schema["type"] = "object"
    if "properties" not in json_schema:
        json_schema["properties"] = {}
    if "required" not in json_schema:
        json_schema["required"] = []

    # Validate property definitions
    for prop_name, prop_def in json_schema.get("properties", {}).items():
        if not isinstance(prop_def, dict):
            json_schema["properties"][prop_name] = {
                "type": "string",
                "description": f"Property {prop_name}",
            }
            continue

        # It is expected that type and description are already included in referenced $def.
        if "$ref" in prop_def:
            continue

        has_composition = any(kw in prop_def for kw in _COMPOSITION_KEYWORDS)
        if "type" not in prop_def and not has_composition:
            prop_def["type"] = "string"
        if "description" not in prop_def:
            prop_def["description"] = f"Property {prop_name}"
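The defaults that validation fills into an incomplete JSON schema can be sketched as a standalone function (simplified: the composition-keyword check on `_COMPOSITION_KEYWORDS` is omitted):

```python
def fill_schema_defaults(json_schema: dict) -> dict:
    # Top-level defaults the validator guarantees.
    json_schema.setdefault("type", "object")
    json_schema.setdefault("properties", {})
    json_schema.setdefault("required", [])

    for prop_name, prop_def in json_schema["properties"].items():
        if not isinstance(prop_def, dict):
            # Non-dict property definitions are replaced wholesale.
            json_schema["properties"][prop_name] = {
                "type": "string",
                "description": f"Property {prop_name}",
            }
            continue
        if "$ref" in prop_def:
            continue  # referenced $defs carry their own type/description
        prop_def.setdefault("type", "string")
        prop_def.setdefault("description", f"Property {prop_name}")
    return json_schema


schema = fill_schema_defaults({"properties": {"path": {}}})
print(schema)
```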

ToolWatcher

Watches tool directories for changes and reloads tools when they are modified.

Source code in strands/tools/watcher.py
class ToolWatcher:
    """Watches tool directories for changes and reloads tools when they are modified."""

    # This class uses class variables for the observer and handlers because watchdog allows only one Observer instance
    # per directory. Using class variables ensures that all ToolWatcher instances share a single Observer, with the
    # MasterChangeHandler routing file system events to the appropriate individual handlers for each registry. This
    # design pattern avoids conflicts when multiple tool registries are watching the same directories.

    _shared_observer = None
    _watched_dirs: set[str] = set()
    _observer_started = False
    _registry_handlers: dict[str, dict[int, "ToolWatcher.ToolChangeHandler"]] = {}

    def __init__(self, tool_registry: ToolRegistry) -> None:
        """Initialize a tool watcher for the given tool registry.

        Args:
            tool_registry: The tool registry to report changes.
        """
        self.tool_registry = tool_registry
        self.start()

    class ToolChangeHandler(FileSystemEventHandler):
        """Handler for tool file changes."""

        def __init__(self, tool_registry: ToolRegistry) -> None:
            """Initialize a tool change handler.

            Args:
                tool_registry: The tool registry to update when tools change.
            """
            self.tool_registry = tool_registry

        def on_modified(self, event: Any) -> None:
            """Reload tool if file modification detected.

            Args:
                event: The file system event that triggered this handler.
            """
            if event.src_path.endswith(".py"):
                tool_path = Path(event.src_path)
                tool_name = tool_path.stem

                if tool_name not in ["__init__"]:
                    logger.debug("tool_name=<%s> | tool change detected", tool_name)
                    try:
                        self.tool_registry.reload_tool(tool_name)
                    except Exception as e:
                        logger.error("tool_name=<%s>, exception=<%s> | failed to reload tool", tool_name, str(e))

    class MasterChangeHandler(FileSystemEventHandler):
        """Master handler that delegates to all registered handlers."""

        def __init__(self, dir_path: str) -> None:
            """Initialize a master change handler for a specific directory.

            Args:
                dir_path: The directory path to watch.
            """
            self.dir_path = dir_path

        def on_modified(self, event: Any) -> None:
            """Delegate file modification events to all registered handlers.

            Args:
                event: The file system event that triggered this handler.
            """
            if event.src_path.endswith(".py"):
                tool_path = Path(event.src_path)
                tool_name = tool_path.stem

                if tool_name not in ["__init__"]:
                    # Delegate to all registered handlers for this directory
                    for handler in ToolWatcher._registry_handlers.get(self.dir_path, {}).values():
                        try:
                            handler.on_modified(event)
                        except Exception as e:
                            logger.error("exception=<%s> | handler error", str(e))

    def start(self) -> None:
        """Start watching all tools directories for changes."""
        # Initialize shared observer if not already done
        if ToolWatcher._shared_observer is None:
            ToolWatcher._shared_observer = Observer()

        # Create handler for this instance
        self.tool_change_handler = self.ToolChangeHandler(self.tool_registry)
        registry_id = id(self.tool_registry)

        # Get tools directories to watch
        tools_dirs = self.tool_registry.get_tools_dirs()

        for tools_dir in tools_dirs:
            dir_str = str(tools_dir)

            # Initialize the registry handlers dict for this directory if needed
            if dir_str not in ToolWatcher._registry_handlers:
                ToolWatcher._registry_handlers[dir_str] = {}

            # Store this handler with its registry id
            ToolWatcher._registry_handlers[dir_str][registry_id] = self.tool_change_handler

            # Schedule or update the master handler for this directory
            if dir_str not in ToolWatcher._watched_dirs:
                # First time seeing this directory, create a master handler
                master_handler = self.MasterChangeHandler(dir_str)
                ToolWatcher._shared_observer.schedule(master_handler, dir_str, recursive=False)
                ToolWatcher._watched_dirs.add(dir_str)
                logger.debug("tools_dir=<%s> | started watching tools directory", tools_dir)
            else:
                # Directory already being watched, just log it
                logger.debug("tools_dir=<%s> | directory already being watched", tools_dir)

        # Start the observer if not already started
        if not ToolWatcher._observer_started:
            ToolWatcher._shared_observer.start()
            ToolWatcher._observer_started = True
            logger.debug("tool directory watching initialized")
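The master-handler routing described in the class comment, one handler map per directory fanning events out to every registry, can be sketched without watchdog; the names and the fake event strings here are illustrative stand-ins:

```python
from collections import defaultdict
from typing import Callable

# dir path -> {registry id -> handler}, mirroring _registry_handlers
registry_handlers: dict[str, dict[int, Callable[[str], None]]] = defaultdict(dict)
events: list[str] = []


def add_watch(dir_path: str, registry_id: int) -> None:
    # Each registry installs its own handler for the directory.
    registry_handlers[dir_path][registry_id] = lambda path: events.append(
        f"registry {registry_id} reloads {path}"
    )


def master_on_modified(dir_path: str, src_path: str) -> None:
    # Delegate to all registered handlers, like MasterChangeHandler.on_modified.
    if src_path.endswith(".py"):
        for handler in registry_handlers.get(dir_path, {}).values():
            handler(src_path)


add_watch("/tools", 1)
add_watch("/tools", 2)
master_on_modified("/tools", "/tools/say.py")
master_on_modified("/tools", "/tools/README.md")  # ignored: not a .py file
print(events)
```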

MasterChangeHandler

Bases: FileSystemEventHandler

Master handler that delegates to all registered handlers.

Source code in strands/tools/watcher.py
class MasterChangeHandler(FileSystemEventHandler):
    """Master handler that delegates to all registered handlers."""

    def __init__(self, dir_path: str) -> None:
        """Initialize a master change handler for a specific directory.

        Args:
            dir_path: The directory path to watch.
        """
        self.dir_path = dir_path

    def on_modified(self, event: Any) -> None:
        """Delegate file modification events to all registered handlers.

        Args:
            event: The file system event that triggered this handler.
        """
        if event.src_path.endswith(".py"):
            tool_path = Path(event.src_path)
            tool_name = tool_path.stem

            if tool_name not in ["__init__"]:
                # Delegate to all registered handlers for this directory
                for handler in ToolWatcher._registry_handlers.get(self.dir_path, {}).values():
                    try:
                        handler.on_modified(event)
                    except Exception as e:
                        logger.error("exception=<%s> | handler error", str(e))

__init__(dir_path)

Initialize a master change handler for a specific directory.

Parameters:

Name Type Description Default
dir_path str

The directory path to watch.

required
Source code in strands/tools/watcher.py
def __init__(self, dir_path: str) -> None:
    """Initialize a master change handler for a specific directory.

    Args:
        dir_path: The directory path to watch.
    """
    self.dir_path = dir_path

on_modified(event)

Delegate file modification events to all registered handlers.

Parameters:

Name Type Description Default
event Any

The file system event that triggered this handler.

required
Source code in strands/tools/watcher.py
def on_modified(self, event: Any) -> None:
    """Delegate file modification events to all registered handlers.

    Args:
        event: The file system event that triggered this handler.
    """
    if event.src_path.endswith(".py"):
        tool_path = Path(event.src_path)
        tool_name = tool_path.stem

        if tool_name not in ["__init__"]:
            # Delegate to all registered handlers for this directory
            for handler in ToolWatcher._registry_handlers.get(self.dir_path, {}).values():
                try:
                    handler.on_modified(event)
                except Exception as e:
                    logger.error("exception=<%s> | handler error", str(e))

ToolChangeHandler

Bases: FileSystemEventHandler

Handler for tool file changes.

Source code in strands/tools/watcher.py
class ToolChangeHandler(FileSystemEventHandler):
    """Handler for tool file changes."""

    def __init__(self, tool_registry: ToolRegistry) -> None:
        """Initialize a tool change handler.

        Args:
            tool_registry: The tool registry to update when tools change.
        """
        self.tool_registry = tool_registry

    def on_modified(self, event: Any) -> None:
        """Reload tool if file modification detected.

        Args:
            event: The file system event that triggered this handler.
        """
        if event.src_path.endswith(".py"):
            tool_path = Path(event.src_path)
            tool_name = tool_path.stem

            if tool_name not in ["__init__"]:
                logger.debug("tool_name=<%s> | tool change detected", tool_name)
                try:
                    self.tool_registry.reload_tool(tool_name)
                except Exception as e:
                    logger.error("tool_name=<%s>, exception=<%s> | failed to reload tool", tool_name, str(e))

__init__(tool_registry)

Initialize a tool change handler.

Parameters:

Name Type Description Default
tool_registry ToolRegistry

The tool registry to update when tools change.

required
Source code in strands/tools/watcher.py
def __init__(self, tool_registry: ToolRegistry) -> None:
    """Initialize a tool change handler.

    Args:
        tool_registry: The tool registry to update when tools change.
    """
    self.tool_registry = tool_registry

on_modified(event)

Reload tool if file modification detected.

Parameters:

Name Type Description Default
event Any

The file system event that triggered this handler.

required
Source code in strands/tools/watcher.py
def on_modified(self, event: Any) -> None:
    """Reload tool if file modification detected.

    Args:
        event: The file system event that triggered this handler.
    """
    if event.src_path.endswith(".py"):
        tool_path = Path(event.src_path)
        tool_name = tool_path.stem

        if tool_name not in ["__init__"]:
            logger.debug("tool_name=<%s> | tool change detected", tool_name)
            try:
                self.tool_registry.reload_tool(tool_name)
            except Exception as e:
                logger.error("tool_name=<%s>, exception=<%s> | failed to reload tool", tool_name, str(e))

__init__(tool_registry)

Initialize a tool watcher for the given tool registry.

Parameters:

Name Type Description Default
tool_registry ToolRegistry

The tool registry to report changes.

required
Source code in strands/tools/watcher.py
def __init__(self, tool_registry: ToolRegistry) -> None:
    """Initialize a tool watcher for the given tool registry.

    Args:
        tool_registry: The tool registry to report changes.
    """
    self.tool_registry = tool_registry
    self.start()

start()

Start watching all tools directories for changes.

Source code in strands/tools/watcher.py
def start(self) -> None:
    """Start watching all tools directories for changes."""
    # Initialize shared observer if not already done
    if ToolWatcher._shared_observer is None:
        ToolWatcher._shared_observer = Observer()

    # Create handler for this instance
    self.tool_change_handler = self.ToolChangeHandler(self.tool_registry)
    registry_id = id(self.tool_registry)

    # Get tools directories to watch
    tools_dirs = self.tool_registry.get_tools_dirs()

    for tools_dir in tools_dirs:
        dir_str = str(tools_dir)

        # Initialize the registry handlers dict for this directory if needed
        if dir_str not in ToolWatcher._registry_handlers:
            ToolWatcher._registry_handlers[dir_str] = {}

        # Store this handler with its registry id
        ToolWatcher._registry_handlers[dir_str][registry_id] = self.tool_change_handler

        # Schedule or update the master handler for this directory
        if dir_str not in ToolWatcher._watched_dirs:
            # First time seeing this directory, create a master handler
            master_handler = self.MasterChangeHandler(dir_str)
            ToolWatcher._shared_observer.schedule(master_handler, dir_str, recursive=False)
            ToolWatcher._watched_dirs.add(dir_str)
            logger.debug("tools_dir=<%s> | started watching tools directory", tools_dir)
        else:
            # Directory already being watched, just log it
            logger.debug("tools_dir=<%s> | directory already being watched", tools_dir)

    # Start the observer if not already started
    if not ToolWatcher._observer_started:
        ToolWatcher._shared_observer.start()
        ToolWatcher._observer_started = True
        logger.debug("tool directory watching initialized")

TypedEvent

Bases: dict

Base class for all typed events in the agent system.

Source code in strands/types/_events.py
class TypedEvent(dict):
    """Base class for all typed events in the agent system."""

    def __init__(self, data: dict[str, Any] | None = None) -> None:
        """Initialize the typed event with optional data.

        Args:
            data: Optional dictionary of event data to initialize with
        """
        super().__init__(data or {})

    @property
    def is_callback_event(self) -> bool:
        """True if this event should trigger the callback_handler to fire."""
        return True

    def as_dict(self) -> dict:
        """Convert this event to a raw dictionary for emitting purposes."""
        return {**self}

    def prepare(self, invocation_state: dict) -> None:
        """Prepare the event for emission by adding invocation state.

        This allows a subset of events to merge with the invocation_state without needing to
        pass around the invocation_state throughout the system.
        """
        ...

is_callback_event property

True if this event should trigger the callback_handler to fire.

__init__(data=None)

Initialize the typed event with optional data.

Parameters:

Name Type Description Default
data dict[str, Any] | None

Optional dictionary of event data to initialize with

None
Source code in strands/types/_events.py
def __init__(self, data: dict[str, Any] | None = None) -> None:
    """Initialize the typed event with optional data.

    Args:
        data: Optional dictionary of event data to initialize with
    """
    super().__init__(data or {})

as_dict()

Convert this event to a raw dictionary for emitting purposes.

Source code in strands/types/_events.py
def as_dict(self) -> dict:
    """Convert this event to a raw dictionary for emitting purposes."""
    return {**self}

prepare(invocation_state)

Prepare the event for emission by adding invocation state.

This allows a subset of events to merge with the invocation_state without needing to pass around the invocation_state throughout the system.

Source code in strands/types/_events.py
def prepare(self, invocation_state: dict) -> None:
    """Prepare the event for emission by adding invocation state.

    This allows a subset of events to merge with the invocation_state without needing to
    pass around the invocation_state throughout the system.
    """
    ...

_DefaultCallbackHandlerSentinel

Sentinel class to distinguish between explicit None and default parameter value.

Source code in strands/agent/agent.py
class _DefaultCallbackHandlerSentinel:
    """Sentinel class to distinguish between explicit None and default parameter value."""

    pass

_DefaultRetryStrategySentinel

Sentinel class to distinguish between explicit None and default parameter value for retry_strategy.

Source code in strands/agent/agent.py
class _DefaultRetryStrategySentinel:
    """Sentinel class to distinguish between explicit None and default parameter value for retry_strategy."""

    pass

_InterruptState dataclass

Track the state of interrupt events raised by the user.

Note, interrupt state is cleared after resuming.

Attributes:

Name Type Description
interrupts dict[str, Interrupt]

Interrupts raised by the user.

context dict[str, Any]

Additional context associated with an interrupt event.

activated bool

True if agent is in an interrupt state, False otherwise.

Source code in strands/interrupt.py
@dataclass
class _InterruptState:
    """Track the state of interrupt events raised by the user.

    Note, interrupt state is cleared after resuming.

    Attributes:
        interrupts: Interrupts raised by the user.
        context: Additional context associated with an interrupt event.
        activated: True if agent is in an interrupt state, False otherwise.
    """

    interrupts: dict[str, Interrupt] = field(default_factory=dict)
    context: dict[str, Any] = field(default_factory=dict)
    activated: bool = False

    def activate(self) -> None:
        """Activate the interrupt state."""
        self.activated = True

    def deactivate(self) -> None:
        """Deacitvate the interrupt state.

        Interrupts and context are cleared.
        """
        self.interrupts = {}
        self.context = {}
        self.activated = False

    def resume(self, prompt: "AgentInput") -> None:
        """Configure the interrupt state if resuming from an interrupt event.

        Args:
            prompt: User responses if resuming from interrupt.

        Raises:
            TypeError: If in interrupt state but user did not provide responses.
        """
        if not self.activated:
            return

        if not isinstance(prompt, list):
            raise TypeError(f"prompt_type={type(prompt)} | must resume from interrupt with list of interruptResponse's")

        invalid_types = [
            content_type for content in prompt for content_type in content if content_type != "interruptResponse"
        ]
        if invalid_types:
            raise TypeError(
                f"content_types=<{invalid_types}> | must resume from interrupt with list of interruptResponse's"
            )

        contents = cast(list["InterruptResponseContent"], prompt)
        for content in contents:
            interrupt_id = content["interruptResponse"]["interruptId"]
            interrupt_response = content["interruptResponse"]["response"]

            if interrupt_id not in self.interrupts:
                raise KeyError(f"interrupt_id=<{interrupt_id}> | no interrupt found")

            self.interrupts[interrupt_id].response = interrupt_response

        self.context["responses"] = contents

    def to_dict(self) -> dict[str, Any]:
        """Serialize to dict for session management."""
        return asdict(self)

    @classmethod
    def from_dict(cls, data: dict[str, Any]) -> "_InterruptState":
        """Initiailize interrupt state from serialized interrupt state.

        Interrupt state can be serialized with the `to_dict` method.
        """
        return cls(
            interrupts={
                interrupt_id: Interrupt(**interrupt_data) for interrupt_id, interrupt_data in data["interrupts"].items()
            },
            context=data["context"],
            activated=data["activated"],
        )

activate()

Activate the interrupt state.

Source code in strands/interrupt.py
def activate(self) -> None:
    """Activate the interrupt state."""
    self.activated = True

deactivate()

Deactivate the interrupt state.

Interrupts and context are cleared.

Source code in strands/interrupt.py
def deactivate(self) -> None:
    """Deacitvate the interrupt state.

    Interrupts and context are cleared.
    """
    self.interrupts = {}
    self.context = {}
    self.activated = False

from_dict(data) classmethod

Initialize interrupt state from serialized interrupt state.

Interrupt state can be serialized with the to_dict method.

Source code in strands/interrupt.py
@classmethod
def from_dict(cls, data: dict[str, Any]) -> "_InterruptState":
    """Initiailize interrupt state from serialized interrupt state.

    Interrupt state can be serialized with the `to_dict` method.
    """
    return cls(
        interrupts={
            interrupt_id: Interrupt(**interrupt_data) for interrupt_id, interrupt_data in data["interrupts"].items()
        },
        context=data["context"],
        activated=data["activated"],
    )

resume(prompt)

Configure the interrupt state if resuming from an interrupt event.

Parameters:

Name Type Description Default
prompt AgentInput

User responses if resuming from interrupt.

required

Raises:

Type Description
TypeError

If in interrupt state but user did not provide responses.

Source code in strands/interrupt.py
def resume(self, prompt: "AgentInput") -> None:
    """Configure the interrupt state if resuming from an interrupt event.

    Args:
        prompt: User responses if resuming from interrupt.

    Raises:
        TypeError: If in interrupt state but user did not provide responses.
    """
    if not self.activated:
        return

    if not isinstance(prompt, list):
        raise TypeError(f"prompt_type={type(prompt)} | must resume from interrupt with list of interruptResponse's")

    invalid_types = [
        content_type for content in prompt for content_type in content if content_type != "interruptResponse"
    ]
    if invalid_types:
        raise TypeError(
            f"content_types=<{invalid_types}> | must resume from interrupt with list of interruptResponse's"
        )

    contents = cast(list["InterruptResponseContent"], prompt)
    for content in contents:
        interrupt_id = content["interruptResponse"]["interruptId"]
        interrupt_response = content["interruptResponse"]["response"]

        if interrupt_id not in self.interrupts:
            raise KeyError(f"interrupt_id=<{interrupt_id}> | no interrupt found")

        self.interrupts[interrupt_id].response = interrupt_response

    self.context["responses"] = contents

to_dict()

Serialize to dict for session management.

Source code in strands/interrupt.py
def to_dict(self) -> dict[str, Any]:
    """Serialize to dict for session management."""
    return asdict(self)
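Because `to_dict` relies on `dataclasses.asdict`, nested `Interrupt` values are flattened to dictionaries and can be rebuilt by `from_dict`. A round-trip sketch using simplified stand-ins (the classes below mirror the documented fields but are illustrative, not the strands types):

```python
from dataclasses import asdict, dataclass, field
from typing import Any


@dataclass
class Interrupt:
    """Minimal stand-in for the strands Interrupt type."""

    id: str
    response: Any = None


@dataclass
class InterruptState:
    """Simplified version of _InterruptState with the same serialization."""

    interrupts: dict[str, Interrupt] = field(default_factory=dict)
    context: dict[str, Any] = field(default_factory=dict)
    activated: bool = False

    def to_dict(self) -> dict[str, Any]:
        return asdict(self)

    @classmethod
    def from_dict(cls, data: dict[str, Any]) -> "InterruptState":
        return cls(
            interrupts={k: Interrupt(**v) for k, v in data["interrupts"].items()},
            context=data["context"],
            activated=data["activated"],
        )


state = InterruptState(interrupts={"i-1": Interrupt(id="i-1")}, activated=True)
restored = InterruptState.from_dict(state.to_dict())
print(restored == state)  # True
```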

_ToolCaller

Call tool as a function.

Source code in strands/tools/_caller.py
class _ToolCaller:
    """Call tool as a function."""

    def __init__(self, agent: "Agent | BidiAgent") -> None:
        """Initialize instance.

        Args:
            agent: Agent reference that will accept tool results.
        """
        # WARNING: Do not add any other member variables or methods as this could result in a name conflict with
        #          agent tools and thus break their execution.
        self._agent = agent

    def __getattr__(self, name: str) -> Callable[..., Any]:
        """Call tool as a function.

        This method enables the method-style interface (e.g., `agent.tool.tool_name(param="value")`).
        It matches underscore-separated names to hyphenated tool names (e.g., 'some_thing' matches 'some-thing').

        Args:
            name: The name of the attribute (tool) being accessed.

        Returns:
            A function that when called will execute the named tool.

        Raises:
            AttributeError: If no tool with the given name exists or if multiple tools match the given name.
        """

        def caller(
            user_message_override: str | None = None,
            record_direct_tool_call: bool | None = None,
            **kwargs: Any,
        ) -> Any:
            """Call a tool directly by name.

            Args:
                user_message_override: Optional custom message to record instead of default
                record_direct_tool_call: Whether to record direct tool calls in message history. Overrides class
                    attribute if provided.
                **kwargs: Keyword arguments to pass to the tool.

            Returns:
                The result returned by the tool.

            Raises:
                AttributeError: If the tool doesn't exist.
            """
            if self._agent._interrupt_state.activated:
                raise RuntimeError("cannot directly call tool during interrupt")

            if record_direct_tool_call is not None:
                should_record_direct_tool_call = record_direct_tool_call
            else:
                should_record_direct_tool_call = self._agent.record_direct_tool_call

            should_lock = should_record_direct_tool_call

            from ..agent import Agent  # Locally imported to avoid circular reference

            acquired_lock = (
                should_lock
                and isinstance(self._agent, Agent)
                and self._agent._invocation_lock.acquire_lock(blocking=False)
            )
            if should_lock and not acquired_lock:
                raise ConcurrencyException(
                    "Direct tool call cannot be made while the agent is in the middle of an invocation. "
                    "Set record_direct_tool_call=False to allow direct tool calls during agent invocation."
                )

            try:
                normalized_name = self._find_normalized_tool_name(name)

                # Create unique tool ID and set up the tool request
                tool_id = f"tooluse_{name}_{random.randint(100000000, 999999999)}"
                tool_use: ToolUse = {
                    "toolUseId": tool_id,
                    "name": normalized_name,
                    "input": kwargs.copy(),
                }
                tool_results: list[ToolResult] = []
                invocation_state = kwargs

                async def acall() -> ToolResult:
                    async for event in ToolExecutor._stream(self._agent, tool_use, tool_results, invocation_state):
                        if isinstance(event, ToolInterruptEvent):
                            self._agent._interrupt_state.deactivate()
                            raise RuntimeError("cannot raise interrupt in direct tool call")

                    tool_result = tool_results[0]

                    if should_record_direct_tool_call:
                        # Create a record of this tool execution in the message history
                        await self._record_tool_execution(tool_use, tool_result, user_message_override)

                    return tool_result

                tool_result = run_async(acall)

                # TODO: https://github.com/strands-agents/sdk-python/issues/1311
                if isinstance(self._agent, Agent):
                    self._agent.conversation_manager.apply_management(self._agent)

                return tool_result

            finally:
                if acquired_lock and isinstance(self._agent, Agent):
                    self._agent._invocation_lock.release()

        return caller

    def _find_normalized_tool_name(self, name: str) -> str:
        """Lookup the tool represented by name, replacing characters with underscores as necessary."""
        tool_registry = self._agent.tool_registry.registry

        if tool_registry.get(name):
            return name

        # If the desired name contains underscores, it might be a placeholder for characters that can't be
        # represented as python identifiers but are valid as tool names, such as dashes. In that case, find
        # all tools that can be represented with the normalized name
        if "_" in name:
            filtered_tools = [
                tool_name for (tool_name, tool) in tool_registry.items() if tool_name.replace("-", "_") == name
            ]

            # The registry itself defends against similar names, so we can just take the first match
            if filtered_tools:
                return filtered_tools[0]

        raise AttributeError(f"Tool '{name}' not found")

    async def _record_tool_execution(
        self,
        tool: ToolUse,
        tool_result: ToolResult,
        user_message_override: str | None,
    ) -> None:
        """Record a tool execution in the message history.

        Creates a sequence of messages that represent the tool execution:

        1. A user message describing the tool call
        2. An assistant message with the tool use
        3. A user message with the tool result
        4. An assistant message acknowledging the tool call

        Args:
            tool: The tool call information.
            tool_result: The result returned by the tool.
            user_message_override: Optional custom message to include.
        """
        # Filter tool input parameters to only include those defined in tool spec
        filtered_input = self._filter_tool_parameters_for_recording(tool["name"], tool["input"])

        # Create user message describing the tool call
        input_parameters = json.dumps(filtered_input, default=lambda o: f"<<non-serializable: {type(o).__qualname__}>>")

        user_msg_content: list[ContentBlock] = [
            {"text": (f"agent.tool.{tool['name']} direct tool call.\nInput parameters: {input_parameters}\n")}
        ]

        # Add override message if provided
        if user_message_override:
            user_msg_content.insert(0, {"text": f"{user_message_override}\n"})

        # Create filtered tool use for message history
        filtered_tool: ToolUse = {
            "toolUseId": tool["toolUseId"],
            "name": tool["name"],
            "input": filtered_input,
        }

        # Create the message sequence
        user_msg: Message = {
            "role": "user",
            "content": user_msg_content,
        }
        tool_use_msg: Message = {
            "role": "assistant",
            "content": [{"toolUse": filtered_tool}],
        }
        tool_result_msg: Message = {
            "role": "user",
            "content": [{"toolResult": tool_result}],
        }
        assistant_msg: Message = {
            "role": "assistant",
            "content": [{"text": f"agent.tool.{tool['name']} was called."}],
        }

        # Add to message history
        await self._agent._append_messages(user_msg, tool_use_msg, tool_result_msg, assistant_msg)

    def _filter_tool_parameters_for_recording(self, tool_name: str, input_params: dict[str, Any]) -> dict[str, Any]:
        """Filter input parameters to only include those defined in the tool specification.

        Args:
            tool_name: Name of the tool to get specification for
            input_params: Original input parameters

        Returns:
            Filtered parameters containing only those defined in tool spec
        """
        all_tools_config = self._agent.tool_registry.get_all_tools_config()
        tool_spec = all_tools_config.get(tool_name)

        if not tool_spec or "inputSchema" not in tool_spec:
            return input_params.copy()

        properties = tool_spec["inputSchema"]["json"]["properties"]
        return {k: v for k, v in input_params.items() if k in properties}

__getattr__(name)

Call tool as a function.

This method enables the method-style interface (e.g., agent.tool.tool_name(param="value")). It matches underscore-separated names to hyphenated tool names (e.g., 'some_thing' matches 'some-thing').

Parameters:

Name Type Description Default
name str

The name of the attribute (tool) being accessed.

required

Returns:

Type Description
Callable[..., Any]

A function that when called will execute the named tool.

Raises:

Type Description
AttributeError

If no tool with the given name exists or if multiple tools match the given name.

Source code in strands/tools/_caller.py
def __getattr__(self, name: str) -> Callable[..., Any]:
    """Call tool as a function.

    This method enables the method-style interface (e.g., `agent.tool.tool_name(param="value")`).
    It matches underscore-separated names to hyphenated tool names (e.g., 'some_thing' matches 'some-thing').

    Args:
        name: The name of the attribute (tool) being accessed.

    Returns:
        A function that when called will execute the named tool.

    Raises:
        AttributeError: If no tool with the given name exists or if multiple tools match the given name.
    """

    def caller(
        user_message_override: str | None = None,
        record_direct_tool_call: bool | None = None,
        **kwargs: Any,
    ) -> Any:
        """Call a tool directly by name.

        Args:
            user_message_override: Optional custom message to record instead of default
            record_direct_tool_call: Whether to record direct tool calls in message history. Overrides class
                attribute if provided.
            **kwargs: Keyword arguments to pass to the tool.

        Returns:
            The result returned by the tool.

        Raises:
            AttributeError: If the tool doesn't exist.
        """
        if self._agent._interrupt_state.activated:
            raise RuntimeError("cannot directly call tool during interrupt")

        if record_direct_tool_call is not None:
            should_record_direct_tool_call = record_direct_tool_call
        else:
            should_record_direct_tool_call = self._agent.record_direct_tool_call

        should_lock = should_record_direct_tool_call

        from ..agent import Agent  # Locally imported to avoid circular reference

        acquired_lock = (
            should_lock
            and isinstance(self._agent, Agent)
            and self._agent._invocation_lock.acquire_lock(blocking=False)
        )
        if should_lock and not acquired_lock:
            raise ConcurrencyException(
                "Direct tool call cannot be made while the agent is in the middle of an invocation. "
                "Set record_direct_tool_call=False to allow direct tool calls during agent invocation."
            )

        try:
            normalized_name = self._find_normalized_tool_name(name)

            # Create unique tool ID and set up the tool request
            tool_id = f"tooluse_{name}_{random.randint(100000000, 999999999)}"
            tool_use: ToolUse = {
                "toolUseId": tool_id,
                "name": normalized_name,
                "input": kwargs.copy(),
            }
            tool_results: list[ToolResult] = []
            invocation_state = kwargs

            async def acall() -> ToolResult:
                async for event in ToolExecutor._stream(self._agent, tool_use, tool_results, invocation_state):
                    if isinstance(event, ToolInterruptEvent):
                        self._agent._interrupt_state.deactivate()
                        raise RuntimeError("cannot raise interrupt in direct tool call")

                tool_result = tool_results[0]

                if should_record_direct_tool_call:
                    # Create a record of this tool execution in the message history
                    await self._record_tool_execution(tool_use, tool_result, user_message_override)

                return tool_result

            tool_result = run_async(acall)

            # TODO: https://github.com/strands-agents/sdk-python/issues/1311
            if isinstance(self._agent, Agent):
                self._agent.conversation_manager.apply_management(self._agent)

            return tool_result

        finally:
            if acquired_lock and isinstance(self._agent, Agent):
                self._agent._invocation_lock.release()

    return caller
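The underscore-to-hyphen matching performed when resolving `agent.tool.some_thing` against a registry containing `some-thing` can be sketched in isolation. The registry dict below is a stand-in for `agent.tool_registry.registry`:

```python
def find_normalized_tool_name(registry: dict[str, object], name: str) -> str:
    """Resolve a Python-identifier tool name against registered tool names."""
    if name in registry:
        return name
    # Underscores may stand in for characters (such as dashes) that are valid
    # in tool names but not in Python identifiers.
    if "_" in name:
        matches = [t for t in registry if t.replace("-", "_") == name]
        if matches:
            return matches[0]
    raise AttributeError(f"Tool '{name}' not found")


registry = {"some-thing": object(), "plain_tool": object()}
print(find_normalized_tool_name(registry, "some_thing"))  # some-thing
print(find_normalized_tool_name(registry, "plain_tool"))  # plain_tool
```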

__init__(agent)

Initialize instance.

Parameters:

Name Type Description Default
agent Agent | BidiAgent

Agent reference that will accept tool results.

required
Source code in strands/tools/_caller.py
def __init__(self, agent: "Agent | BidiAgent") -> None:
    """Initialize instance.

    Args:
        agent: Agent reference that will accept tool results.
    """
    # WARNING: Do not add any other member variables or methods as this could result in a name conflict with
    #          agent tools and thus break their execution.
    self._agent = agent

event_loop_cycle(agent, invocation_state, structured_output_context=None) async

Execute a single cycle of the event loop.

This core function processes a single conversation turn, handling model inference, tool execution, and error recovery. It manages the entire lifecycle of a conversation turn, including:

  1. Initializing cycle state and metrics
  2. Checking execution limits
  3. Processing messages with the model
  4. Handling tool execution requests
  5. Managing recursive calls for multi-turn tool interactions
  6. Collecting and reporting metrics
  7. Error handling and recovery

Parameters:

Name Type Description Default
agent Agent

The agent for which the cycle is being executed.

required
invocation_state dict[str, Any]

Additional arguments including:

  • request_state: State maintained across cycles
  • event_loop_cycle_id: Unique ID for this cycle
  • event_loop_cycle_span: Current tracing Span for this cycle
required
structured_output_context StructuredOutputContext | None

Optional context for structured output management.

None

Yields:

Type Description
AsyncGenerator[TypedEvent, None]

Model and tool stream events. The last event is a tuple containing:

  • StopReason: Reason the model stopped generating (e.g., "tool_use")
  • Message: The generated message from the model
  • EventLoopMetrics: Updated metrics for the event loop
  • Any: Updated request state

Raises:

Type Description
EventLoopException

If an error occurs during execution

ContextWindowOverflowException

If the input is too large for the model

Source code in strands/event_loop/event_loop.py
async def event_loop_cycle(
    agent: "Agent",
    invocation_state: dict[str, Any],
    structured_output_context: StructuredOutputContext | None = None,
) -> AsyncGenerator[TypedEvent, None]:
    """Execute a single cycle of the event loop.

    This core function processes a single conversation turn, handling model inference, tool execution, and error
    recovery. It manages the entire lifecycle of a conversation turn, including:

    1. Initializing cycle state and metrics
    2. Checking execution limits
    3. Processing messages with the model
    4. Handling tool execution requests
    5. Managing recursive calls for multi-turn tool interactions
    6. Collecting and reporting metrics
    7. Error handling and recovery

    Args:
        agent: The agent for which the cycle is being executed.
        invocation_state: Additional arguments including:

            - request_state: State maintained across cycles
            - event_loop_cycle_id: Unique ID for this cycle
            - event_loop_cycle_span: Current tracing Span for this cycle
        structured_output_context: Optional context for structured output management.

    Yields:
        Model and tool stream events. The last event is a tuple containing:

            - StopReason: Reason the model stopped generating (e.g., "tool_use")
            - Message: The generated message from the model
            - EventLoopMetrics: Updated metrics for the event loop
            - Any: Updated request state

    Raises:
        EventLoopException: If an error occurs during execution
        ContextWindowOverflowException: If the input is too large for the model
    """
    structured_output_context = structured_output_context or StructuredOutputContext()

    # Initialize cycle state
    invocation_state["event_loop_cycle_id"] = uuid.uuid4()

    # Initialize state and get cycle trace
    if "request_state" not in invocation_state:
        invocation_state["request_state"] = {}
    attributes = {"event_loop_cycle_id": str(invocation_state.get("event_loop_cycle_id"))}
    cycle_start_time, cycle_trace = agent.event_loop_metrics.start_cycle(attributes=attributes)
    invocation_state["event_loop_cycle_trace"] = cycle_trace

    yield StartEvent()
    yield StartEventLoopEvent()

    # Create tracer span for this event loop cycle
    tracer = get_tracer()
    cycle_span = tracer.start_event_loop_cycle_span(
        invocation_state=invocation_state,
        messages=agent.messages,
        parent_span=agent.trace_span,
        custom_trace_attributes=agent.trace_attributes,
    )
    invocation_state["event_loop_cycle_span"] = cycle_span

    with trace_api.use_span(cycle_span, end_on_exit=True):
        # Skipping model invocation if in interrupt state as interrupts are currently only supported for tool calls.
        if agent._interrupt_state.activated:
            stop_reason: StopReason = "tool_use"
            message = agent._interrupt_state.context["tool_use_message"]
        # Skip model invocation if the latest message contains ToolUse
        elif _has_tool_use_in_latest_message(agent.messages):
            stop_reason = "tool_use"
            message = agent.messages[-1]
        else:
            model_events = _handle_model_execution(
                agent, cycle_span, cycle_trace, invocation_state, tracer, structured_output_context
            )
            async for model_event in model_events:
                if not isinstance(model_event, ModelStopReason):
                    yield model_event

            stop_reason, message, *_ = model_event["stop"]
            yield ModelMessageEvent(message=message)

        try:
            if stop_reason == "max_tokens":
                """
                Handle max_tokens limit reached by the model.

                When the model reaches its maximum token limit, this represents a potentially unrecoverable
                state where the model's response was truncated. By default, Strands fails hard with an
                MaxTokensReachedException to maintain consistency with other failure types.
                """
                raise MaxTokensReachedException(
                    message=(
                        "Agent has reached an unrecoverable state due to max_tokens limit. "
                        "For more information see: "
                        "https://strandsagents.com/latest/user-guide/concepts/agents/agent-loop/#maxtokensreachedexception"
                    )
                )

            if stop_reason == "tool_use":
                # Handle tool execution
                tool_events = _handle_tool_execution(
                    stop_reason,
                    message,
                    agent=agent,
                    cycle_trace=cycle_trace,
                    cycle_span=cycle_span,
                    cycle_start_time=cycle_start_time,
                    invocation_state=invocation_state,
                    tracer=tracer,
                    structured_output_context=structured_output_context,
                )
                async for tool_event in tool_events:
                    yield tool_event

                return

            # End the cycle and return results
            agent.event_loop_metrics.end_cycle(cycle_start_time, cycle_trace, attributes)
            # Set attributes before span auto-closes
            tracer.end_event_loop_cycle_span(cycle_span, message)
        except EventLoopException:
            # Don't yield or log the exception - we already did it when we
            # raised the exception and we don't need that duplication.
            raise
        except (ContextWindowOverflowException, MaxTokensReachedException) as e:
            # Special cased exceptions which we want to bubble up rather than get wrapped in an EventLoopException
            raise e
        except Exception as e:
            # Handle any other exceptions
            yield ForceStopEvent(reason=e)
            logger.exception("cycle failed")
            raise EventLoopException(e, invocation_state["request_state"]) from e

        # Force structured output tool call if LLM didn't use it automatically
        if structured_output_context.is_enabled and stop_reason == "end_turn":
            if structured_output_context.force_attempted:
                raise StructuredOutputException(
                    "The model failed to invoke the structured output tool even after it was forced."
                )
            structured_output_context.set_forced_mode()
            logger.debug("Forcing structured output tool")
            await agent._append_messages(
                {"role": "user", "content": [{"text": structured_output_context.structured_output_prompt}]}
            )

            events = recurse_event_loop(
                agent=agent, invocation_state=invocation_state, structured_output_context=structured_output_context
            )
            async for typed_event in events:
                yield typed_event
            return

        yield EventLoopStopEvent(stop_reason, message, agent.event_loop_metrics, invocation_state["request_state"])
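A caller consumes this cycle as an async generator: intermediate model and tool events stream out, and the final event carries the stop data. A condensed sketch of that consumption pattern (the event classes below are stand-ins, not the strands types):

```python
import asyncio
from dataclasses import dataclass
from typing import Any, AsyncGenerator


@dataclass
class StreamEvent:
    """Stand-in for an intermediate model/tool stream event."""

    delta: str


@dataclass
class StopEvent:
    """Stand-in for the final event carrying stop reason and message."""

    stop_reason: str
    message: dict[str, Any]


async def cycle() -> AsyncGenerator[object, None]:
    # An illustrative event stream shaped like event_loop_cycle's output.
    yield StreamEvent(delta="Hello")
    yield StreamEvent(delta=" world")
    yield StopEvent(stop_reason="end_turn", message={"role": "assistant"})


async def consume() -> StopEvent:
    last = None
    async for event in cycle():
        last = event  # stream intermediate events; keep the final one
    assert isinstance(last, StopEvent)
    return last


stop = asyncio.run(consume())
print(stop.stop_reason)  # end_turn
```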

generate_missing_tool_result_content(tool_use_ids)

Generate ToolResult content blocks for orphaned ToolUse message.

Source code in strands/tools/_tool_helpers.py
def generate_missing_tool_result_content(tool_use_ids: list[str]) -> list[ContentBlock]:
    """Generate ToolResult content blocks for orphaned ToolUse message."""
    return [
        {
            "toolResult": {
                "toolUseId": tool_use_id,
                "status": "error",
                "content": [{"text": "Tool was interrupted."}],
            }
        }
        for tool_use_id in tool_use_ids
    ]

get_tracer()

Get or create the global tracer.

Returns:

Type Description
Tracer

The global tracer instance.

Source code in strands/telemetry/tracer.py
def get_tracer() -> Tracer:
    """Get or create the global tracer.

    Returns:
        The global tracer instance.
    """
    global _tracer_instance

    if not _tracer_instance:
        _tracer_instance = Tracer()

    return _tracer_instance

null_callback_handler(**_kwargs)

Callback handler that discards all output.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `**_kwargs` | `Any` | Event data (ignored). | `{}` |
Source code in strands/handlers/callback_handler.py
def null_callback_handler(**_kwargs: Any) -> None:
    """Callback handler that discards all output.

    Args:
        **_kwargs: Event data (ignored).
    """
    return None

run_async(async_func)

Run an async function in a separate thread to avoid event loop conflicts.

This utility handles the common pattern of running async code from sync contexts by using ThreadPoolExecutor to isolate the async execution.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `async_func` | `Callable[[], Awaitable[T]]` | A callable that returns an awaitable. | required |

Returns:

| Type | Description |
| --- | --- |
| `T` | The result of the async function. |

Source code in strands/_async.py
def run_async(async_func: Callable[[], Awaitable[T]]) -> T:
    """Run an async function in a separate thread to avoid event loop conflicts.

    This utility handles the common pattern of running async code from sync contexts
    by using ThreadPoolExecutor to isolate the async execution.

    Args:
        async_func: A callable that returns an awaitable

    Returns:
        The result of the async function
    """

    async def execute_async() -> T:
        return await async_func()

    def execute() -> T:
        return asyncio.run(execute_async())

    with ThreadPoolExecutor() as executor:
        context = contextvars.copy_context()
        future = executor.submit(context.run, execute)
        return future.result()
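The pattern above can be exercised end to end. This self-contained sketch mirrors the source: `asyncio.run` executes inside a worker thread, so it works even when the calling thread already has a running event loop, and `contextvars.copy_context()` carries context variables into that thread:

```python
import asyncio
import contextvars
from concurrent.futures import ThreadPoolExecutor
from typing import Awaitable, Callable, TypeVar

T = TypeVar("T")


def run_async(async_func: Callable[[], Awaitable[T]]) -> T:
    """Run an async callable from sync code by isolating it in a worker thread."""

    def execute() -> T:
        # A fresh event loop in the worker thread avoids conflicts with any
        # loop that may be running in the caller's thread.
        return asyncio.run(async_func())

    with ThreadPoolExecutor() as executor:
        context = contextvars.copy_context()
        future = executor.submit(context.run, execute)
        return future.result()


async def compute() -> int:
    await asyncio.sleep(0)
    return 42


print(run_async(compute))  # 42
```

The trade-off is one thread and one event loop per call, so this suits occasional sync-to-async bridging rather than hot paths.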

serialize(obj)

Serialize an object to JSON with consistent settings.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `obj` | `Any` | The object to serialize. | required |

Returns:

| Type | Description |
| --- | --- |
| `str` | JSON string representation of the object. |

Source code in strands/telemetry/tracer.py
def serialize(obj: Any) -> str:
    """Serialize an object to JSON with consistent settings.

    Args:
        obj: The object to serialize

    Returns:
        JSON string representation of the object
    """
    return json.dumps(obj, ensure_ascii=False, cls=JSONEncoder)
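A minimal sketch of the same shape. The SDK's `JSONEncoder` is not reproduced here; this stand-in simply falls back to `str()` for non-JSON-native types (such as `datetime`), while `ensure_ascii=False` keeps non-ASCII characters intact:

```python
import json
from datetime import datetime
from typing import Any


class JSONEncoder(json.JSONEncoder):
    """Stand-in encoder: stringify anything json cannot serialize natively."""

    def default(self, o: Any) -> Any:
        return str(o)


def serialize(obj: Any) -> str:
    """Serialize an object to JSON with consistent settings (same shape as above)."""
    return json.dumps(obj, ensure_ascii=False, cls=JSONEncoder)


print(serialize({"when": datetime(2024, 1, 1), "text": "héllo"}))
# {"when": "2024-01-01 00:00:00", "text": "héllo"}
```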