
strands.agent.conversation_manager.summarizing_conversation_manager

Summarizing conversation history management with configurable options.

DEFAULT_SUMMARIZATION_PROMPT = 'You are a conversation summarizer. Provide a concise summary of the conversation history.\n\nFormat Requirements:\n- You MUST create a structured and concise summary in bullet-point format.\n- You MUST NOT respond conversationally.\n- You MUST NOT address the user directly.\n- You MUST NOT comment on tool availability.\n\nAssumptions:\n- You MUST NOT assume tool executions failed unless otherwise stated.\n\nTask:\nYour task is to create a structured summary document:\n- It MUST contain bullet points with key topics and questions covered\n- It MUST contain bullet points for all significant tools executed and their results\n- It MUST contain bullet points for any code or technical information shared\n- It MUST contain a section of key insights gained\n- It MUST format the summary in the third person\n\nExample format:\n\n## Conversation Summary\n* Topic 1: Key information\n* Topic 2: Key information\n*\n## Tools Executed\n* Tool X: Result Y' module-attribute

logger = logging.getLogger(__name__) module-attribute
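
DEFAULT_SUMMARIZATION_PROMPT above is the fallback prompt used when no custom summarization prompt is supplied. Below is a minimal sketch of wiring a summarizing manager into an agent; the `SummarizingConversationManager` import path follows this module's location, and the `summarization_system_prompt` keyword is an assumption about its constructor, so check the class reference for the exact signature.

```python
from strands import Agent
from strands.agent.conversation_manager import SummarizingConversationManager

# Assumed keyword argument; when omitted, the manager is expected to fall back
# to DEFAULT_SUMMARIZATION_PROMPT.
manager = SummarizingConversationManager(
    summarization_system_prompt="Summarize the conversation as terse bullet points.",
)

# Hand context-window management to the summarizing manager.
agent = Agent(conversation_manager=manager)
```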

Agent

Core Agent interface.

An agent orchestrates the following workflow:

  1. Receives user input
  2. Processes the input using a language model
  3. Decides whether to use tools to gather information or perform actions
  4. Executes those tools and receives results
  5. Continues reasoning with the new information
  6. Produces a final response
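
For orientation, here is a minimal usage sketch of that loop from the caller's side, assuming default Bedrock credentials are configured and the optional `strands_tools` package is installed:

```python
from strands import Agent
from strands_tools import calculator  # example tool; assumes strands_tools is installed

agent = Agent(tools=[calculator])

# A single call drives the full loop: model inference, optional tool use, final response.
result = agent("What is 17 * 23?")
print(result.message)
```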
Source code in strands/agent/agent.py
class Agent:
    """Core Agent interface.

    An agent orchestrates the following workflow:

    1. Receives user input
    2. Processes the input using a language model
    3. Decides whether to use tools to gather information or perform actions
    4. Executes those tools and receives results
    5. Continues reasoning with the new information
    6. Produces a final response
    """

    # For backwards compatibility
    ToolCaller = _ToolCaller

    def __init__(
        self,
        model: Union[Model, str, None] = None,
        messages: Optional[Messages] = None,
        tools: Optional[list[Union[str, dict[str, str], "ToolProvider", Any]]] = None,
        system_prompt: Optional[str | list[SystemContentBlock]] = None,
        structured_output_model: Optional[Type[BaseModel]] = None,
        callback_handler: Optional[
            Union[Callable[..., Any], _DefaultCallbackHandlerSentinel]
        ] = _DEFAULT_CALLBACK_HANDLER,
        conversation_manager: Optional[ConversationManager] = None,
        record_direct_tool_call: bool = True,
        load_tools_from_directory: bool = False,
        trace_attributes: Optional[Mapping[str, AttributeValue]] = None,
        *,
        agent_id: Optional[str] = None,
        name: Optional[str] = None,
        description: Optional[str] = None,
        state: Optional[Union[AgentState, dict]] = None,
        hooks: Optional[list[HookProvider]] = None,
        session_manager: Optional[SessionManager] = None,
        tool_executor: Optional[ToolExecutor] = None,
    ):
        """Initialize the Agent with the specified configuration.

        Args:
            model: Provider for running inference or a string representing the model-id for Bedrock to use.
                Defaults to strands.models.BedrockModel if None.
            messages: List of initial messages to pre-load into the conversation.
                Defaults to an empty list if None.
            tools: List of tools to make available to the agent.
                Can be specified as:

                - String tool names (e.g., "retrieve")
                - File paths (e.g., "/path/to/tool.py")
                - Imported Python modules (e.g., from strands_tools import current_time)
                - Dictionaries with name/path keys (e.g., {"name": "tool_name", "path": "/path/to/tool.py"})
                - ToolProvider instances for managed tool collections
                - Functions decorated with `@strands.tool` decorator.

                If provided, only these tools will be available. If None, all tools will be available.
            system_prompt: System prompt to guide model behavior.
                Can be a string or a list of SystemContentBlock objects for advanced features like caching.
                If None, the model will behave according to its default settings.
            structured_output_model: Pydantic model type(s) for structured output.
                When specified, all agent calls will attempt to return structured output of this type.
                This can be overridden on the agent invocation.
                Defaults to None (no structured output).
            callback_handler: Callback for processing events as they happen during agent execution.
                If not provided (using the default), a new PrintingCallbackHandler instance is created.
                If explicitly set to None, null_callback_handler is used.
            conversation_manager: Manager for conversation history and context window.
                Defaults to strands.agent.conversation_manager.SlidingWindowConversationManager if None.
            record_direct_tool_call: Whether to record direct tool calls in message history.
                Defaults to True.
            load_tools_from_directory: Whether to load and automatically reload tools in the `./tools/` directory.
                Defaults to False.
            trace_attributes: Custom trace attributes to apply to the agent's trace span.
            agent_id: Optional ID for the agent, useful for session management and multi-agent scenarios.
                Defaults to "default".
            name: name of the Agent
                Defaults to "Strands Agents".
            description: description of what the Agent does
                Defaults to None.
            state: stateful information for the agent. Can be either an AgentState object, or a json serializable dict.
                Defaults to an empty AgentState object.
            hooks: hooks to be added to the agent hook registry
                Defaults to None.
            session_manager: Manager for handling agent sessions including conversation history and state.
                If provided, enables session-based persistence and state management.
            tool_executor: Definition of tool execution strategy (e.g., sequential, concurrent, etc.).

        Raises:
            ValueError: If agent id contains path separators.
        """
        self.model = BedrockModel() if not model else BedrockModel(model_id=model) if isinstance(model, str) else model
        self.messages = messages if messages is not None else []
        # initializing self._system_prompt for backwards compatibility
        self._system_prompt, self._system_prompt_content = self._initialize_system_prompt(system_prompt)
        self._default_structured_output_model = structured_output_model
        self.agent_id = _identifier.validate(agent_id or _DEFAULT_AGENT_ID, _identifier.Identifier.AGENT)
        self.name = name or _DEFAULT_AGENT_NAME
        self.description = description

        # If not provided, create a new PrintingCallbackHandler instance
        # If explicitly set to None, use null_callback_handler
        # Otherwise use the passed callback_handler
        self.callback_handler: Union[Callable[..., Any], PrintingCallbackHandler]
        if isinstance(callback_handler, _DefaultCallbackHandlerSentinel):
            self.callback_handler = PrintingCallbackHandler()
        elif callback_handler is None:
            self.callback_handler = null_callback_handler
        else:
            self.callback_handler = callback_handler

        self.conversation_manager = conversation_manager if conversation_manager else SlidingWindowConversationManager()

        # Process trace attributes to ensure they're of compatible types
        self.trace_attributes: dict[str, AttributeValue] = {}
        if trace_attributes:
            for k, v in trace_attributes.items():
                if isinstance(v, (str, int, float, bool)) or (
                    isinstance(v, list) and all(isinstance(x, (str, int, float, bool)) for x in v)
                ):
                    self.trace_attributes[k] = v

        self.record_direct_tool_call = record_direct_tool_call
        self.load_tools_from_directory = load_tools_from_directory

        self.tool_registry = ToolRegistry()

        # Process tool list if provided
        if tools is not None:
            self.tool_registry.process_tools(tools)

        # Initialize tools and configuration
        self.tool_registry.initialize_tools(self.load_tools_from_directory)
        if load_tools_from_directory:
            self.tool_watcher = ToolWatcher(tool_registry=self.tool_registry)

        self.event_loop_metrics = EventLoopMetrics()

        # Initialize tracer instance (no-op if not configured)
        self.tracer = get_tracer()
        self.trace_span: Optional[trace_api.Span] = None

        # Initialize agent state management
        if state is not None:
            if isinstance(state, dict):
                self.state = AgentState(state)
            elif isinstance(state, AgentState):
                self.state = state
            else:
                raise ValueError("state must be an AgentState object or a dict")
        else:
            self.state = AgentState()

        self.tool_caller = _ToolCaller(self)

        self.hooks = HookRegistry()

        self._interrupt_state = _InterruptState()

        # Initialize session management functionality
        self._session_manager = session_manager
        if self._session_manager:
            self.hooks.add_hook(self._session_manager)

        # Allow conversation_managers to subscribe to hooks
        self.hooks.add_hook(self.conversation_manager)

        self.tool_executor = tool_executor or ConcurrentToolExecutor()

        if hooks:
            for hook in hooks:
                self.hooks.add_hook(hook)
        self.hooks.invoke_callbacks(AgentInitializedEvent(agent=self))

    @property
    def system_prompt(self) -> str | None:
        """Get the system prompt as a string for backwards compatibility.

        Returns the system prompt as a concatenated string when it contains text content,
        or None if no text content is present. This maintains backwards compatibility
        with existing code that expects system_prompt to be a string.

        Returns:
            The system prompt as a string, or None if no text content exists.
        """
        return self._system_prompt

    @system_prompt.setter
    def system_prompt(self, value: str | list[SystemContentBlock] | None) -> None:
        """Set the system prompt and update internal content representation.

        Accepts either a string or list of SystemContentBlock objects.
        When set, both the backwards-compatible string representation and the internal
        content block representation are updated to maintain consistency.

        Args:
            value: System prompt as string, list of SystemContentBlock objects, or None.
                  - str: Simple text prompt (most common use case)
                  - list[SystemContentBlock]: Content blocks with features like caching
                  - None: Clear the system prompt
        """
        self._system_prompt, self._system_prompt_content = self._initialize_system_prompt(value)

    @property
    def tool(self) -> _ToolCaller:
        """Call tool as a function.

        Returns:
            Tool caller through which user can invoke tool as a function.

        Example:
            ```
            agent = Agent(tools=[calculator])
            agent.tool.calculator(...)
            ```
        """
        return self.tool_caller

    @property
    def tool_names(self) -> list[str]:
        """Get a list of all registered tool names.

        Returns:
            Names of all tools available to this agent.
        """
        all_tools = self.tool_registry.get_all_tools_config()
        return list(all_tools.keys())

    def __call__(
        self,
        prompt: AgentInput = None,
        *,
        invocation_state: dict[str, Any] | None = None,
        structured_output_model: Type[BaseModel] | None = None,
        **kwargs: Any,
    ) -> AgentResult:
        """Process a natural language prompt through the agent's event loop.

        This method implements the conversational interface with multiple input patterns:
        - String input: `agent("hello!")`
        - ContentBlock list: `agent([{"text": "hello"}, {"image": {...}}])`
        - Message list: `agent([{"role": "user", "content": [{"text": "hello"}]}])`
        - No input: `agent()` - uses existing conversation history

        Args:
            prompt: User input in various formats:
                - str: Simple text input
                - list[ContentBlock]: Multi-modal content blocks
                - list[Message]: Complete messages with roles
                - None: Use existing conversation history
            invocation_state: Additional parameters to pass through the event loop.
            structured_output_model: Pydantic model type(s) for structured output (overrides agent default).
            **kwargs: Additional parameters to pass through the event loop.[Deprecating]

        Returns:
            Result object containing:

                - stop_reason: Why the event loop stopped (e.g., "end_turn", "max_tokens")
                - message: The final message from the model
                - metrics: Performance metrics from the event loop
                - state: The final state of the event loop
                - structured_output: Parsed structured output when structured_output_model was specified
        """
        return run_async(
            lambda: self.invoke_async(
                prompt, invocation_state=invocation_state, structured_output_model=structured_output_model, **kwargs
            )
        )

    async def invoke_async(
        self,
        prompt: AgentInput = None,
        *,
        invocation_state: dict[str, Any] | None = None,
        structured_output_model: Type[BaseModel] | None = None,
        **kwargs: Any,
    ) -> AgentResult:
        """Process a natural language prompt through the agent's event loop.

        This method implements the conversational interface with multiple input patterns:
        - String input: Simple text input
        - ContentBlock list: Multi-modal content blocks
        - Message list: Complete messages with roles
        - No input: Use existing conversation history

        Args:
            prompt: User input in various formats:
                - str: Simple text input
                - list[ContentBlock]: Multi-modal content blocks
                - list[Message]: Complete messages with roles
                - None: Use existing conversation history
            invocation_state: Additional parameters to pass through the event loop.
            structured_output_model: Pydantic model type(s) for structured output (overrides agent default).
            **kwargs: Additional parameters to pass through the event loop.[Deprecating]

        Returns:
            Result: object containing:

                - stop_reason: Why the event loop stopped (e.g., "end_turn", "max_tokens")
                - message: The final message from the model
                - metrics: Performance metrics from the event loop
                - state: The final state of the event loop
        """
        events = self.stream_async(
            prompt, invocation_state=invocation_state, structured_output_model=structured_output_model, **kwargs
        )
        async for event in events:
            _ = event

        return cast(AgentResult, event["result"])

    def structured_output(self, output_model: Type[T], prompt: AgentInput = None) -> T:
        """This method allows you to get structured output from the agent.

        If you pass in a prompt, it will be used temporarily without adding it to the conversation history.
        If you don't pass in a prompt, it will use only the existing conversation history to respond.

        For smaller models, you may want to use the optional prompt to add additional instructions to explicitly
        instruct the model to output the structured data.

        Args:
            output_model: The output model (a JSON schema written as a Pydantic BaseModel)
                that the agent will use when responding.
            prompt: The prompt to use for the agent in various formats:
                - str: Simple text input
                - list[ContentBlock]: Multi-modal content blocks
                - list[Message]: Complete messages with roles
                - None: Use existing conversation history

        Raises:
            ValueError: If no conversation history or prompt is provided.
        """
        warnings.warn(
            "Agent.structured_output method is deprecated."
            " You should pass in `structured_output_model` directly into the agent invocation."
            " see: https://strandsagents.com/latest/documentation/docs/user-guide/concepts/agents/structured-output/",
            category=DeprecationWarning,
            stacklevel=2,
        )

        return run_async(lambda: self.structured_output_async(output_model, prompt))

    async def structured_output_async(self, output_model: Type[T], prompt: AgentInput = None) -> T:
        """This method allows you to get structured output from the agent.

        If you pass in a prompt, it will be used temporarily without adding it to the conversation history.
        If you don't pass in a prompt, it will use only the existing conversation history to respond.

        For smaller models, you may want to use the optional prompt to add additional instructions to explicitly
        instruct the model to output the structured data.

        Args:
            output_model: The output model (a JSON schema written as a Pydantic BaseModel)
                that the agent will use when responding.
            prompt: The prompt to use for the agent (will not be added to conversation history).

        Raises:
            ValueError: If no conversation history or prompt is provided.
        """
        if self._interrupt_state.activated:
            raise RuntimeError("cannot call structured output during interrupt")

        warnings.warn(
            "Agent.structured_output_async method is deprecated."
            " You should pass in `structured_output_model` directly into the agent invocation."
            " see: https://strandsagents.com/latest/documentation/docs/user-guide/concepts/agents/structured-output/",
            category=DeprecationWarning,
            stacklevel=2,
        )
        await self.hooks.invoke_callbacks_async(BeforeInvocationEvent(agent=self))
        with self.tracer.tracer.start_as_current_span(
            "execute_structured_output", kind=trace_api.SpanKind.CLIENT
        ) as structured_output_span:
            try:
                if not self.messages and not prompt:
                    raise ValueError("No conversation history or prompt provided")

                temp_messages: Messages = self.messages + await self._convert_prompt_to_messages(prompt)

                structured_output_span.set_attributes(
                    {
                        "gen_ai.system": "strands-agents",
                        "gen_ai.agent.name": self.name,
                        "gen_ai.agent.id": self.agent_id,
                        "gen_ai.operation.name": "execute_structured_output",
                    }
                )
                if self.system_prompt:
                    structured_output_span.add_event(
                        "gen_ai.system.message",
                        attributes={"role": "system", "content": serialize([{"text": self.system_prompt}])},
                    )
                for message in temp_messages:
                    structured_output_span.add_event(
                        f"gen_ai.{message['role']}.message",
                        attributes={"role": message["role"], "content": serialize(message["content"])},
                    )
                events = self.model.structured_output(output_model, temp_messages, system_prompt=self.system_prompt)
                async for event in events:
                    if isinstance(event, TypedEvent):
                        event.prepare(invocation_state={})
                        if event.is_callback_event:
                            self.callback_handler(**event.as_dict())

                structured_output_span.add_event(
                    "gen_ai.choice", attributes={"message": serialize(event["output"].model_dump())}
                )
                return event["output"]

            finally:
                await self.hooks.invoke_callbacks_async(AfterInvocationEvent(agent=self))

    def cleanup(self) -> None:
        """Clean up resources used by the agent.

        This method cleans up all tool providers that require explicit cleanup,
        such as MCP clients. It should be called when the agent is no longer needed
        to ensure proper resource cleanup.

        Note: This method uses a "belt and braces" approach with automatic cleanup
        through finalizers as a fallback, but explicit cleanup is recommended.
        """
        self.tool_registry.cleanup()

    def __del__(self) -> None:
        """Clean up resources when agent is garbage collected."""
        # __del__ is called even when an exception is thrown in the constructor,
        # so there is no guarantee tool_registry was set.
        if hasattr(self, "tool_registry"):
            self.tool_registry.cleanup()

    async def stream_async(
        self,
        prompt: AgentInput = None,
        *,
        invocation_state: dict[str, Any] | None = None,
        structured_output_model: Type[BaseModel] | None = None,
        **kwargs: Any,
    ) -> AsyncIterator[Any]:
        """Process a natural language prompt and yield events as an async iterator.

        This method provides an asynchronous interface for streaming agent events with multiple input patterns:
        - String input: Simple text input
        - ContentBlock list: Multi-modal content blocks
        - Message list: Complete messages with roles
        - No input: Use existing conversation history

        Args:
            prompt: User input in various formats:
                - str: Simple text input
                - list[ContentBlock]: Multi-modal content blocks
                - list[Message]: Complete messages with roles
                - None: Use existing conversation history
            invocation_state: Additional parameters to pass through the event loop.
            structured_output_model: Pydantic model type(s) for structured output (overrides agent default).
            **kwargs: Additional parameters to pass to the event loop.[Deprecating]

        Yields:
            An async iterator that yields events. Each event is a dictionary containing
                information about the current state of processing, such as:

                - data: Text content being generated
                - complete: Whether this is the final chunk
                - current_tool_use: Information about tools being executed
                - And other event data provided by the callback handler

        Raises:
            Exception: Any exceptions from the agent invocation will be propagated to the caller.

        Example:
            ```python
            async for event in agent.stream_async("Analyze this data"):
                if "data" in event:
                    yield event["data"]
            ```
        """
        self._interrupt_state.resume(prompt)

        self.event_loop_metrics.reset_usage_metrics()

        merged_state = {}
        if kwargs:
            warnings.warn("`**kwargs` parameter is deprecating, use `invocation_state` instead.", stacklevel=2)
            merged_state.update(kwargs)
            if invocation_state is not None:
                merged_state["invocation_state"] = invocation_state
        else:
            if invocation_state is not None:
                merged_state = invocation_state

        callback_handler = self.callback_handler
        if kwargs:
            callback_handler = kwargs.get("callback_handler", self.callback_handler)

        # Process input and get message to add (if any)
        messages = await self._convert_prompt_to_messages(prompt)

        self.trace_span = self._start_agent_trace_span(messages)

        with trace_api.use_span(self.trace_span):
            try:
                events = self._run_loop(messages, merged_state, structured_output_model)

                async for event in events:
                    event.prepare(invocation_state=merged_state)

                    if event.is_callback_event:
                        as_dict = event.as_dict()
                        callback_handler(**as_dict)
                        yield as_dict

                result = AgentResult(*event["stop"])
                callback_handler(result=result)
                yield AgentResultEvent(result=result).as_dict()

                self._end_agent_trace_span(response=result)

            except Exception as e:
                self._end_agent_trace_span(error=e)
                raise

    async def _run_loop(
        self,
        messages: Messages,
        invocation_state: dict[str, Any],
        structured_output_model: Type[BaseModel] | None = None,
    ) -> AsyncGenerator[TypedEvent, None]:
        """Execute the agent's event loop with the given message and parameters.

        Args:
            messages: The input messages to add to the conversation.
            invocation_state: Additional parameters to pass to the event loop.
            structured_output_model: Optional Pydantic model type for structured output.

        Yields:
            Events from the event loop cycle.
        """
        await self.hooks.invoke_callbacks_async(BeforeInvocationEvent(agent=self))

        agent_result: AgentResult | None = None
        try:
            yield InitEventLoopEvent()

            await self._append_messages(*messages)

            structured_output_context = StructuredOutputContext(
                structured_output_model or self._default_structured_output_model
            )

            # Execute the event loop cycle with retry logic for context limits
            events = self._execute_event_loop_cycle(invocation_state, structured_output_context)
            async for event in events:
                # Signal from the model provider that the message sent by the user should be redacted,
                # likely due to a guardrail.
                if (
                    isinstance(event, ModelStreamChunkEvent)
                    and event.chunk
                    and event.chunk.get("redactContent")
                    and event.chunk["redactContent"].get("redactUserContentMessage")
                ):
                    self.messages[-1]["content"] = self._redact_user_content(
                        self.messages[-1]["content"], str(event.chunk["redactContent"]["redactUserContentMessage"])
                    )
                    if self._session_manager:
                        self._session_manager.redact_latest_message(self.messages[-1], self)
                yield event

            # Capture the result from the final event if available
            if isinstance(event, EventLoopStopEvent):
                agent_result = AgentResult(*event["stop"])

        finally:
            self.conversation_manager.apply_management(self)
            await self.hooks.invoke_callbacks_async(AfterInvocationEvent(agent=self, result=agent_result))

    async def _execute_event_loop_cycle(
        self, invocation_state: dict[str, Any], structured_output_context: StructuredOutputContext | None = None
    ) -> AsyncGenerator[TypedEvent, None]:
        """Execute the event loop cycle with retry logic for context window limits.

        This internal method handles the execution of the event loop cycle and implements
        retry logic for handling context window overflow exceptions by reducing the
        conversation context and retrying.

        Args:
            invocation_state: Additional parameters to pass to the event loop.
            structured_output_context: Optional structured output context for this invocation.

        Yields:
            Events of the loop cycle.
        """
        # Add `Agent` to invocation_state to keep backwards-compatibility
        invocation_state["agent"] = self

        if structured_output_context:
            structured_output_context.register_tool(self.tool_registry)

        try:
            events = event_loop_cycle(
                agent=self,
                invocation_state=invocation_state,
                structured_output_context=structured_output_context,
            )
            async for event in events:
                yield event

        except ContextWindowOverflowException as e:
            # Try reducing the context size and retrying
            self.conversation_manager.reduce_context(self, e=e)

            # Sync agent after reduce_context to keep conversation_manager_state up to date in the session
            if self._session_manager:
                self._session_manager.sync_agent(self)

            events = self._execute_event_loop_cycle(invocation_state, structured_output_context)
            async for event in events:
                yield event

        finally:
            if structured_output_context:
                structured_output_context.cleanup(self.tool_registry)

    async def _convert_prompt_to_messages(self, prompt: AgentInput) -> Messages:
        if self._interrupt_state.activated:
            return []

        messages: Messages | None = None
        if prompt is not None:
            # Check if the latest message is toolUse
            if len(self.messages) > 0 and any("toolUse" in content for content in self.messages[-1]["content"]):
                # Add toolResult message after to have a valid conversation
                logger.info(
                    "Agents latest message is toolUse, appending a toolResult message to have valid conversation."
                )
                tool_use_ids = [
                    content["toolUse"]["toolUseId"] for content in self.messages[-1]["content"] if "toolUse" in content
                ]
                await self._append_messages(
                    {
                        "role": "user",
                        "content": generate_missing_tool_result_content(tool_use_ids),
                    }
                )
            if isinstance(prompt, str):
                # String input - convert to user message
                messages = [{"role": "user", "content": [{"text": prompt}]}]
            elif isinstance(prompt, list):
                if len(prompt) == 0:
                    # Empty list
                    messages = []
                # Check if all items in the input list are dictionaries
                elif all(isinstance(item, dict) for item in prompt):
                    # Check if all items are messages
                    if all(all(key in item for key in Message.__annotations__.keys()) for item in prompt):
                        # Messages input - add all messages to conversation
                        messages = cast(Messages, prompt)

                    # Check if all items are content blocks
                    elif all(any(key in ContentBlock.__annotations__.keys() for key in item) for item in prompt):
                        # Treat as List[ContentBlock] input - convert to user message
                        # This allows invalid structures to be passed through to the model
                        messages = [{"role": "user", "content": cast(list[ContentBlock], prompt)}]
        else:
            messages = []
        if messages is None:
            raise ValueError("Input prompt must be of type: `str | list[Contentblock] | Messages | None`.")
        return messages

    def _start_agent_trace_span(self, messages: Messages) -> trace_api.Span:
        """Starts a trace span for the agent.

        Args:
            messages: The input messages.
        """
        model_id = self.model.config.get("model_id") if hasattr(self.model, "config") else None
        return self.tracer.start_agent_span(
            messages=messages,
            agent_name=self.name,
            model_id=model_id,
            tools=self.tool_names,
            system_prompt=self.system_prompt,
            custom_trace_attributes=self.trace_attributes,
            tools_config=self.tool_registry.get_all_tools_config(),
        )

    def _end_agent_trace_span(
        self,
        response: Optional[AgentResult] = None,
        error: Optional[Exception] = None,
    ) -> None:
        """Ends a trace span for the agent.

        Args:
            response: Response to record as a trace attribute.
            error: Error to record as a trace attribute.
        """
        if self.trace_span:
            trace_attributes: dict[str, Any] = {
                "span": self.trace_span,
            }

            if response:
                trace_attributes["response"] = response
            if error:
                trace_attributes["error"] = error

            self.tracer.end_agent_span(**trace_attributes)

    def _initialize_system_prompt(
        self, system_prompt: str | list[SystemContentBlock] | None
    ) -> tuple[str | None, list[SystemContentBlock] | None]:
        """Initialize system prompt fields from constructor input.

        Maintains backwards compatibility by keeping system_prompt as str when string input
        provided, avoiding breaking existing consumers.

        Maps system_prompt input to both string and content block representations:
        - If string: system_prompt=string, _system_prompt_content=[{text: string}]
        - If list with text elements: system_prompt=concatenated_text, _system_prompt_content=list
        - If list without text elements: system_prompt=None, _system_prompt_content=list
        - If None: system_prompt=None, _system_prompt_content=None
        """
        if isinstance(system_prompt, str):
            return system_prompt, [{"text": system_prompt}]
        elif isinstance(system_prompt, list):
            # Concatenate all text elements for backwards compatibility, None if no text found
            text_parts = [block["text"] for block in system_prompt if "text" in block]
            system_prompt_str = "\n".join(text_parts) if text_parts else None
            return system_prompt_str, system_prompt
        else:
            return None, None

    async def _append_messages(self, *messages: Message) -> None:
        """Appends messages to history and invoke the callbacks for the MessageAddedEvent."""
        for message in messages:
            self.messages.append(message)
            await self.hooks.invoke_callbacks_async(MessageAddedEvent(agent=self, message=message))

    def _redact_user_content(self, content: list[ContentBlock], redact_message: str) -> list[ContentBlock]:
        """Redact user content preserving toolResult blocks.

        Args:
            content: content blocks to be redacted
            redact_message: redact message to be replaced

        Returns:
            Redacted content, as follows:
            - if the message contains at least a toolResult block,
                all toolResult blocks(s) are kept, redacting only the result content;
            - otherwise, the entire content of the message is replaced
                with a single text block with the redact message.
        """
        redacted_content = []
        for block in content:
            if "toolResult" in block:
                block["toolResult"]["content"] = [{"text": redact_message}]
                redacted_content.append(block)

        if not redacted_content:
            # Text content is added only if no toolResult blocks were found
            redacted_content = [{"text": redact_message}]

        return redacted_content

system_prompt property writable

Get the system prompt as a string for backwards compatibility.

Returns the system prompt as a concatenated string when it contains text content, or None if no text content is present. This maintains backwards compatibility with existing code that expects system_prompt to be a string.

Returns:

    str | None: The system prompt as a string, or None if no text content exists.
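
A short sketch of the writable property, assuming an already constructed agent; the string form is the common case, and the content-block form mirrors the `SystemContentBlock` list accepted by the constructor:

```python
from strands import Agent

agent = Agent()

# Plain string: stored as both the string view and a single text content block.
agent.system_prompt = "You are a concise assistant."

# Content-block list: the string view becomes the concatenated text blocks.
agent.system_prompt = [{"text": "You are a concise assistant."}]
print(agent.system_prompt)  # "You are a concise assistant."

# None clears the prompt entirely.
agent.system_prompt = None
```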

tool property

Call tool as a function.

Returns:

    _ToolCaller: Tool caller through which user can invoke tool as a function.

Example
agent = Agent(tools=[calculator])
agent.tool.calculator(...)

tool_names property

Get a list of all registered tool names.

Returns:

    list[str]: Names of all tools available to this agent.
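
A quick sketch, reusing the `calculator` tool from the example above (assumed to come from the optional `strands_tools` package):

```python
from strands import Agent
from strands_tools import calculator  # assumed example tool

agent = Agent(tools=[calculator])
print(agent.tool_names)  # e.g. ['calculator']
```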

__call__(prompt=None, *, invocation_state=None, structured_output_model=None, **kwargs)

Process a natural language prompt through the agent's event loop.

This method implements the conversational interface with multiple input patterns:

  • String input: agent("hello!")
  • ContentBlock list: agent([{"text": "hello"}, {"image": {...}}])
  • Message list: agent([{"role": "user", "content": [{"text": "hello"}]}])
  • No input: agent() - uses existing conversation history

Parameters:

    prompt (AgentInput, default None): User input in various formats:
      • str: Simple text input
      • list[ContentBlock]: Multi-modal content blocks
      • list[Message]: Complete messages with roles
      • None: Use existing conversation history
    invocation_state (dict[str, Any] | None, default None): Additional parameters to pass through the event loop.
    structured_output_model (Type[BaseModel] | None, default None): Pydantic model type(s) for structured output (overrides agent default).
    **kwargs (Any, default {}): Additional parameters to pass through the event loop. [Deprecating]

Returns:

    AgentResult: Result object containing:

      • stop_reason: Why the event loop stopped (e.g., "end_turn", "max_tokens")
      • message: The final message from the model
      • metrics: Performance metrics from the event loop
      • state: The final state of the event loop
      • structured_output: Parsed structured output when structured_output_model was specified
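
A minimal sketch of reading those result fields; attribute-style access on `AgentResult` is assumed here:

```python
from strands import Agent

agent = Agent()
result = agent("Give me one sentence about observability.")

print(result.stop_reason)  # e.g. "end_turn"
print(result.message)      # final assistant message
print(result.metrics)      # event-loop performance metrics
```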
Source code in strands/agent/agent.py
def __call__(
    self,
    prompt: AgentInput = None,
    *,
    invocation_state: dict[str, Any] | None = None,
    structured_output_model: Type[BaseModel] | None = None,
    **kwargs: Any,
) -> AgentResult:
    """Process a natural language prompt through the agent's event loop.

    This method implements the conversational interface with multiple input patterns:
    - String input: `agent("hello!")`
    - ContentBlock list: `agent([{"text": "hello"}, {"image": {...}}])`
    - Message list: `agent([{"role": "user", "content": [{"text": "hello"}]}])`
    - No input: `agent()` - uses existing conversation history

    Args:
        prompt: User input in various formats:
            - str: Simple text input
            - list[ContentBlock]: Multi-modal content blocks
            - list[Message]: Complete messages with roles
            - None: Use existing conversation history
        invocation_state: Additional parameters to pass through the event loop.
        structured_output_model: Pydantic model type(s) for structured output (overrides agent default).
        **kwargs: Additional parameters to pass through the event loop.[Deprecating]

    Returns:
        Result object containing:

            - stop_reason: Why the event loop stopped (e.g., "end_turn", "max_tokens")
            - message: The final message from the model
            - metrics: Performance metrics from the event loop
            - state: The final state of the event loop
            - structured_output: Parsed structured output when structured_output_model was specified
    """
    return run_async(
        lambda: self.invoke_async(
            prompt, invocation_state=invocation_state, structured_output_model=structured_output_model, **kwargs
        )
    )

__del__()

Clean up resources when agent is garbage collected.

Source code in strands/agent/agent.py
def __del__(self) -> None:
    """Clean up resources when agent is garbage collected."""
    # __del__ is called even when an exception is thrown in the constructor,
    # so there is no guarantee tool_registry was set.
    if hasattr(self, "tool_registry"):
        self.tool_registry.cleanup()

__init__(model=None, messages=None, tools=None, system_prompt=None, structured_output_model=None, callback_handler=_DEFAULT_CALLBACK_HANDLER, conversation_manager=None, record_direct_tool_call=True, load_tools_from_directory=False, trace_attributes=None, *, agent_id=None, name=None, description=None, state=None, hooks=None, session_manager=None, tool_executor=None)

Initialize the Agent with the specified configuration.

Parameters:

    model (Union[Model, str, None], default None): Provider for running inference or a string representing the model-id for Bedrock to use. Defaults to strands.models.BedrockModel if None.
    messages (Optional[Messages], default None): List of initial messages to pre-load into the conversation. Defaults to an empty list if None.
    tools (Optional[list[Union[str, dict[str, str], ToolProvider, Any]]], default None): List of tools to make available to the agent. Can be specified as:

      • String tool names (e.g., "retrieve")
      • File paths (e.g., "/path/to/tool.py")
      • Imported Python modules (e.g., from strands_tools import current_time)
      • Dictionaries with name/path keys (e.g., {"name": "tool_name", "path": "/path/to/tool.py"})
      • ToolProvider instances for managed tool collections
      • Functions decorated with the @strands.tool decorator

      If provided, only these tools will be available. If None, all tools will be available.
    system_prompt (Optional[str | list[SystemContentBlock]], default None): System prompt to guide model behavior. Can be a string or a list of SystemContentBlock objects for advanced features like caching. If None, the model will behave according to its default settings.
    structured_output_model (Optional[Type[BaseModel]], default None): Pydantic model type(s) for structured output. When specified, all agent calls will attempt to return structured output of this type. This can be overridden on the agent invocation. Defaults to None (no structured output).
    callback_handler (Optional[Union[Callable[..., Any], _DefaultCallbackHandlerSentinel]], default _DEFAULT_CALLBACK_HANDLER): Callback for processing events as they happen during agent execution. If not provided (using the default), a new PrintingCallbackHandler instance is created. If explicitly set to None, null_callback_handler is used.
    conversation_manager (Optional[ConversationManager], default None): Manager for conversation history and context window. Defaults to strands.agent.conversation_manager.SlidingWindowConversationManager if None.
    record_direct_tool_call (bool, default True): Whether to record direct tool calls in message history. Defaults to True.
    load_tools_from_directory (bool, default False): Whether to load and automatically reload tools in the ./tools/ directory. Defaults to False.
    trace_attributes (Optional[Mapping[str, AttributeValue]], default None): Custom trace attributes to apply to the agent's trace span.
    agent_id (Optional[str], default None): Optional ID for the agent, useful for session management and multi-agent scenarios. Defaults to "default".
    name (Optional[str], default None): Name of the Agent. Defaults to "Strands Agents".
    description (Optional[str], default None): Description of what the Agent does. Defaults to None.
    state (Optional[Union[AgentState, dict]], default None): Stateful information for the agent. Can be either an AgentState object or a JSON-serializable dict. Defaults to an empty AgentState object.
    hooks (Optional[list[HookProvider]], default None): Hooks to be added to the agent hook registry. Defaults to None.
    session_manager (Optional[SessionManager], default None): Manager for handling agent sessions including conversation history and state. If provided, enables session-based persistence and state management.
    tool_executor (Optional[ToolExecutor], default None): Definition of tool execution strategy (e.g., sequential, concurrent, etc.).

Raises:

    ValueError: If agent id contains path separators.
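
A construction sketch that touches several of the parameters above; the Bedrock model id string and the `strands_tools` import are illustrative assumptions:

```python
from strands import Agent
from strands.agent.conversation_manager import SlidingWindowConversationManager
from strands_tools import current_time  # assumed example tool module

agent = Agent(
    model="anthropic.claude-3-5-sonnet-20241022-v2:0",  # assumed Bedrock model id
    tools=[current_time],
    system_prompt="You are a concise assistant.",
    conversation_manager=SlidingWindowConversationManager(),
    record_direct_tool_call=True,
    name="Docs Example Agent",
    description="Illustrative construction only.",
)
```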

Source code in strands/agent/agent.py
def __init__(
    self,
    model: Union[Model, str, None] = None,
    messages: Optional[Messages] = None,
    tools: Optional[list[Union[str, dict[str, str], "ToolProvider", Any]]] = None,
    system_prompt: Optional[str | list[SystemContentBlock]] = None,
    structured_output_model: Optional[Type[BaseModel]] = None,
    callback_handler: Optional[
        Union[Callable[..., Any], _DefaultCallbackHandlerSentinel]
    ] = _DEFAULT_CALLBACK_HANDLER,
    conversation_manager: Optional[ConversationManager] = None,
    record_direct_tool_call: bool = True,
    load_tools_from_directory: bool = False,
    trace_attributes: Optional[Mapping[str, AttributeValue]] = None,
    *,
    agent_id: Optional[str] = None,
    name: Optional[str] = None,
    description: Optional[str] = None,
    state: Optional[Union[AgentState, dict]] = None,
    hooks: Optional[list[HookProvider]] = None,
    session_manager: Optional[SessionManager] = None,
    tool_executor: Optional[ToolExecutor] = None,
):
    """Initialize the Agent with the specified configuration.

    Args:
        model: Provider for running inference or a string representing the model-id for Bedrock to use.
            Defaults to strands.models.BedrockModel if None.
        messages: List of initial messages to pre-load into the conversation.
            Defaults to an empty list if None.
        tools: List of tools to make available to the agent.
            Can be specified as:

            - String tool names (e.g., "retrieve")
            - File paths (e.g., "/path/to/tool.py")
            - Imported Python modules (e.g., from strands_tools import current_time)
            - Dictionaries with name/path keys (e.g., {"name": "tool_name", "path": "/path/to/tool.py"})
            - ToolProvider instances for managed tool collections
            - Functions decorated with `@strands.tool` decorator.

            If provided, only these tools will be available. If None, all tools will be available.
        system_prompt: System prompt to guide model behavior.
            Can be a string or a list of SystemContentBlock objects for advanced features like caching.
            If None, the model will behave according to its default settings.
        structured_output_model: Pydantic model type(s) for structured output.
            When specified, all agent calls will attempt to return structured output of this type.
            This can be overridden on the agent invocation.
            Defaults to None (no structured output).
        callback_handler: Callback for processing events as they happen during agent execution.
            If not provided (using the default), a new PrintingCallbackHandler instance is created.
            If explicitly set to None, null_callback_handler is used.
        conversation_manager: Manager for conversation history and context window.
            Defaults to strands.agent.conversation_manager.SlidingWindowConversationManager if None.
        record_direct_tool_call: Whether to record direct tool calls in message history.
            Defaults to True.
        load_tools_from_directory: Whether to load and automatically reload tools in the `./tools/` directory.
            Defaults to False.
        trace_attributes: Custom trace attributes to apply to the agent's trace span.
        agent_id: Optional ID for the agent, useful for session management and multi-agent scenarios.
            Defaults to "default".
        name: name of the Agent
            Defaults to "Strands Agents".
        description: description of what the Agent does
            Defaults to None.
        state: stateful information for the agent. Can be either an AgentState object, or a json serializable dict.
            Defaults to an empty AgentState object.
        hooks: hooks to be added to the agent hook registry
            Defaults to None.
        session_manager: Manager for handling agent sessions including conversation history and state.
            If provided, enables session-based persistence and state management.
        tool_executor: Definition of tool execution strategy (e.g., sequential, concurrent, etc.).

    Raises:
        ValueError: If agent id contains path separators.
    """
    self.model = BedrockModel() if not model else BedrockModel(model_id=model) if isinstance(model, str) else model
    self.messages = messages if messages is not None else []
    # initializing self._system_prompt for backwards compatibility
    self._system_prompt, self._system_prompt_content = self._initialize_system_prompt(system_prompt)
    self._default_structured_output_model = structured_output_model
    self.agent_id = _identifier.validate(agent_id or _DEFAULT_AGENT_ID, _identifier.Identifier.AGENT)
    self.name = name or _DEFAULT_AGENT_NAME
    self.description = description

    # If not provided, create a new PrintingCallbackHandler instance
    # If explicitly set to None, use null_callback_handler
    # Otherwise use the passed callback_handler
    self.callback_handler: Union[Callable[..., Any], PrintingCallbackHandler]
    if isinstance(callback_handler, _DefaultCallbackHandlerSentinel):
        self.callback_handler = PrintingCallbackHandler()
    elif callback_handler is None:
        self.callback_handler = null_callback_handler
    else:
        self.callback_handler = callback_handler

    self.conversation_manager = conversation_manager if conversation_manager else SlidingWindowConversationManager()

    # Process trace attributes to ensure they're of compatible types
    self.trace_attributes: dict[str, AttributeValue] = {}
    if trace_attributes:
        for k, v in trace_attributes.items():
            if isinstance(v, (str, int, float, bool)) or (
                isinstance(v, list) and all(isinstance(x, (str, int, float, bool)) for x in v)
            ):
                self.trace_attributes[k] = v

    self.record_direct_tool_call = record_direct_tool_call
    self.load_tools_from_directory = load_tools_from_directory

    self.tool_registry = ToolRegistry()

    # Process tool list if provided
    if tools is not None:
        self.tool_registry.process_tools(tools)

    # Initialize tools and configuration
    self.tool_registry.initialize_tools(self.load_tools_from_directory)
    if load_tools_from_directory:
        self.tool_watcher = ToolWatcher(tool_registry=self.tool_registry)

    self.event_loop_metrics = EventLoopMetrics()

    # Initialize tracer instance (no-op if not configured)
    self.tracer = get_tracer()
    self.trace_span: Optional[trace_api.Span] = None

    # Initialize agent state management
    if state is not None:
        if isinstance(state, dict):
            self.state = AgentState(state)
        elif isinstance(state, AgentState):
            self.state = state
        else:
            raise ValueError("state must be an AgentState object or a dict")
    else:
        self.state = AgentState()

    self.tool_caller = _ToolCaller(self)

    self.hooks = HookRegistry()

    self._interrupt_state = _InterruptState()

    # Initialize session management functionality
    self._session_manager = session_manager
    if self._session_manager:
        self.hooks.add_hook(self._session_manager)

    # Allow conversation_managers to subscribe to hooks
    self.hooks.add_hook(self.conversation_manager)

    self.tool_executor = tool_executor or ConcurrentToolExecutor()

    if hooks:
        for hook in hooks:
            self.hooks.add_hook(hook)
    self.hooks.invoke_callbacks(AgentInitializedEvent(agent=self))

cleanup()

Clean up resources used by the agent.

This method cleans up all tool providers that require explicit cleanup, such as MCP clients. It should be called when the agent is no longer needed to ensure proper resource cleanup.

Note: This method uses a "belt and braces" approach with automatic cleanup through finalizers as a fallback, but explicit cleanup is recommended.
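
A sketch of the recommended explicit cleanup, wrapping the agent's lifetime in try/finally instead of relying on finalizers:

```python
from strands import Agent

agent = Agent()
try:
    agent("Check in with the user.")
finally:
    # Releases tool providers (e.g., MCP clients) that need explicit teardown.
    agent.cleanup()
```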

Source code in strands/agent/agent.py
def cleanup(self) -> None:
    """Clean up resources used by the agent.

    This method cleans up all tool providers that require explicit cleanup,
    such as MCP clients. It should be called when the agent is no longer needed
    to ensure proper resource cleanup.

    Note: This method uses a "belt and braces" approach with automatic cleanup
    through finalizers as a fallback, but explicit cleanup is recommended.
    """
    self.tool_registry.cleanup()
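
A short usage sketch, assuming the package exposes Agent at the top level; explicit cleanup pairs naturally with try/finally:

from strands import Agent  # import path assumed

agent = Agent(tools=[])
try:
    agent("Hello")  # synchronous call, as used elsewhere in this module
finally:
    # Releases tool providers that need explicit teardown (e.g. MCP clients);
    # finalizers act only as a fallback.
    agent.cleanup()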

invoke_async(prompt=None, *, invocation_state=None, structured_output_model=None, **kwargs) async

Process a natural language prompt through the agent's event loop.

This method implements the conversational interface with multiple input patterns:

  • String input: Simple text input
  • ContentBlock list: Multi-modal content blocks
  • Message list: Complete messages with roles
  • No input: Use existing conversation history

Parameters:

Name Type Description Default
prompt AgentInput

User input in various formats:

  • str: Simple text input
  • list[ContentBlock]: Multi-modal content blocks
  • list[Message]: Complete messages with roles
  • None: Use existing conversation history

None
invocation_state dict[str, Any] | None

Additional parameters to pass through the event loop.

None
structured_output_model Type[BaseModel] | None

Pydantic model type(s) for structured output (overrides agent default).

None
**kwargs Any

Additional parameters to pass through the event loop. [Deprecated]

{}

Returns:

Name Type Description
Result AgentResult

object containing:

  • stop_reason: Why the event loop stopped (e.g., "end_turn", "max_tokens")
  • message: The final message from the model
  • metrics: Performance metrics from the event loop
  • state: The final state of the event loop
Source code in strands/agent/agent.py
async def invoke_async(
    self,
    prompt: AgentInput = None,
    *,
    invocation_state: dict[str, Any] | None = None,
    structured_output_model: Type[BaseModel] | None = None,
    **kwargs: Any,
) -> AgentResult:
    """Process a natural language prompt through the agent's event loop.

    This method implements the conversational interface with multiple input patterns:
    - String input: Simple text input
    - ContentBlock list: Multi-modal content blocks
    - Message list: Complete messages with roles
    - No input: Use existing conversation history

    Args:
        prompt: User input in various formats:
            - str: Simple text input
            - list[ContentBlock]: Multi-modal content blocks
            - list[Message]: Complete messages with roles
            - None: Use existing conversation history
        invocation_state: Additional parameters to pass through the event loop.
        structured_output_model: Pydantic model type(s) for structured output (overrides agent default).
        **kwargs: Additional parameters to pass through the event loop.[Deprecating]

    Returns:
        Result: object containing:

            - stop_reason: Why the event loop stopped (e.g., "end_turn", "max_tokens")
            - message: The final message from the model
            - metrics: Performance metrics from the event loop
            - state: The final state of the event loop
    """
    events = self.stream_async(
        prompt, invocation_state=invocation_state, structured_output_model=structured_output_model, **kwargs
    )
    async for event in events:
        _ = event

    return cast(AgentResult, event["result"])
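
A hedged driver sketch (top-level import path assumed) showing the documented fields on the returned AgentResult:

import asyncio

from strands import Agent  # import path assumed


async def main() -> None:
    agent = Agent()
    result = await agent.invoke_async(
        "Summarize the plan in three bullet points.",
        invocation_state={"request_id": "demo-1"},  # threaded through the event loop
    )
    print(result.stop_reason)  # e.g. "end_turn" or "max_tokens"
    print(result.message)      # final model message


asyncio.run(main())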

stream_async(prompt=None, *, invocation_state=None, structured_output_model=None, **kwargs) async

Process a natural language prompt and yield events as an async iterator.

This method provides an asynchronous interface for streaming agent events with multiple input patterns:

  • String input: Simple text input
  • ContentBlock list: Multi-modal content blocks
  • Message list: Complete messages with roles
  • No input: Use existing conversation history

Parameters:

Name Type Description Default
prompt AgentInput

User input in various formats:

  • str: Simple text input
  • list[ContentBlock]: Multi-modal content blocks
  • list[Message]: Complete messages with roles
  • None: Use existing conversation history

None
invocation_state dict[str, Any] | None

Additional parameters to pass through the event loop.

None
structured_output_model Type[BaseModel] | None

Pydantic model type(s) for structured output (overrides agent default).

None
**kwargs Any

Additional parameters to pass to the event loop. [Deprecated]

{}

Yields:

Type Description
AsyncIterator[Any]

An async iterator that yields events. Each event is a dictionary containing information about the current state of processing, such as:

  • data: Text content being generated
  • complete: Whether this is the final chunk
  • current_tool_use: Information about tools being executed
  • And other event data provided by the callback handler

Raises:

Type Description
Exception

Any exceptions from the agent invocation will be propagated to the caller.

Example
async for event in agent.stream_async("Analyze this data"):
    if "data" in event:
        yield event["data"]
Source code in strands/agent/agent.py
async def stream_async(
    self,
    prompt: AgentInput = None,
    *,
    invocation_state: dict[str, Any] | None = None,
    structured_output_model: Type[BaseModel] | None = None,
    **kwargs: Any,
) -> AsyncIterator[Any]:
    """Process a natural language prompt and yield events as an async iterator.

    This method provides an asynchronous interface for streaming agent events with multiple input patterns:
    - String input: Simple text input
    - ContentBlock list: Multi-modal content blocks
    - Message list: Complete messages with roles
    - No input: Use existing conversation history

    Args:
        prompt: User input in various formats:
            - str: Simple text input
            - list[ContentBlock]: Multi-modal content blocks
            - list[Message]: Complete messages with roles
            - None: Use existing conversation history
        invocation_state: Additional parameters to pass through the event loop.
        structured_output_model: Pydantic model type(s) for structured output (overrides agent default).
        **kwargs: Additional parameters to pass to the event loop.[Deprecating]

    Yields:
        An async iterator that yields events. Each event is a dictionary containing
            information about the current state of processing, such as:

            - data: Text content being generated
            - complete: Whether this is the final chunk
            - current_tool_use: Information about tools being executed
            - And other event data provided by the callback handler

    Raises:
        Exception: Any exceptions from the agent invocation will be propagated to the caller.

    Example:
        ```python
        async for event in agent.stream_async("Analyze this data"):
            if "data" in event:
                yield event["data"]
        ```
    """
    self._interrupt_state.resume(prompt)

    self.event_loop_metrics.reset_usage_metrics()

    merged_state = {}
    if kwargs:
        warnings.warn("`**kwargs` parameter is deprecating, use `invocation_state` instead.", stacklevel=2)
        merged_state.update(kwargs)
        if invocation_state is not None:
            merged_state["invocation_state"] = invocation_state
    else:
        if invocation_state is not None:
            merged_state = invocation_state

    callback_handler = self.callback_handler
    if kwargs:
        callback_handler = kwargs.get("callback_handler", self.callback_handler)

    # Process input and get message to add (if any)
    messages = await self._convert_prompt_to_messages(prompt)

    self.trace_span = self._start_agent_trace_span(messages)

    with trace_api.use_span(self.trace_span):
        try:
            events = self._run_loop(messages, merged_state, structured_output_model)

            async for event in events:
                event.prepare(invocation_state=merged_state)

                if event.is_callback_event:
                    as_dict = event.as_dict()
                    callback_handler(**as_dict)
                    yield as_dict

            result = AgentResult(*event["stop"])
            callback_handler(result=result)
            yield AgentResultEvent(result=result).as_dict()

            self._end_agent_trace_span(response=result)

        except Exception as e:
            self._end_agent_trace_span(error=e)
            raise
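
A slightly fuller consumer than the docstring example above. The final yielded event carries the AgentResult under the "result" key, mirroring how invoke_async reads it (import path assumed):

import asyncio

from strands import Agent  # import path assumed


async def main() -> None:
    agent = Agent()
    async for event in agent.stream_async("Analyze this data"):
        if "data" in event:
            print(event["data"], end="", flush=True)        # incremental text
        elif "current_tool_use" in event:
            print(f"\n[tool] {event['current_tool_use']}")  # tool activity
        elif "result" in event:
            print(f"\nstop_reason={event['result'].stop_reason}")


asyncio.run(main())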

structured_output(output_model, prompt=None)

This method allows you to get structured output from the agent.

If you pass in a prompt, it will be used temporarily without adding it to the conversation history. If you don't pass in a prompt, it will use only the existing conversation history to respond.

For smaller models, you may want to use the optional prompt to add additional instructions to explicitly instruct the model to output the structured data.

Parameters:

Name Type Description Default
output_model Type[T]

The output model (a JSON schema written as a Pydantic BaseModel) that the agent will use when responding.

required
prompt AgentInput

The prompt to use for the agent in various formats:

  • str: Simple text input
  • list[ContentBlock]: Multi-modal content blocks
  • list[Message]: Complete messages with roles
  • None: Use existing conversation history

None

Raises:

Type Description
ValueError

If no conversation history or prompt is provided.

Source code in strands/agent/agent.py
def structured_output(self, output_model: Type[T], prompt: AgentInput = None) -> T:
    """This method allows you to get structured output from the agent.

    If you pass in a prompt, it will be used temporarily without adding it to the conversation history.
    If you don't pass in a prompt, it will use only the existing conversation history to respond.

    For smaller models, you may want to use the optional prompt to add additional instructions to explicitly
    instruct the model to output the structured data.

    Args:
        output_model: The output model (a JSON schema written as a Pydantic BaseModel)
            that the agent will use when responding.
        prompt: The prompt to use for the agent in various formats:
            - str: Simple text input
            - list[ContentBlock]: Multi-modal content blocks
            - list[Message]: Complete messages with roles
            - None: Use existing conversation history

    Raises:
        ValueError: If no conversation history or prompt is provided.
    """
    warnings.warn(
        "Agent.structured_output method is deprecated."
        " You should pass in `structured_output_model` directly into the agent invocation."
        " see: https://strandsagents.com/latest/documentation/docs/user-guide/concepts/agents/structured-output/",
        category=DeprecationWarning,
        stacklevel=2,
    )

    return run_async(lambda: self.structured_output_async(output_model, prompt))
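
A short sketch of the deprecated synchronous call (import path assumed); new code should pass structured_output_model into the agent invocation instead, as the warning above advises:

from pydantic import BaseModel

from strands import Agent  # import path assumed


class PersonInfo(BaseModel):
    name: str
    age: int


agent = Agent()

# Emits a DeprecationWarning and delegates to structured_output_async.
person = agent.structured_output(PersonInfo, "John is a 31-year-old engineer.")
print(person.name, person.age)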

structured_output_async(output_model, prompt=None) async

This method allows you to get structured output from the agent.

If you pass in a prompt, it will be used temporarily without adding it to the conversation history. If you don't pass in a prompt, it will use only the existing conversation history to respond.

For smaller models, you may want to use the optional prompt to add additional instructions to explicitly instruct the model to output the structured data.

Parameters:

Name Type Description Default
output_model Type[T]

The output model (a JSON schema written as a Pydantic BaseModel) that the agent will use when responding.

required
prompt AgentInput

The prompt to use for the agent (will not be added to conversation history).

None

Raises:

Type Description
ValueError

If no conversation history or prompt is provided.


Source code in strands/agent/agent.py
async def structured_output_async(self, output_model: Type[T], prompt: AgentInput = None) -> T:
    """This method allows you to get structured output from the agent.

    If you pass in a prompt, it will be used temporarily without adding it to the conversation history.
    If you don't pass in a prompt, it will use only the existing conversation history to respond.

    For smaller models, you may want to use the optional prompt to add additional instructions to explicitly
    instruct the model to output the structured data.

    Args:
        output_model: The output model (a JSON schema written as a Pydantic BaseModel)
            that the agent will use when responding.
        prompt: The prompt to use for the agent (will not be added to conversation history).

    Raises:
        ValueError: If no conversation history or prompt is provided.
    -
    """
    if self._interrupt_state.activated:
        raise RuntimeError("cannot call structured output during interrupt")

    warnings.warn(
        "Agent.structured_output_async method is deprecated."
        " You should pass in `structured_output_model` directly into the agent invocation."
        " see: https://strandsagents.com/latest/documentation/docs/user-guide/concepts/agents/structured-output/",
        category=DeprecationWarning,
        stacklevel=2,
    )
    await self.hooks.invoke_callbacks_async(BeforeInvocationEvent(agent=self))
    with self.tracer.tracer.start_as_current_span(
        "execute_structured_output", kind=trace_api.SpanKind.CLIENT
    ) as structured_output_span:
        try:
            if not self.messages and not prompt:
                raise ValueError("No conversation history or prompt provided")

            temp_messages: Messages = self.messages + await self._convert_prompt_to_messages(prompt)

            structured_output_span.set_attributes(
                {
                    "gen_ai.system": "strands-agents",
                    "gen_ai.agent.name": self.name,
                    "gen_ai.agent.id": self.agent_id,
                    "gen_ai.operation.name": "execute_structured_output",
                }
            )
            if self.system_prompt:
                structured_output_span.add_event(
                    "gen_ai.system.message",
                    attributes={"role": "system", "content": serialize([{"text": self.system_prompt}])},
                )
            for message in temp_messages:
                structured_output_span.add_event(
                    f"gen_ai.{message['role']}.message",
                    attributes={"role": message["role"], "content": serialize(message["content"])},
                )
            events = self.model.structured_output(output_model, temp_messages, system_prompt=self.system_prompt)
            async for event in events:
                if isinstance(event, TypedEvent):
                    event.prepare(invocation_state={})
                    if event.is_callback_event:
                        self.callback_handler(**event.as_dict())

            structured_output_span.add_event(
                "gen_ai.choice", attributes={"message": serialize(event["output"].model_dump())}
            )
            return event["output"]

        finally:
            await self.hooks.invoke_callbacks_async(AfterInvocationEvent(agent=self))
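
The async variant behaves the same way and is also deprecated; note that calling it with neither conversation history nor a prompt raises ValueError. A hedged sketch (import path assumed):

import asyncio

from pydantic import BaseModel

from strands import Agent  # import path assumed


class Invoice(BaseModel):
    vendor: str
    total: float


async def main() -> None:
    agent = Agent()
    # With no prior history, a prompt is required; omitting both raises ValueError.
    invoice = await agent.structured_output_async(
        Invoice, "Vendor: Acme Corp. Total due: 41.50 USD."
    )
    print(invoice.vendor, invoice.total)


asyncio.run(main())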

AgentTool

Bases: ABC

Abstract base class for all SDK tools.

This class defines the interface that all tool implementations must follow. Each tool must provide its name, specification, and implement a stream method that executes the tool's functionality.

Source code in strands/types/tools.py
class AgentTool(ABC):
    """Abstract base class for all SDK tools.

    This class defines the interface that all tool implementations must follow. Each tool must provide its name,
    specification, and implement a stream method that executes the tool's functionality.
    """

    _is_dynamic: bool

    def __init__(self) -> None:
        """Initialize the base agent tool with default dynamic state."""
        self._is_dynamic = False

    @property
    @abstractmethod
    # pragma: no cover
    def tool_name(self) -> str:
        """The unique name of the tool used for identification and invocation."""
        pass

    @property
    @abstractmethod
    # pragma: no cover
    def tool_spec(self) -> ToolSpec:
        """Tool specification that describes its functionality and parameters."""
        pass

    @property
    @abstractmethod
    # pragma: no cover
    def tool_type(self) -> str:
        """The type of the tool implementation (e.g., 'python', 'javascript', 'lambda').

        Used for categorization and appropriate handling.
        """
        pass

    @property
    def supports_hot_reload(self) -> bool:
        """Whether the tool supports automatic reloading when modified.

        Returns:
            False by default.
        """
        return False

    @abstractmethod
    # pragma: no cover
    def stream(self, tool_use: ToolUse, invocation_state: dict[str, Any], **kwargs: Any) -> ToolGenerator:
        """Stream tool events and return the final result.

        Args:
            tool_use: The tool use request containing tool ID and parameters.
            invocation_state: Caller-provided kwargs that were passed to the agent when it was invoked (agent(),
                              agent.invoke_async(), etc.).
            **kwargs: Additional keyword arguments for future extensibility.

        Yields:
            Tool events with the last being the tool result.
        """
        ...

    @property
    def is_dynamic(self) -> bool:
        """Whether the tool was dynamically loaded during runtime.

        Dynamic tools may have different lifecycle management.

        Returns:
            True if loaded dynamically, False otherwise.
        """
        return self._is_dynamic

    def mark_dynamic(self) -> None:
        """Mark this tool as dynamically loaded."""
        self._is_dynamic = True

    def get_display_properties(self) -> dict[str, str]:
        """Get properties to display in UI representations of this tool.

        Subclasses can extend this to include additional properties.

        Returns:
            Dictionary of property names and their string values.
        """
        return {
            "Name": self.tool_name,
            "Type": self.tool_type,
        }

is_dynamic property

Whether the tool was dynamically loaded during runtime.

Dynamic tools may have different lifecycle management.

Returns:

Type Description
bool

True if loaded dynamically, False otherwise.

supports_hot_reload property

Whether the tool supports automatic reloading when modified.

Returns:

Type Description
bool

False by default.

tool_name abstractmethod property

The unique name of the tool used for identification and invocation.

tool_spec abstractmethod property

Tool specification that describes its functionality and parameters.

tool_type abstractmethod property

The type of the tool implementation (e.g., 'python', 'javascript', 'lambda').

Used for categorization and appropriate handling.

__init__()

Initialize the base agent tool with default dynamic state.

Source code in strands/types/tools.py
def __init__(self) -> None:
    """Initialize the base agent tool with default dynamic state."""
    self._is_dynamic = False

get_display_properties()

Get properties to display in UI representations of this tool.

Subclasses can extend this to include additional properties.

Returns:

Type Description
dict[str, str]

Dictionary of property names and their string values.

Source code in strands/types/tools.py
def get_display_properties(self) -> dict[str, str]:
    """Get properties to display in UI representations of this tool.

    Subclasses can extend this to include additional properties.

    Returns:
        Dictionary of property names and their string values.
    """
    return {
        "Name": self.tool_name,
        "Type": self.tool_type,
    }

mark_dynamic()

Mark this tool as dynamically loaded.

Source code in strands/types/tools.py
def mark_dynamic(self) -> None:
    """Mark this tool as dynamically loaded."""
    self._is_dynamic = True

stream(tool_use, invocation_state, **kwargs) abstractmethod

Stream tool events and return the final result.

Parameters:

Name Type Description Default
tool_use ToolUse

The tool use request containing tool ID and parameters.

required
invocation_state dict[str, Any]

Caller-provided kwargs that were passed to the agent when it was invoked (agent(), agent.invoke_async(), etc.).

required
**kwargs Any

Additional keyword arguments for future extensibility.

{}

Yields:

Type Description
ToolGenerator

Tool events with the last being the tool result.

Source code in strands/types/tools.py
@abstractmethod
# pragma: no cover
def stream(self, tool_use: ToolUse, invocation_state: dict[str, Any], **kwargs: Any) -> ToolGenerator:
    """Stream tool events and return the final result.

    Args:
        tool_use: The tool use request containing tool ID and parameters.
        invocation_state: Caller-provided kwargs that were passed to the agent when it was invoked (agent(),
                          agent.invoke_async(), etc.).
        **kwargs: Additional keyword arguments for future extensibility.

    Yields:
        Tool events with the last being the tool result.
    """
    ...
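
Implementing the interface typically means subclassing and providing the three abstract properties plus stream. The sketch below is illustrative only: the exact ToolSpec, ToolUse, and tool-result field names are assumptions (they are not defined in this excerpt), and stream is written as an async generator on the assumption that ToolGenerator is an async generator type.

from typing import Any

from strands.types.tools import AgentTool, ToolGenerator, ToolSpec, ToolUse


class EchoTool(AgentTool):
    """Toy tool that echoes its input text back to the model."""

    @property
    def tool_name(self) -> str:
        return "echo"

    @property
    def tool_spec(self) -> ToolSpec:
        # Field names are illustrative; consult strands.types.tools.ToolSpec.
        return {
            "name": "echo",
            "description": "Echo the provided text back to the caller.",
            "inputSchema": {
                "json": {
                    "type": "object",
                    "properties": {"text": {"type": "string"}},
                    "required": ["text"],
                }
            },
        }

    @property
    def tool_type(self) -> str:
        return "python"

    async def stream(self, tool_use: ToolUse, invocation_state: dict[str, Any], **kwargs: Any) -> ToolGenerator:
        # The final yield must be the tool result; key names below are assumed.
        yield {
            "toolUseId": tool_use["toolUseId"],
            "status": "success",
            "content": [{"text": tool_use["input"]["text"]}],
        }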

ContextWindowOverflowException

Bases: Exception

Exception raised when the context window is exceeded.

This exception is raised when the input to a model exceeds the maximum context window size that the model can handle. This typically occurs when the combined length of the conversation history, system prompt, and current message is too large for the model to process.

Source code in strands/types/exceptions.py
class ContextWindowOverflowException(Exception):
    """Exception raised when the context window is exceeded.

    This exception is raised when the input to a model exceeds the maximum context window size that the model can
    handle. This typically occurs when the combined length of the conversation history, system prompt, and current
    message is too large for the model to process.
    """

    pass
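
The event loop normally catches this exception and calls the conversation manager for you; the sketch below (import paths assumed) only illustrates handling it manually:

from strands import Agent  # import path assumed
from strands.types.exceptions import ContextWindowOverflowException

agent = Agent()

try:
    agent("Review this very long transcript ...")
except ContextWindowOverflowException as e:
    # Ask the configured manager to shrink the history, then retry as needed.
    agent.conversation_manager.reduce_context(agent, e=e)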

ConversationManager

Bases: ABC, HookProvider

Abstract base class for managing conversation history.

This class provides an interface for implementing conversation management strategies to control the size of message arrays/conversation histories, helping to:

  • Manage memory usage
  • Control context length
  • Maintain relevant conversation state

ConversationManager implements the HookProvider protocol, allowing derived classes to register hooks for agent lifecycle events. Derived classes that override register_hooks must call the base implementation to ensure proper hook registration.

Example
class MyConversationManager(ConversationManager):
    def register_hooks(self, registry: HookRegistry, **kwargs: Any) -> None:
        super().register_hooks(registry, **kwargs)
        # Register additional hooks here
Source code in strands/agent/conversation_manager/conversation_manager.py
class ConversationManager(ABC, HookProvider):
    """Abstract base class for managing conversation history.

    This class provides an interface for implementing conversation management strategies to control the size of message
    arrays/conversation histories, helping to:

    - Manage memory usage
    - Control context length
    - Maintain relevant conversation state

    ConversationManager implements the HookProvider protocol, allowing derived classes to register hooks for agent
    lifecycle events. Derived classes that override register_hooks must call the base implementation to ensure proper
    hook registration.

    Example:
        ```python
        class MyConversationManager(ConversationManager):
            def register_hooks(self, registry: HookRegistry, **kwargs: Any) -> None:
                super().register_hooks(registry, **kwargs)
                # Register additional hooks here
        ```
    """

    def __init__(self) -> None:
        """Initialize the ConversationManager.

        Attributes:
          removed_message_count: The messages that have been removed from the agents messages array.
              These represent messages provided by the user or LLM that have been removed, not messages
              included by the conversation manager through something like summarization.
        """
        self.removed_message_count = 0

    def register_hooks(self, registry: HookRegistry, **kwargs: Any) -> None:
        """Register hooks for agent lifecycle events.

        Derived classes that override this method must call the base implementation to ensure proper hook
        registration chain.

        Args:
            registry: The hook registry to register callbacks with.
            **kwargs: Additional keyword arguments for future extensibility.

        Example:
            ```python
            def register_hooks(self, registry: HookRegistry, **kwargs: Any) -> None:
                super().register_hooks(registry, **kwargs)
                registry.add_callback(SomeEvent, self.on_some_event)
            ```
        """
        pass

    def restore_from_session(self, state: dict[str, Any]) -> Optional[list[Message]]:
        """Restore the Conversation Manager's state from a session.

        Args:
            state: Previous state of the conversation manager
        Returns:
            Optional list of messages to prepend to the agents messages. By default returns None.
        """
        if state.get("__name__") != self.__class__.__name__:
            raise ValueError("Invalid conversation manager state.")
        self.removed_message_count = state["removed_message_count"]
        return None

    def get_state(self) -> dict[str, Any]:
        """Get the current state of a Conversation Manager as a Json serializable dictionary."""
        return {
            "__name__": self.__class__.__name__,
            "removed_message_count": self.removed_message_count,
        }

    @abstractmethod
    def apply_management(self, agent: "Agent", **kwargs: Any) -> None:
        """Applies management strategy to the provided agent.

        Processes the conversation history to maintain appropriate size by modifying the messages list in-place.
        Implementations should handle message pruning, summarization, or other size management techniques to keep the
        conversation context within desired bounds.

        Args:
            agent: The agent whose conversation history will be manage.
                This list is modified in-place.
            **kwargs: Additional keyword arguments for future extensibility.
        """
        pass

    @abstractmethod
    def reduce_context(self, agent: "Agent", e: Optional[Exception] = None, **kwargs: Any) -> None:
        """Called when the model's context window is exceeded.

        This method should implement the specific strategy for reducing the window size when a context overflow occurs.
        It is typically called after a ContextWindowOverflowException is caught.

        Implementations might use strategies such as:

        - Removing the N oldest messages
        - Summarizing older context
        - Applying importance-based filtering
        - Maintaining critical conversation markers

        Args:
            agent: The agent whose conversation history will be reduced.
                This list is modified in-place.
            e: The exception that triggered the context reduction, if any.
            **kwargs: Additional keyword arguments for future extensibility.
        """
        pass
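
A minimal custom manager sketch against the interface above. It simply drops the oldest messages and, for brevity, ignores ToolUse/ToolResult pairing, which a production implementation would need to respect:

from typing import TYPE_CHECKING, Any, Optional

from strands.agent.conversation_manager.conversation_manager import ConversationManager

if TYPE_CHECKING:
    from strands import Agent  # import path assumed


class TrimOldestConversationManager(ConversationManager):
    """Keep at most `max_messages`; on overflow, drop a small batch from the front."""

    def __init__(self, max_messages: int = 40, drop_batch: int = 4) -> None:
        super().__init__()
        self.max_messages = max_messages
        self.drop_batch = drop_batch

    def apply_management(self, agent: "Agent", **kwargs: Any) -> None:
        overflow = len(agent.messages) - self.max_messages
        if overflow > 0:
            del agent.messages[:overflow]
            self.removed_message_count += overflow

    def reduce_context(self, agent: "Agent", e: Optional[Exception] = None, **kwargs: Any) -> None:
        drop = min(self.drop_batch, len(agent.messages))
        del agent.messages[:drop]
        self.removed_message_count += drop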

__init__()

Initialize the ConversationManager.

Attributes:

Name Type Description
removed_message_count

The number of messages that have been removed from the agent's messages array. These are messages provided by the user or LLM that were removed, not messages added by the conversation manager through something like summarization.

Source code in strands/agent/conversation_manager/conversation_manager.py
def __init__(self) -> None:
    """Initialize the ConversationManager.

    Attributes:
      removed_message_count: The messages that have been removed from the agents messages array.
          These represent messages provided by the user or LLM that have been removed, not messages
          included by the conversation manager through something like summarization.
    """
    self.removed_message_count = 0

apply_management(agent, **kwargs) abstractmethod

Applies management strategy to the provided agent.

Processes the conversation history to maintain appropriate size by modifying the messages list in-place. Implementations should handle message pruning, summarization, or other size management techniques to keep the conversation context within desired bounds.

Parameters:

Name Type Description Default
agent Agent

The agent whose conversation history will be managed. The messages list is modified in-place.

required
**kwargs Any

Additional keyword arguments for future extensibility.

{}
Source code in strands/agent/conversation_manager/conversation_manager.py
@abstractmethod
def apply_management(self, agent: "Agent", **kwargs: Any) -> None:
    """Applies management strategy to the provided agent.

    Processes the conversation history to maintain appropriate size by modifying the messages list in-place.
    Implementations should handle message pruning, summarization, or other size management techniques to keep the
    conversation context within desired bounds.

    Args:
        agent: The agent whose conversation history will be manage.
            This list is modified in-place.
        **kwargs: Additional keyword arguments for future extensibility.
    """
    pass

get_state()

Get the current state of a Conversation Manager as a Json serializable dictionary.

Source code in strands/agent/conversation_manager/conversation_manager.py
def get_state(self) -> dict[str, Any]:
    """Get the current state of a Conversation Manager as a Json serializable dictionary."""
    return {
        "__name__": self.__class__.__name__,
        "removed_message_count": self.removed_message_count,
    }

reduce_context(agent, e=None, **kwargs) abstractmethod

Called when the model's context window is exceeded.

This method should implement the specific strategy for reducing the window size when a context overflow occurs. It is typically called after a ContextWindowOverflowException is caught.

Implementations might use strategies such as:

  • Removing the N oldest messages
  • Summarizing older context
  • Applying importance-based filtering
  • Maintaining critical conversation markers

Parameters:

Name Type Description Default
agent Agent

The agent whose conversation history will be reduced. This list is modified in-place.

required
e Optional[Exception]

The exception that triggered the context reduction, if any.

None
**kwargs Any

Additional keyword arguments for future extensibility.

{}
Source code in strands/agent/conversation_manager/conversation_manager.py
@abstractmethod
def reduce_context(self, agent: "Agent", e: Optional[Exception] = None, **kwargs: Any) -> None:
    """Called when the model's context window is exceeded.

    This method should implement the specific strategy for reducing the window size when a context overflow occurs.
    It is typically called after a ContextWindowOverflowException is caught.

    Implementations might use strategies such as:

    - Removing the N oldest messages
    - Summarizing older context
    - Applying importance-based filtering
    - Maintaining critical conversation markers

    Args:
        agent: The agent whose conversation history will be reduced.
            This list is modified in-place.
        e: The exception that triggered the context reduction, if any.
        **kwargs: Additional keyword arguments for future extensibility.
    """
    pass

register_hooks(registry, **kwargs)

Register hooks for agent lifecycle events.

Derived classes that override this method must call the base implementation to ensure proper hook registration chain.

Parameters:

Name Type Description Default
registry HookRegistry

The hook registry to register callbacks with.

required
**kwargs Any

Additional keyword arguments for future extensibility.

{}
Example
def register_hooks(self, registry: HookRegistry, **kwargs: Any) -> None:
    super().register_hooks(registry, **kwargs)
    registry.add_callback(SomeEvent, self.on_some_event)
Source code in strands/agent/conversation_manager/conversation_manager.py
def register_hooks(self, registry: HookRegistry, **kwargs: Any) -> None:
    """Register hooks for agent lifecycle events.

    Derived classes that override this method must call the base implementation to ensure proper hook
    registration chain.

    Args:
        registry: The hook registry to register callbacks with.
        **kwargs: Additional keyword arguments for future extensibility.

    Example:
        ```python
        def register_hooks(self, registry: HookRegistry, **kwargs: Any) -> None:
            super().register_hooks(registry, **kwargs)
            registry.add_callback(SomeEvent, self.on_some_event)
        ```
    """
    pass

restore_from_session(state)

Restore the Conversation Manager's state from a session.

Parameters:

Name Type Description Default
state dict[str, Any]

Previous state of the conversation manager

required

Returns: Optional list of messages to prepend to the agent's messages. By default returns None.

Source code in strands/agent/conversation_manager/conversation_manager.py
def restore_from_session(self, state: dict[str, Any]) -> Optional[list[Message]]:
    """Restore the Conversation Manager's state from a session.

    Args:
        state: Previous state of the conversation manager
    Returns:
        Optional list of messages to prepend to the agents messages. By default returns None.
    """
    if state.get("__name__") != self.__class__.__name__:
        raise ValueError("Invalid conversation manager state.")
    self.removed_message_count = state["removed_message_count"]
    return None

Message

Bases: TypedDict

A message in a conversation with the agent.

Attributes:

Name Type Description
content List[ContentBlock]

The message content.

role Role

The role of the message sender.

Source code in strands/types/content.py
class Message(TypedDict):
    """A message in a conversation with the agent.

    Attributes:
        content: The message content.
        role: The role of the message sender.
    """

    content: List[ContentBlock]
    role: Role
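
Since Message is a TypedDict, plain dict literals type-check; a list of them is one of the accepted prompt formats described earlier:

from strands.types.content import Message

messages: list[Message] = [
    {"role": "user", "content": [{"text": "What changed in the last release?"}]},
    {"role": "assistant", "content": [{"text": "Mostly documentation updates."}]},
]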

SummarizingConversationManager

Bases: ConversationManager

Implements a summarizing window manager.

This manager provides a configurable option to summarize older context instead of simply trimming it, helping preserve important information while staying within context limits.

Source code in strands/agent/conversation_manager/summarizing_conversation_manager.py
class SummarizingConversationManager(ConversationManager):
    """Implements a summarizing window manager.

    This manager provides a configurable option to summarize older context instead of
    simply trimming it, helping preserve important information while staying within
    context limits.
    """

    def __init__(
        self,
        summary_ratio: float = 0.3,
        preserve_recent_messages: int = 10,
        summarization_agent: Optional["Agent"] = None,
        summarization_system_prompt: Optional[str] = None,
    ):
        """Initialize the summarizing conversation manager.

        Args:
            summary_ratio: Ratio of messages to summarize vs keep when context overflow occurs.
                Value between 0.1 and 0.8. Defaults to 0.3 (summarize 30% of oldest messages).
            preserve_recent_messages: Minimum number of recent messages to always keep.
                Defaults to 10 messages.
            summarization_agent: Optional agent to use for summarization instead of the parent agent.
                If provided, this agent can use tools as part of the summarization process.
            summarization_system_prompt: Optional system prompt override for summarization.
                If None, uses the default summarization prompt.
        """
        super().__init__()
        if summarization_agent is not None and summarization_system_prompt is not None:
            raise ValueError(
                "Cannot provide both summarization_agent and summarization_system_prompt. "
                "Agents come with their own system prompt."
            )

        self.summary_ratio = max(0.1, min(0.8, summary_ratio))
        self.preserve_recent_messages = preserve_recent_messages
        self.summarization_agent = summarization_agent
        self.summarization_system_prompt = summarization_system_prompt
        self._summary_message: Optional[Message] = None

    @override
    def restore_from_session(self, state: dict[str, Any]) -> Optional[list[Message]]:
        """Restores the Summarizing Conversation manager from its previous state in a session.

        Args:
            state: The previous state of the Summarizing Conversation Manager.

        Returns:
            Optionally returns the previous conversation summary if it exists.
        """
        super().restore_from_session(state)
        self._summary_message = state.get("summary_message")
        return [self._summary_message] if self._summary_message else None

    def get_state(self) -> dict[str, Any]:
        """Returns a dictionary representation of the state for the Summarizing Conversation Manager."""
        return {"summary_message": self._summary_message, **super().get_state()}

    def apply_management(self, agent: "Agent", **kwargs: Any) -> None:
        """Apply management strategy to conversation history.

        For the summarizing conversation manager, no proactive management is performed.
        Summarization only occurs when there's a context overflow that triggers reduce_context.

        Args:
            agent: The agent whose conversation history will be managed.
                The agent's messages list is modified in-place.
            **kwargs: Additional keyword arguments for future extensibility.
        """
        # No proactive management - summarization only happens on context overflow
        pass

    def reduce_context(self, agent: "Agent", e: Optional[Exception] = None, **kwargs: Any) -> None:
        """Reduce context using summarization.

        Args:
            agent: The agent whose conversation history will be reduced.
                The agent's messages list is modified in-place.
            e: The exception that triggered the context reduction, if any.
            **kwargs: Additional keyword arguments for future extensibility.

        Raises:
            ContextWindowOverflowException: If the context cannot be summarized.
        """
        try:
            # Calculate how many messages to summarize
            messages_to_summarize_count = max(1, int(len(agent.messages) * self.summary_ratio))

            # Ensure we don't summarize recent messages
            messages_to_summarize_count = min(
                messages_to_summarize_count, len(agent.messages) - self.preserve_recent_messages
            )

            if messages_to_summarize_count <= 0:
                raise ContextWindowOverflowException("Cannot summarize: insufficient messages for summarization")

            # Adjust split point to avoid breaking ToolUse/ToolResult pairs
            messages_to_summarize_count = self._adjust_split_point_for_tool_pairs(
                agent.messages, messages_to_summarize_count
            )

            if messages_to_summarize_count <= 0:
                raise ContextWindowOverflowException("Cannot summarize: insufficient messages for summarization")

            # Extract messages to summarize
            messages_to_summarize = agent.messages[:messages_to_summarize_count]
            remaining_messages = agent.messages[messages_to_summarize_count:]

            # Keep track of the number of messages that have been summarized thus far.
            self.removed_message_count += len(messages_to_summarize)
            # If there is a summary message, don't count it in the removed_message_count.
            if self._summary_message:
                self.removed_message_count -= 1

            # Generate summary
            self._summary_message = self._generate_summary(messages_to_summarize, agent)

            # Replace the summarized messages with the summary
            agent.messages[:] = [self._summary_message] + remaining_messages

        except Exception as summarization_error:
            logger.error("Summarization failed: %s", summarization_error)
            raise summarization_error from e

    def _generate_summary(self, messages: List[Message], agent: "Agent") -> Message:
        """Generate a summary of the provided messages.

        Args:
            messages: The messages to summarize.
            agent: The agent instance to use for summarization.

        Returns:
            A message containing the conversation summary.

        Raises:
            Exception: If summary generation fails.
        """
        # Choose which agent to use for summarization
        summarization_agent = self.summarization_agent if self.summarization_agent is not None else agent

        # Save original system prompt, messages, and tool registry to restore later
        original_system_prompt = summarization_agent.system_prompt
        original_messages = summarization_agent.messages.copy()
        original_tool_registry = summarization_agent.tool_registry

        try:
            # Only override system prompt if no agent was provided during initialization
            if self.summarization_agent is None:
                # Use custom system prompt if provided, otherwise use default
                system_prompt = (
                    self.summarization_system_prompt
                    if self.summarization_system_prompt is not None
                    else DEFAULT_SUMMARIZATION_PROMPT
                )
                # Temporarily set the system prompt for summarization
                summarization_agent.system_prompt = system_prompt

            # Add no-op tool if agent has no tools to satisfy tool spec requirement
            if not summarization_agent.tool_names:
                tool_registry = ToolRegistry()
                tool_registry.register_tool(cast(AgentTool, noop_tool))
                summarization_agent.tool_registry = tool_registry

            summarization_agent.messages = messages

            # Use the agent to generate summary with rich content (can use tools if needed)
            result = summarization_agent("Please summarize this conversation.")
            return cast(Message, {**result.message, "role": "user"})

        finally:
            # Restore original agent state
            summarization_agent.system_prompt = original_system_prompt
            summarization_agent.messages = original_messages
            summarization_agent.tool_registry = original_tool_registry

    def _adjust_split_point_for_tool_pairs(self, messages: List[Message], split_point: int) -> int:
        """Adjust the split point to avoid breaking ToolUse/ToolResult pairs.

        Uses the same logic as SlidingWindowConversationManager for consistency.

        Args:
            messages: The full list of messages.
            split_point: The initially calculated split point.

        Returns:
            The adjusted split point that doesn't break ToolUse/ToolResult pairs.

        Raises:
            ContextWindowOverflowException: If no valid split point can be found.
        """
        if split_point > len(messages):
            raise ContextWindowOverflowException("Split point exceeds message array length")

        if split_point == len(messages):
            return split_point

        # Find the next valid split_point
        while split_point < len(messages):
            if (
                # Oldest message cannot be a toolResult because it needs a toolUse preceding it
                any("toolResult" in content for content in messages[split_point]["content"])
                or (
                    # Oldest message can be a toolUse only if a toolResult immediately follows it.
                    any("toolUse" in content for content in messages[split_point]["content"])
                    and split_point + 1 < len(messages)
                    and not any("toolResult" in content for content in messages[split_point + 1]["content"])
                )
            ):
                split_point += 1
            else:
                break
        else:
            # If we didn't find a valid split_point, then we throw
            raise ContextWindowOverflowException("Unable to trim conversation context!")

        return split_point
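
A hedged usage sketch (import paths assumed): summarize roughly 40% of the oldest messages on overflow while always keeping the eight most recent, using a custom summarization prompt. Note that providing both a summarization agent and a summarization prompt raises ValueError.

from strands import Agent  # import path assumed
from strands.agent.conversation_manager import (  # import path assumed
    SummarizingConversationManager,
)

manager = SummarizingConversationManager(
    summary_ratio=0.4,              # clamped to the 0.1-0.8 range
    preserve_recent_messages=8,
    summarization_system_prompt="Summarize the dialogue as terse bullet points.",
)

agent = Agent(conversation_manager=manager)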

__init__(summary_ratio=0.3, preserve_recent_messages=10, summarization_agent=None, summarization_system_prompt=None)

Initialize the summarizing conversation manager.

Parameters:

Name Type Description Default
summary_ratio float

Ratio of messages to summarize vs keep when context overflow occurs. Value between 0.1 and 0.8. Defaults to 0.3 (summarize 30% of oldest messages).

0.3
preserve_recent_messages int

Minimum number of recent messages to always keep. Defaults to 10 messages.

10
summarization_agent Optional[Agent]

Optional agent to use for summarization instead of the parent agent. If provided, this agent can use tools as part of the summarization process.

None
summarization_system_prompt Optional[str]

Optional system prompt override for summarization. If None, uses the default summarization prompt.

None
Source code in strands/agent/conversation_manager/summarizing_conversation_manager.py
def __init__(
    self,
    summary_ratio: float = 0.3,
    preserve_recent_messages: int = 10,
    summarization_agent: Optional["Agent"] = None,
    summarization_system_prompt: Optional[str] = None,
):
    """Initialize the summarizing conversation manager.

    Args:
        summary_ratio: Ratio of messages to summarize vs keep when context overflow occurs.
            Value between 0.1 and 0.8. Defaults to 0.3 (summarize 30% of oldest messages).
        preserve_recent_messages: Minimum number of recent messages to always keep.
            Defaults to 10 messages.
        summarization_agent: Optional agent to use for summarization instead of the parent agent.
            If provided, this agent can use tools as part of the summarization process.
        summarization_system_prompt: Optional system prompt override for summarization.
            If None, uses the default summarization prompt.
    """
    super().__init__()
    if summarization_agent is not None and summarization_system_prompt is not None:
        raise ValueError(
            "Cannot provide both summarization_agent and summarization_system_prompt. "
            "Agents come with their own system prompt."
        )

    self.summary_ratio = max(0.1, min(0.8, summary_ratio))
    self.preserve_recent_messages = preserve_recent_messages
    self.summarization_agent = summarization_agent
    self.summarization_system_prompt = summarization_system_prompt
    self._summary_message: Optional[Message] = None

apply_management(agent, **kwargs)

Apply management strategy to conversation history.

For the summarizing conversation manager, no proactive management is performed. Summarization only occurs when there's a context overflow that triggers reduce_context.

Parameters:

Name Type Description Default
agent Agent

The agent whose conversation history will be managed. The agent's messages list is modified in-place.

required
**kwargs Any

Additional keyword arguments for future extensibility.

{}
Source code in strands/agent/conversation_manager/summarizing_conversation_manager.py
def apply_management(self, agent: "Agent", **kwargs: Any) -> None:
    """Apply management strategy to conversation history.

    For the summarizing conversation manager, no proactive management is performed.
    Summarization only occurs when there's a context overflow that triggers reduce_context.

    Args:
        agent: The agent whose conversation history will be managed.
            The agent's messages list is modified in-place.
        **kwargs: Additional keyword arguments for future extensibility.
    """
    # No proactive management - summarization only happens on context overflow
    pass

get_state()

Returns a dictionary representation of the state for the Summarizing Conversation Manager.

Source code in strands/agent/conversation_manager/summarizing_conversation_manager.py
def get_state(self) -> dict[str, Any]:
    """Returns a dictionary representation of the state for the Summarizing Conversation Manager."""
    return {"summary_message": self._summary_message, **super().get_state()}

reduce_context(agent, e=None, **kwargs)

Reduce context using summarization.

Parameters:

Name Type Description Default
agent Agent

The agent whose conversation history will be reduced. The agent's messages list is modified in-place.

required
e Optional[Exception]

The exception that triggered the context reduction, if any.

None
**kwargs Any

Additional keyword arguments for future extensibility.

{}

Raises:

Type Description
ContextWindowOverflowException

If the context cannot be summarized.

Source code in strands/agent/conversation_manager/summarizing_conversation_manager.py
def reduce_context(self, agent: "Agent", e: Optional[Exception] = None, **kwargs: Any) -> None:
    """Reduce context using summarization.

    Args:
        agent: The agent whose conversation history will be reduced.
            The agent's messages list is modified in-place.
        e: The exception that triggered the context reduction, if any.
        **kwargs: Additional keyword arguments for future extensibility.

    Raises:
        ContextWindowOverflowException: If the context cannot be summarized.
    """
    try:
        # Calculate how many messages to summarize
        messages_to_summarize_count = max(1, int(len(agent.messages) * self.summary_ratio))

        # Ensure we don't summarize recent messages
        messages_to_summarize_count = min(
            messages_to_summarize_count, len(agent.messages) - self.preserve_recent_messages
        )

        if messages_to_summarize_count <= 0:
            raise ContextWindowOverflowException("Cannot summarize: insufficient messages for summarization")

        # Adjust split point to avoid breaking ToolUse/ToolResult pairs
        messages_to_summarize_count = self._adjust_split_point_for_tool_pairs(
            agent.messages, messages_to_summarize_count
        )

        if messages_to_summarize_count <= 0:
            raise ContextWindowOverflowException("Cannot summarize: insufficient messages for summarization")

        # Extract messages to summarize
        messages_to_summarize = agent.messages[:messages_to_summarize_count]
        remaining_messages = agent.messages[messages_to_summarize_count:]

        # Keep track of the number of messages that have been summarized thus far.
        self.removed_message_count += len(messages_to_summarize)
        # If there is a summary message, don't count it in the removed_message_count.
        if self._summary_message:
            self.removed_message_count -= 1

        # Generate summary
        self._summary_message = self._generate_summary(messages_to_summarize, agent)

        # Replace the summarized messages with the summary
        agent.messages[:] = [self._summary_message] + remaining_messages

    except Exception as summarization_error:
        logger.error("Summarization failed: %s", summarization_error)
        raise summarization_error from e

restore_from_session(state)

Restores the Summarizing Conversation manager from its previous state in a session.

Parameters:

Name Type Description Default
state dict[str, Any]

The previous state of the Summarizing Conversation Manager.

required

Returns:

Type Description
Optional[list[Message]]

Optionally returns the previous conversation summary if it exists.

Source code in strands/agent/conversation_manager/summarizing_conversation_manager.py
@override
def restore_from_session(self, state: dict[str, Any]) -> Optional[list[Message]]:
    """Restores the Summarizing Conversation manager from its previous state in a session.

    Args:
        state: The previous state of the Summarizing Conversation Manager.

    Returns:
        Optionally returns the previous conversation summary if it exists.
    """
    super().restore_from_session(state)
    self._summary_message = state.get("summary_message")
    return [self._summary_message] if self._summary_message else None

ToolRegistry

Central registry for all tools available to the agent.

This class manages tool registration, validation, discovery, and invocation.
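
The Agent wires a registry up automatically (see the constructor above); the hedged sketch below shows direct use, with the local file path being purely illustrative:

from strands.tools.registry import ToolRegistry

registry = ToolRegistry()

# Accepts file paths, module import paths, modules, or AgentTool instances,
# as detailed in process_tools below.
registry.process_tools(["./tools/calculator.py", "strands_tools.file_read"])

# Load and validate the registered tools (True also scans a local tools directory).
registry.initialize_tools(False)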

Source code in strands/tools/registry.py
class ToolRegistry:
    """Central registry for all tools available to the agent.

    This class manages tool registration, validation, discovery, and invocation.
    """

    def __init__(self) -> None:
        """Initialize the tool registry."""
        self.registry: Dict[str, AgentTool] = {}
        self.dynamic_tools: Dict[str, AgentTool] = {}
        self.tool_config: Optional[Dict[str, Any]] = None
        self._tool_providers: List[ToolProvider] = []
        self._registry_id = str(uuid.uuid4())

    def process_tools(self, tools: List[Any]) -> List[str]:
        """Process tools list.

        Process list of tools that can contain local file path string, module import path string,
        imported modules, @tool decorated functions, or instances of AgentTool.

        Args:
            tools: List of tool specifications. Can be:

                1. Local file path to a module based tool: `./path/to/module/tool.py`
                2. Module import path

                    2.1. Path to a module based tool: `strands_tools.file_read`
                    2.2. Path to a module with multiple AgentTool instances (@tool decorated):
                        `tests.fixtures.say_tool`
                    2.3. Path to a module and a specific function: `tests.fixtures.say_tool:say`

                3. A module for a module based tool
                4. Instances of AgentTool (@tool decorated functions)
                5. Dictionaries with name/path keys (deprecated)


        Returns:
            List of tool names that were processed.
        """
        tool_names = []

        def add_tool(tool: Any) -> None:
            try:
                # String based tool
                # Can be a file path, a module path, or a module path with a targeted function. Examples:
                # './path/to/tool.py'
                # 'my.module.tool'
                # 'my.module.tool:tool_name'
                if isinstance(tool, str):
                    tools = load_tool_from_string(tool)
                    for a_tool in tools:
                        a_tool.mark_dynamic()
                        self.register_tool(a_tool)
                        tool_names.append(a_tool.tool_name)

                # Dictionary with name and path
                elif isinstance(tool, dict) and "name" in tool and "path" in tool:
                    tools = load_tool_from_string(tool["path"])

                    tool_found = False
                    for a_tool in tools:
                        if a_tool.tool_name == tool["name"]:
                            a_tool.mark_dynamic()
                            self.register_tool(a_tool)
                            tool_names.append(a_tool.tool_name)
                            tool_found = True

                    if not tool_found:
                        raise ValueError(f'Tool "{tool["name"]}" not found in "{tool["path"]}"')

                # Dictionary with path only
                elif isinstance(tool, dict) and "path" in tool:
                    tools = load_tool_from_string(tool["path"])

                    for a_tool in tools:
                        a_tool.mark_dynamic()
                        self.register_tool(a_tool)
                        tool_names.append(a_tool.tool_name)

                # Imported Python module
                elif hasattr(tool, "__file__") and inspect.ismodule(tool):
                    # Extract the tool name from the module name
                    module_tool_name = tool.__name__.split(".")[-1]

                    tools = load_tools_from_module(tool, module_tool_name)
                    for a_tool in tools:
                        self.register_tool(a_tool)
                        tool_names.append(a_tool.tool_name)

                # AgentTool instances (which also covers @tool decorated functions)
                elif isinstance(tool, AgentTool):
                    self.register_tool(tool)
                    tool_names.append(tool.tool_name)

                # Nested iterable (list, tuple, etc.) - add each sub-tool
                elif isinstance(tool, Iterable) and not isinstance(tool, (str, bytes, bytearray)):
                    for t in tool:
                        add_tool(t)

                # ToolProvider - load its tools and track the provider for cleanup
                elif isinstance(tool, ToolProvider):
                    self._tool_providers.append(tool)
                    tool.add_consumer(self._registry_id)

                    async def get_tools() -> Sequence[AgentTool]:
                        return await tool.load_tools()

                    provider_tools = run_async(get_tools)

                    for provider_tool in provider_tools:
                        self.register_tool(provider_tool)
                        tool_names.append(provider_tool.tool_name)
                else:
                    logger.warning("tool=<%s> | unrecognized tool specification", tool)

            except Exception as e:
                exception_str = str(e)
                logger.exception("tool_name=<%s> | failed to load tool", tool)
                raise ValueError(f"Failed to load tool {tool}: {exception_str}") from e

        for tool in tools:
            add_tool(tool)
        return tool_names

    def load_tool_from_filepath(self, tool_name: str, tool_path: str) -> None:
        """DEPRECATED: Load a tool from a file path.

        Args:
            tool_name: Name of the tool.
            tool_path: Path to the tool file.

        Raises:
            FileNotFoundError: If the tool file is not found.
            ValueError: If the tool cannot be loaded.
        """
        warnings.warn(
            "load_tool_from_filepath is deprecated and will be removed in Strands SDK 2.0. "
            "`process_tools` automatically handles loading tools from a filepath.",
            DeprecationWarning,
            stacklevel=2,
        )

        from .loader import ToolLoader

        try:
            tool_path = expanduser(tool_path)
            if not os.path.exists(tool_path):
                raise FileNotFoundError(f"Tool file not found: {tool_path}")

            loaded_tools = ToolLoader.load_tools(tool_path, tool_name)
            for t in loaded_tools:
                t.mark_dynamic()
                # Because we're explicitly registering the tool we don't need an allowlist
                self.register_tool(t)
        except Exception as e:
            exception_str = str(e)
            logger.exception("tool_name=<%s> | failed to load tool", tool_name)
            raise ValueError(f"Failed to load tool {tool_name}: {exception_str}") from e

    def get_all_tools_config(self) -> Dict[str, Any]:
        """Dynamically generate tool configuration by combining built-in and dynamic tools.

        Returns:
            Dictionary containing all tool configurations.
        """
        tool_config = {}
        logger.debug("getting tool configurations")

        # Add all registered tools
        for tool_name, tool in self.registry.items():
            # Make a deep copy to avoid modifying the original
            spec = tool.tool_spec.copy()
            try:
                # Normalize the schema before validation
                spec = normalize_tool_spec(spec)
                self.validate_tool_spec(spec)
                tool_config[tool_name] = spec
                logger.debug("tool_name=<%s> | loaded tool config", tool_name)
            except ValueError as e:
                logger.warning("tool_name=<%s> | spec validation failed | %s", tool_name, e)

        # Add any dynamic tools
        for tool_name, tool in self.dynamic_tools.items():
            if tool_name not in tool_config:
                # Make a deep copy to avoid modifying the original
                spec = tool.tool_spec.copy()
                try:
                    # Normalize the schema before validation
                    spec = normalize_tool_spec(spec)
                    self.validate_tool_spec(spec)
                    tool_config[tool_name] = spec
                    logger.debug("tool_name=<%s> | loaded dynamic tool config", tool_name)
                except ValueError as e:
                    logger.warning("tool_name=<%s> | dynamic tool spec validation failed | %s", tool_name, e)

        logger.debug("tool_count=<%s> | tools configured", len(tool_config))
        return tool_config

    # mypy has problems converting between DecoratedFunctionTool <-> AgentTool
    def register_tool(self, tool: AgentTool) -> None:
        """Register a tool function with the given name.

        Args:
            tool: The tool to register.
        """
        logger.debug(
            "tool_name=<%s>, tool_type=<%s>, is_dynamic=<%s> | registering tool",
            tool.tool_name,
            tool.tool_type,
            tool.is_dynamic,
        )

        # Check duplicate tool name, throw on duplicate tool names except if hot_reloading is enabled
        if tool.tool_name in self.registry and not tool.supports_hot_reload:
            raise ValueError(
                f"Tool name '{tool.tool_name}' already exists. Cannot register tools with exact same name."
            )

        # Check for normalized name conflicts (- vs _)
        if self.registry.get(tool.tool_name) is None:
            normalized_name = tool.tool_name.replace("-", "_")

            matching_tools = [
                tool_name
                for (tool_name, tool) in self.registry.items()
                if tool_name.replace("-", "_") == normalized_name
            ]

            if matching_tools:
                raise ValueError(
                    f"Tool name '{tool.tool_name}' already exists as '{matching_tools[0]}'."
                    " Cannot add a duplicate tool which differs by a '-' or '_'"
                )

        # Register in main registry
        self.registry[tool.tool_name] = tool

        # Register in dynamic tools if applicable
        if tool.is_dynamic:
            self.dynamic_tools[tool.tool_name] = tool

            if not tool.supports_hot_reload:
                logger.debug("tool_name=<%s>, tool_type=<%s> | skipping hot reloading", tool.tool_name, tool.tool_type)
                return

            logger.debug(
                "tool_name=<%s>, tool_registry=<%s>, dynamic_tools=<%s> | tool registered",
                tool.tool_name,
                list(self.registry.keys()),
                list(self.dynamic_tools.keys()),
            )

    def replace(self, new_tool: AgentTool) -> None:
        """Replace an existing tool with a new implementation.

        This performs a swap of the tool implementation in the registry.
        The replacement takes effect on the next agent invocation.

        Args:
            new_tool: New tool implementation. Its name must match the tool being replaced.

        Raises:
            ValueError: If the tool doesn't exist.
        """
        tool_name = new_tool.tool_name

        if tool_name not in self.registry:
            raise ValueError(f"Cannot replace tool '{tool_name}' - tool does not exist")

        # Update main registry
        self.registry[tool_name] = new_tool

        # Update dynamic_tools to match new tool's dynamic status
        if new_tool.is_dynamic:
            self.dynamic_tools[tool_name] = new_tool
        elif tool_name in self.dynamic_tools:
            del self.dynamic_tools[tool_name]

    def get_tools_dirs(self) -> List[Path]:
        """Get all tool directory paths.

        Returns:
            A list of Path objects for current working directory's "./tools/".
        """
        # Current working directory's tools directory
        cwd_tools_dir = Path.cwd() / "tools"

        # Return all directories that exist
        tool_dirs = []
        for directory in [cwd_tools_dir]:
            if directory.exists() and directory.is_dir():
                tool_dirs.append(directory)
                logger.debug("tools_dir=<%s> | found tools directory", directory)
            else:
                logger.debug("tools_dir=<%s> | tools directory not found", directory)

        return tool_dirs

    def discover_tool_modules(self) -> Dict[str, Path]:
        """Discover available tool modules in all tools directories.

        Returns:
            Dictionary mapping tool names to their full paths.
        """
        tool_modules = {}
        tools_dirs = self.get_tools_dirs()

        for tools_dir in tools_dirs:
            logger.debug("tools_dir=<%s> | scanning", tools_dir)

            # Find Python tools
            for extension in ["*.py"]:
                for item in tools_dir.glob(extension):
                    if item.is_file() and not item.name.startswith("__"):
                        module_name = item.stem
                        # If tool already exists, newer paths take precedence
                        if module_name in tool_modules:
                            logger.debug("tools_dir=<%s>, module_name=<%s> | tool overridden", tools_dir, module_name)
                        tool_modules[module_name] = item

        logger.debug("tool_modules=<%s> | discovered", list(tool_modules.keys()))
        return tool_modules

    def reload_tool(self, tool_name: str) -> None:
        """Reload a specific tool module.

        Args:
            tool_name: Name of the tool to reload.

        Raises:
            FileNotFoundError: If the tool file cannot be found.
            ImportError: If there are issues importing the tool module.
            ValueError: If the tool specification is invalid or required components are missing.
            Exception: For other errors during tool reloading.
        """
        try:
            # Check for tool file
            logger.debug("tool_name=<%s> | searching directories for tool", tool_name)
            tools_dirs = self.get_tools_dirs()
            tool_path = None

            # Search for the tool file in all tool directories
            for tools_dir in tools_dirs:
                temp_path = tools_dir / f"{tool_name}.py"
                if temp_path.exists():
                    tool_path = temp_path
                    break

            if not tool_path:
                raise FileNotFoundError(f"No tool file found for: {tool_name}")

            logger.debug("tool_name=<%s> | reloading tool", tool_name)

            # Add tool directory to path temporarily
            tool_dir = str(tool_path.parent)
            sys.path.insert(0, tool_dir)
            try:
                # Load the module directly using spec
                spec = util.spec_from_file_location(tool_name, str(tool_path))
                if spec is None:
                    raise ImportError(f"Could not load spec for {tool_name}")

                module = util.module_from_spec(spec)
                sys.modules[tool_name] = module

                if spec.loader is None:
                    raise ImportError(f"Could not load {tool_name}")

                spec.loader.exec_module(module)

            finally:
                # Remove the temporary path
                sys.path.remove(tool_dir)

            # Look for function-based tools first
            try:
                function_tools = self._scan_module_for_tools(module)

                if function_tools:
                    for function_tool in function_tools:
                        # Register the function-based tool
                        self.register_tool(function_tool)

                        # Update tool configuration if available
                        if self.tool_config is not None:
                            self._update_tool_config(self.tool_config, {"spec": function_tool.tool_spec})

                    logger.debug("tool_name=<%s> | successfully reloaded function-based tool from module", tool_name)
                    return
            except ImportError:
                logger.debug("function tool loader not available | falling back to traditional tools")

            # Fall back to traditional module-level tools
            if not hasattr(module, "TOOL_SPEC"):
                raise ValueError(
                    f"Tool {tool_name} is missing TOOL_SPEC (neither at module level nor as a decorated function)"
                )

            expected_func_name = tool_name
            if not hasattr(module, expected_func_name):
                raise ValueError(f"Tool {tool_name} is missing {expected_func_name} function")

            tool_function = getattr(module, expected_func_name)
            if not callable(tool_function):
                raise ValueError(f"Tool {tool_name} function is not callable")

            # Validate tool spec
            self.validate_tool_spec(module.TOOL_SPEC)

            new_tool = PythonAgentTool(tool_name, module.TOOL_SPEC, tool_function)

            # Register the tool
            self.register_tool(new_tool)

            # Update tool configuration if available
            if self.tool_config is not None:
                self._update_tool_config(self.tool_config, {"spec": module.TOOL_SPEC})
            logger.debug("tool_name=<%s> | successfully reloaded tool", tool_name)

        except Exception:
            logger.exception("tool_name=<%s> | failed to reload tool", tool_name)
            raise

    def initialize_tools(self, load_tools_from_directory: bool = False) -> None:
        """Initialize all tools by discovering and loading them dynamically from all tool directories.

        Args:
            load_tools_from_directory: Whether to reload tools if changes are made at runtime.
        """
        self.tool_config = None

        # Then discover and load other tools
        tool_modules = self.discover_tool_modules()
        successful_loads = 0
        total_tools = len(tool_modules)
        tool_import_errors = {}

        # Process Python tools
        for tool_name, tool_path in tool_modules.items():
            if tool_name in ["__init__"]:
                continue

            if not load_tools_from_directory:
                continue

            try:
                # Add directory to path temporarily
                tool_dir = str(tool_path.parent)
                sys.path.insert(0, tool_dir)
                try:
                    module = import_module(tool_name)
                finally:
                    if tool_dir in sys.path:
                        sys.path.remove(tool_dir)

                # Process Python tool
                if tool_path.suffix == ".py":
                    # Check for decorated function tools first
                    try:
                        function_tools = self._scan_module_for_tools(module)

                        if function_tools:
                            for function_tool in function_tools:
                                self.register_tool(function_tool)
                                successful_loads += 1
                        else:
                            # Fall back to traditional tools
                            # Check for expected tool function
                            expected_func_name = tool_name
                            if hasattr(module, expected_func_name):
                                tool_function = getattr(module, expected_func_name)
                                if not callable(tool_function):
                                    logger.warning(
                                        "tool_name=<%s> | tool function exists but is not callable", tool_name
                                    )
                                    continue

                                # Validate tool spec before registering
                                if not hasattr(module, "TOOL_SPEC"):
                                    logger.warning("tool_name=<%s> | tool is missing TOOL_SPEC | skipping", tool_name)
                                    continue

                                try:
                                    self.validate_tool_spec(module.TOOL_SPEC)
                                except ValueError as e:
                                    logger.warning("tool_name=<%s> | tool spec validation failed | %s", tool_name, e)
                                    continue

                                tool_spec = module.TOOL_SPEC
                                tool = PythonAgentTool(tool_name, tool_spec, tool_function)
                                self.register_tool(tool)
                                successful_loads += 1

                            else:
                                logger.warning("tool_name=<%s> | tool function missing", tool_name)
                    except ImportError:
                        # Function tool loader not available, fall back to traditional tools
                        # Check for expected tool function
                        expected_func_name = tool_name
                        if hasattr(module, expected_func_name):
                            tool_function = getattr(module, expected_func_name)
                            if not callable(tool_function):
                                logger.warning("tool_name=<%s> | tool function exists but is not callable", tool_name)
                                continue

                            # Validate tool spec before registering
                            if not hasattr(module, "TOOL_SPEC"):
                                logger.warning("tool_name=<%s> | tool is missing TOOL_SPEC | skipping", tool_name)
                                continue

                            try:
                                self.validate_tool_spec(module.TOOL_SPEC)
                            except ValueError as e:
                                logger.warning("tool_name=<%s> | tool spec validation failed | %s", tool_name, e)
                                continue

                            tool_spec = module.TOOL_SPEC
                            tool = PythonAgentTool(tool_name, tool_spec, tool_function)
                            self.register_tool(tool)
                            successful_loads += 1

                        else:
                            logger.warning("tool_name=<%s> | tool function missing", tool_name)

            except Exception as e:
                logger.warning("tool_name=<%s> | failed to load tool | %s", tool_name, e)
                tool_import_errors[tool_name] = str(e)

        # Log summary
        logger.debug("tool_count=<%d>, success_count=<%d> | finished loading tools", total_tools, successful_loads)
        if tool_import_errors:
            for tool_name, error in tool_import_errors.items():
                logger.debug("tool_name=<%s> | import error | %s", tool_name, error)

    def get_all_tool_specs(self) -> list[ToolSpec]:
        """Get all the tool specs for all tools in this registry..

        Returns:
            A list of ToolSpecs.
        """
        all_tools = self.get_all_tools_config()
        tools: List[ToolSpec] = [tool_spec for tool_spec in all_tools.values()]
        return tools

    def register_dynamic_tool(self, tool: AgentTool) -> None:
        """Register a tool dynamically for temporary use.

        Args:
            tool: The tool to register dynamically

        Raises:
            ValueError: If a tool with this name already exists
        """
        if tool.tool_name in self.registry or tool.tool_name in self.dynamic_tools:
            raise ValueError(f"Tool '{tool.tool_name}' already exists")

        self.dynamic_tools[tool.tool_name] = tool
        logger.debug("Registered dynamic tool: %s", tool.tool_name)

    def validate_tool_spec(self, tool_spec: ToolSpec) -> None:
        """Validate tool specification against required schema.

        Args:
            tool_spec: Tool specification to validate.

        Raises:
            ValueError: If the specification is invalid.
        """
        required_fields = ["name", "description"]
        missing_fields = [field for field in required_fields if field not in tool_spec]
        if missing_fields:
            raise ValueError(f"Missing required fields in tool spec: {', '.join(missing_fields)}")

        if "json" not in tool_spec["inputSchema"]:
            # Convert direct schema to proper format
            json_schema = normalize_schema(tool_spec["inputSchema"])
            tool_spec["inputSchema"] = {"json": json_schema}
            return

        # Validate json schema fields
        json_schema = tool_spec["inputSchema"]["json"]

        # Ensure schema has required fields
        if "type" not in json_schema:
            json_schema["type"] = "object"
        if "properties" not in json_schema:
            json_schema["properties"] = {}
        if "required" not in json_schema:
            json_schema["required"] = []

        # Validate property definitions
        for prop_name, prop_def in json_schema.get("properties", {}).items():
            if not isinstance(prop_def, dict):
                json_schema["properties"][prop_name] = {
                    "type": "string",
                    "description": f"Property {prop_name}",
                }
                continue

            # It is expected that type and description are already included in referenced $def.
            if "$ref" in prop_def:
                continue

            has_composition = any(kw in prop_def for kw in _COMPOSITION_KEYWORDS)
            if "type" not in prop_def and not has_composition:
                prop_def["type"] = "string"
            if "description" not in prop_def:
                prop_def["description"] = f"Property {prop_name}"

    class NewToolDict(TypedDict):
        """Dictionary type for adding or updating a tool in the configuration.

        Attributes:
            spec: The tool specification that defines the tool's interface and behavior.
        """

        spec: ToolSpec

    def _update_tool_config(self, tool_config: Dict[str, Any], new_tool: NewToolDict) -> None:
        """Update tool configuration with a new tool.

        Args:
            tool_config: The current tool configuration dictionary.
            new_tool: The new tool to add/update.

        Raises:
            ValueError: If the new tool spec is invalid.
        """
        if not new_tool.get("spec"):
            raise ValueError("Invalid tool format - missing spec")

        # Validate tool spec before updating
        try:
            self.validate_tool_spec(new_tool["spec"])
        except ValueError as e:
            raise ValueError(f"Tool specification validation failed: {str(e)}") from e

        new_tool_name = new_tool["spec"]["name"]
        existing_tool_idx = None

        # Find if tool already exists
        for idx, tool_entry in enumerate(tool_config["tools"]):
            if tool_entry["toolSpec"]["name"] == new_tool_name:
                existing_tool_idx = idx
                break

        # Update existing tool or add new one
        new_tool_entry = {"toolSpec": new_tool["spec"]}
        if existing_tool_idx is not None:
            tool_config["tools"][existing_tool_idx] = new_tool_entry
            logger.debug("tool_name=<%s> | updated existing tool", new_tool_name)
        else:
            tool_config["tools"].append(new_tool_entry)
            logger.debug("tool_name=<%s> | added new tool", new_tool_name)

    def _scan_module_for_tools(self, module: Any) -> List[AgentTool]:
        """Scan a module for function-based tools.

        Args:
            module: The module to scan.

        Returns:
            List of FunctionTool instances found in the module.
        """
        tools: List[AgentTool] = []

        for name, obj in inspect.getmembers(module):
            if isinstance(obj, DecoratedFunctionTool):
                # Create a function tool with correct name
                try:
                    # Cast as AgentTool for mypy
                    tools.append(cast(AgentTool, obj))
                except Exception as e:
                    logger.warning("tool_name=<%s> | failed to create function tool | %s", name, e)

        return tools

    def cleanup(self, **kwargs: Any) -> None:
        """Synchronously clean up all tool providers in this registry."""
        # Attempt cleanup of all providers even if one fails to minimize resource leakage
        exceptions = []
        for provider in self._tool_providers:
            try:
                provider.remove_consumer(self._registry_id)
                logger.debug("provider=<%s> | removed provider consumer", type(provider).__name__)
            except Exception as e:
                exceptions.append(e)
                logger.error(
                    "provider=<%s>, error=<%s> | failed to remove provider consumer", type(provider).__name__, e
                )

        if exceptions:
            raise exceptions[0]
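
For orientation, a minimal end-to-end sketch of the registry (the registry import path follows the "Source code in strands/tools/registry.py" note above; treating the @tool decorator as a top-level strands export is an assumption):

from strands import tool  # assumption: top-level export of the @tool decorator
from strands.tools.registry import ToolRegistry

@tool
def shout(text: str) -> str:
    """Return the input text upper-cased."""
    return text.upper()

registry = ToolRegistry()

# An AgentTool instance produced by @tool is one of the accepted specifications;
# file paths and module import strings work the same way (see process_tools).
names = registry.process_tools([shout])
print(names)                                    # ['shout']
print([spec["name"] for spec in registry.get_all_tool_specs()])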

NewToolDict

Bases: TypedDict

Dictionary type for adding or updating a tool in the configuration.

Attributes:

Name Type Description
spec ToolSpec

The tool specification that defines the tool's interface and behavior.

Source code in strands/tools/registry.py
class NewToolDict(TypedDict):
    """Dictionary type for adding or updating a tool in the configuration.

    Attributes:
        spec: The tool specification that defines the tool's interface and behavior.
    """

    spec: ToolSpec

__init__()

Initialize the tool registry.

Source code in strands/tools/registry.py
def __init__(self) -> None:
    """Initialize the tool registry."""
    self.registry: Dict[str, AgentTool] = {}
    self.dynamic_tools: Dict[str, AgentTool] = {}
    self.tool_config: Optional[Dict[str, Any]] = None
    self._tool_providers: List[ToolProvider] = []
    self._registry_id = str(uuid.uuid4())

cleanup(**kwargs)

Synchronously clean up all tool providers in this registry.

Source code in strands/tools/registry.py
def cleanup(self, **kwargs: Any) -> None:
    """Synchronously clean up all tool providers in this registry."""
    # Attempt cleanup of all providers even if one fails to minimize resource leakage
    exceptions = []
    for provider in self._tool_providers:
        try:
            provider.remove_consumer(self._registry_id)
            logger.debug("provider=<%s> | removed provider consumer", type(provider).__name__)
        except Exception as e:
            exceptions.append(e)
            logger.error(
                "provider=<%s>, error=<%s> | failed to remove provider consumer", type(provider).__name__, e
            )

    if exceptions:
        raise exceptions[0]
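
A small sketch of pairing registration with cleanup so that provider-backed tools always release this registry as a consumer, even when the work in between fails (only the API shown in this class is used):

from strands.tools.registry import ToolRegistry

registry = ToolRegistry()
my_tools: list = []  # would typically hold AgentTool and ToolProvider instances
try:
    registry.process_tools(my_tools)
    # ... run the agent with the registered tools ...
finally:
    # Every ToolProvider the registry collected is asked to drop this consumer;
    # all providers are attempted and the first failure, if any, is re-raised.
    registry.cleanup()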

discover_tool_modules()

Discover available tool modules in all tools directories.

Returns:

Type Description
Dict[str, Path]

Dictionary mapping tool names to their full paths.

Source code in strands/tools/registry.py
def discover_tool_modules(self) -> Dict[str, Path]:
    """Discover available tool modules in all tools directories.

    Returns:
        Dictionary mapping tool names to their full paths.
    """
    tool_modules = {}
    tools_dirs = self.get_tools_dirs()

    for tools_dir in tools_dirs:
        logger.debug("tools_dir=<%s> | scanning", tools_dir)

        # Find Python tools
        for extension in ["*.py"]:
            for item in tools_dir.glob(extension):
                if item.is_file() and not item.name.startswith("__"):
                    module_name = item.stem
                    # If tool already exists, newer paths take precedence
                    if module_name in tool_modules:
                        logger.debug("tools_dir=<%s>, module_name=<%s> | tool overridden", tools_dir, module_name)
                    tool_modules[module_name] = item

    logger.debug("tool_modules=<%s> | discovered", list(tool_modules.keys()))
    return tool_modules
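
A quick sketch of inspecting what discovery finds; the ./tools/weather.py module mentioned in the comment is purely illustrative:

from strands.tools.registry import ToolRegistry

registry = ToolRegistry()
# Maps each module stem under ./tools/ to its path, skipping dunder files,
# e.g. {"weather": Path("tools/weather.py")} when ./tools/weather.py exists.
for name, path in registry.discover_tool_modules().items():
    print(f"{name} -> {path}")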

get_all_tool_specs()

Get all the tool specs for all tools in this registry.

Returns:

Type Description
list[ToolSpec]

A list of ToolSpecs.

Source code in strands/tools/registry.py
def get_all_tool_specs(self) -> list[ToolSpec]:
    """Get all the tool specs for all tools in this registry..

    Returns:
        A list of ToolSpecs.
    """
    all_tools = self.get_all_tools_config()
    tools: List[ToolSpec] = [tool_spec for tool_spec in all_tools.values()]
    return tools
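
A short sketch of reading back the validated specs, for example to include in a model request (the top-level @tool export is an assumption):

from strands import tool  # assumption: top-level export of the @tool decorator
from strands.tools.registry import ToolRegistry

@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

registry = ToolRegistry()
registry.process_tools([add])
for spec in registry.get_all_tool_specs():
    # Each spec has passed normalize_tool_spec/validate_tool_spec, so "name",
    # "description", and "inputSchema" are present.
    print(spec["name"], "-", spec["description"])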

get_all_tools_config()

Dynamically generate tool configuration by combining built-in and dynamic tools.

Returns:

Type Description
Dict[str, Any]

Dictionary containing all tool configurations.

Source code in strands/tools/registry.py
def get_all_tools_config(self) -> Dict[str, Any]:
    """Dynamically generate tool configuration by combining built-in and dynamic tools.

    Returns:
        Dictionary containing all tool configurations.
    """
    tool_config = {}
    logger.debug("getting tool configurations")

    # Add all registered tools
    for tool_name, tool in self.registry.items():
        # Make a deep copy to avoid modifying the original
        spec = tool.tool_spec.copy()
        try:
            # Normalize the schema before validation
            spec = normalize_tool_spec(spec)
            self.validate_tool_spec(spec)
            tool_config[tool_name] = spec
            logger.debug("tool_name=<%s> | loaded tool config", tool_name)
        except ValueError as e:
            logger.warning("tool_name=<%s> | spec validation failed | %s", tool_name, e)

    # Add any dynamic tools
    for tool_name, tool in self.dynamic_tools.items():
        if tool_name not in tool_config:
            # Make a deep copy to avoid modifying the original
            spec = tool.tool_spec.copy()
            try:
                # Normalize the schema before validation
                spec = normalize_tool_spec(spec)
                self.validate_tool_spec(spec)
                tool_config[tool_name] = spec
                logger.debug("tool_name=<%s> | loaded dynamic tool config", tool_name)
            except ValueError as e:
                logger.warning("tool_name=<%s> | dynamic tool spec validation failed | %s", tool_name, e)

    logger.debug("tool_count=<%s> | tools configured", len(tool_config))
    return tool_config
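
The same information in dictionary form, keyed by tool name (get_all_tool_specs simply returns the values of this mapping); a minimal sketch using only the API shown above:

from strands.tools.registry import ToolRegistry

registry = ToolRegistry()
config = registry.get_all_tools_config()      # empty dict until tools are registered
for name, spec in config.items():
    print(name, "->", spec["description"])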

get_tools_dirs()

Get all tool directory paths.

Returns:

Type Description
List[Path]

A list of Path objects for current working directory's "./tools/".

Source code in strands/tools/registry.py
def get_tools_dirs(self) -> List[Path]:
    """Get all tool directory paths.

    Returns:
        A list of Path objects for current working directory's "./tools/".
    """
    # Current working directory's tools directory
    cwd_tools_dir = Path.cwd() / "tools"

    # Return all directories that exist
    tool_dirs = []
    for directory in [cwd_tools_dir]:
        if directory.exists() and directory.is_dir():
            tool_dirs.append(directory)
            logger.debug("tools_dir=<%s> | found tools directory", directory)
        else:
            logger.debug("tools_dir=<%s> | tools directory not found", directory)

    return tool_dirs
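
As the source shows, only the current working directory's ./tools/ is considered; a quick check:

from strands.tools.registry import ToolRegistry

registry = ToolRegistry()
# Returns [<cwd>/tools] when that directory exists, otherwise an empty list.
print(registry.get_tools_dirs())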

initialize_tools(load_tools_from_directory=False)

Initialize all tools by discovering and loading them dynamically from all tool directories.

Parameters:

Name Type Description Default
load_tools_from_directory bool

Whether to reload tools if changes are made at runtime.

False
Source code in strands/tools/registry.py
def initialize_tools(self, load_tools_from_directory: bool = False) -> None:
    """Initialize all tools by discovering and loading them dynamically from all tool directories.

    Args:
        load_tools_from_directory: Whether to reload tools if changes are made at runtime.
    """
    self.tool_config = None

    # Then discover and load other tools
    tool_modules = self.discover_tool_modules()
    successful_loads = 0
    total_tools = len(tool_modules)
    tool_import_errors = {}

    # Process Python tools
    for tool_name, tool_path in tool_modules.items():
        if tool_name in ["__init__"]:
            continue

        if not load_tools_from_directory:
            continue

        try:
            # Add directory to path temporarily
            tool_dir = str(tool_path.parent)
            sys.path.insert(0, tool_dir)
            try:
                module = import_module(tool_name)
            finally:
                if tool_dir in sys.path:
                    sys.path.remove(tool_dir)

            # Process Python tool
            if tool_path.suffix == ".py":
                # Check for decorated function tools first
                try:
                    function_tools = self._scan_module_for_tools(module)

                    if function_tools:
                        for function_tool in function_tools:
                            self.register_tool(function_tool)
                            successful_loads += 1
                    else:
                        # Fall back to traditional tools
                        # Check for expected tool function
                        expected_func_name = tool_name
                        if hasattr(module, expected_func_name):
                            tool_function = getattr(module, expected_func_name)
                            if not callable(tool_function):
                                logger.warning(
                                    "tool_name=<%s> | tool function exists but is not callable", tool_name
                                )
                                continue

                            # Validate tool spec before registering
                            if not hasattr(module, "TOOL_SPEC"):
                                logger.warning("tool_name=<%s> | tool is missing TOOL_SPEC | skipping", tool_name)
                                continue

                            try:
                                self.validate_tool_spec(module.TOOL_SPEC)
                            except ValueError as e:
                                logger.warning("tool_name=<%s> | tool spec validation failed | %s", tool_name, e)
                                continue

                            tool_spec = module.TOOL_SPEC
                            tool = PythonAgentTool(tool_name, tool_spec, tool_function)
                            self.register_tool(tool)
                            successful_loads += 1

                        else:
                            logger.warning("tool_name=<%s> | tool function missing", tool_name)
                except ImportError:
                    # Function tool loader not available, fall back to traditional tools
                    # Check for expected tool function
                    expected_func_name = tool_name
                    if hasattr(module, expected_func_name):
                        tool_function = getattr(module, expected_func_name)
                        if not callable(tool_function):
                            logger.warning("tool_name=<%s> | tool function exists but is not callable", tool_name)
                            continue

                        # Validate tool spec before registering
                        if not hasattr(module, "TOOL_SPEC"):
                            logger.warning("tool_name=<%s> | tool is missing TOOL_SPEC | skipping", tool_name)
                            continue

                        try:
                            self.validate_tool_spec(module.TOOL_SPEC)
                        except ValueError as e:
                            logger.warning("tool_name=<%s> | tool spec validation failed | %s", tool_name, e)
                            continue

                        tool_spec = module.TOOL_SPEC
                        tool = PythonAgentTool(tool_name, tool_spec, tool_function)
                        self.register_tool(tool)
                        successful_loads += 1

                    else:
                        logger.warning("tool_name=<%s> | tool function missing", tool_name)

        except Exception as e:
            logger.warning("tool_name=<%s> | failed to load tool | %s", tool_name, e)
            tool_import_errors[tool_name] = str(e)

    # Log summary
    logger.debug("tool_count=<%d>, success_count=<%d> | finished loading tools", total_tools, successful_loads)
    if tool_import_errors:
        for tool_name, error in tool_import_errors.items():
            logger.debug("tool_name=<%s> | import error | %s", tool_name, error)

load_tool_from_filepath(tool_name, tool_path)

DEPRECATED: Load a tool from a file path.

Parameters:

Name Type Description Default
tool_name str

Name of the tool.

required
tool_path str

Path to the tool file.

required

Raises:

Type Description
FileNotFoundError

If the tool file is not found.

ValueError

If the tool cannot be loaded.

Source code in strands/tools/registry.py
def load_tool_from_filepath(self, tool_name: str, tool_path: str) -> None:
    """DEPRECATED: Load a tool from a file path.

    Args:
        tool_name: Name of the tool.
        tool_path: Path to the tool file.

    Raises:
        FileNotFoundError: If the tool file is not found.
        ValueError: If the tool cannot be loaded.
    """
    warnings.warn(
        "load_tool_from_filepath is deprecated and will be removed in Strands SDK 2.0. "
        "`process_tools` automatically handles loading tools from a filepath.",
        DeprecationWarning,
        stacklevel=2,
    )

    from .loader import ToolLoader

    try:
        tool_path = expanduser(tool_path)
        if not os.path.exists(tool_path):
            raise FileNotFoundError(f"Tool file not found: {tool_path}")

        loaded_tools = ToolLoader.load_tools(tool_path, tool_name)
        for t in loaded_tools:
            t.mark_dynamic()
            # Because we're explicitly registering the tool we don't need an allowlist
            self.register_tool(t)
    except Exception as e:
        exception_str = str(e)
        logger.exception("tool_name=<%s> | failed to load tool", tool_name)
        raise ValueError(f"Failed to load tool {tool_name}: {exception_str}") from e

process_tools(tools)

Process tools list.

Process a list of tools that can contain local file path strings, module import path strings, imported modules, @tool decorated functions, or instances of AgentTool.

Parameters:

Name Type Description Default
tools List[Any]

List of tool specifications. Can be:

  1. Local file path to a module based tool: ./path/to/module/tool.py
  2. Module import path

    2.1. Path to a module based tool: strands_tools.file_read
    2.2. Path to a module with multiple AgentTool instances (@tool decorated): tests.fixtures.say_tool
    2.3. Path to a module and a specific function: tests.fixtures.say_tool:say

  3. A module for a module based tool
  4. Instances of AgentTool (@tool decorated functions)
  5. Dictionaries with name/path keys (deprecated)

required

Returns:

Type Description
List[str]

List of tool names that were processed.

Source code in strands/tools/registry.py
def process_tools(self, tools: List[Any]) -> List[str]:
    """Process tools list.

    Process list of tools that can contain local file path string, module import path string,
    imported modules, @tool decorated functions, or instances of AgentTool.

    Args:
        tools: List of tool specifications. Can be:

            1. Local file path to a module based tool: `./path/to/module/tool.py`
            2. Module import path

                2.1. Path to a module based tool: `strands_tools.file_read`
                2.2. Path to a module with multiple AgentTool instances (@tool decorated):
                    `tests.fixtures.say_tool`
                2.3. Path to a module and a specific function: `tests.fixtures.say_tool:say`

            3. A module for a module based tool
            4. Instances of AgentTool (@tool decorated functions)
            5. Dictionaries with name/path keys (deprecated)


    Returns:
        List of tool names that were processed.
    """
    tool_names = []

    def add_tool(tool: Any) -> None:
        try:
            # String based tool
            # Can be a file path, a module path, or a module path with a targeted function. Examples:
            # './path/to/tool.py'
            # 'my.module.tool'
            # 'my.module.tool:tool_name'
            if isinstance(tool, str):
                tools = load_tool_from_string(tool)
                for a_tool in tools:
                    a_tool.mark_dynamic()
                    self.register_tool(a_tool)
                    tool_names.append(a_tool.tool_name)

            # Dictionary with name and path
            elif isinstance(tool, dict) and "name" in tool and "path" in tool:
                tools = load_tool_from_string(tool["path"])

                tool_found = False
                for a_tool in tools:
                    if a_tool.tool_name == tool["name"]:
                        a_tool.mark_dynamic()
                        self.register_tool(a_tool)
                        tool_names.append(a_tool.tool_name)
                        tool_found = True

                if not tool_found:
                    raise ValueError(f'Tool "{tool["name"]}" not found in "{tool["path"]}"')

            # Dictionary with path only
            elif isinstance(tool, dict) and "path" in tool:
                tools = load_tool_from_string(tool["path"])

                for a_tool in tools:
                    a_tool.mark_dynamic()
                    self.register_tool(a_tool)
                    tool_names.append(a_tool.tool_name)

            # Imported Python module
            elif hasattr(tool, "__file__") and inspect.ismodule(tool):
                # Extract the tool name from the module name
                module_tool_name = tool.__name__.split(".")[-1]

                tools = load_tools_from_module(tool, module_tool_name)
                for a_tool in tools:
                    self.register_tool(a_tool)
                    tool_names.append(a_tool.tool_name)

            # AgentTool instances (which also covers @tool decorated functions)
            elif isinstance(tool, AgentTool):
                self.register_tool(tool)
                tool_names.append(tool.tool_name)

            # Nested iterable (list, tuple, etc.) - add each sub-tool
            elif isinstance(tool, Iterable) and not isinstance(tool, (str, bytes, bytearray)):
                for t in tool:
                    add_tool(t)

            # ToolProvider - load its tools and track the provider for cleanup
            elif isinstance(tool, ToolProvider):
                self._tool_providers.append(tool)
                tool.add_consumer(self._registry_id)

                async def get_tools() -> Sequence[AgentTool]:
                    return await tool.load_tools()

                provider_tools = run_async(get_tools)

                for provider_tool in provider_tools:
                    self.register_tool(provider_tool)
                    tool_names.append(provider_tool.tool_name)
            else:
                logger.warning("tool=<%s> | unrecognized tool specification", tool)

        except Exception as e:
            exception_str = str(e)
            logger.exception("tool_name=<%s> | failed to load tool", tool)
            raise ValueError(f"Failed to load tool {tool}: {exception_str}") from e

    for tool in tools:
        add_tool(tool)
    return tool_names
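
A sketch exercising a few of the accepted forms: AgentTool instances from @tool (assumed to be a top-level strands export), a nested list of such tools, and, in comments, the string forms that are resolved at load time:

from strands import tool  # assumption: top-level export of the @tool decorator
from strands.tools.registry import ToolRegistry

@tool
def greet(name: str) -> str:
    """Return a greeting for the given name."""
    return f"Hello, {name}!"

@tool
def farewell(name: str) -> str:
    """Return a farewell for the given name."""
    return f"Goodbye, {name}!"

registry = ToolRegistry()
names = registry.process_tools(
    [
        greet,                              # AgentTool instance (case 4)
        [farewell],                         # nested iterable, flattened recursively
        # "./tools/weather.py",             # local file path (case 1)
        # "strands_tools.file_read",        # module import path (case 2.1)
        # "my_pkg.my_tools:specific_tool",  # module plus function (case 2.3, hypothetical)
    ]
)
print(names)                                # ['greet', 'farewell']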

register_dynamic_tool(tool)

Register a tool dynamically for temporary use.

Parameters:

Name Type Description Default
tool AgentTool

The tool to register dynamically

required

Raises:

Type Description
ValueError

If a tool with this name already exists

Source code in strands/tools/registry.py
def register_dynamic_tool(self, tool: AgentTool) -> None:
    """Register a tool dynamically for temporary use.

    Args:
        tool: The tool to register dynamically

    Raises:
        ValueError: If a tool with this name already exists
    """
    if tool.tool_name in self.registry or tool.tool_name in self.dynamic_tools:
        raise ValueError(f"Tool '{tool.tool_name}' already exists")

    self.dynamic_tools[tool.tool_name] = tool
    logger.debug("Registered dynamic tool: %s", tool.tool_name)

register_tool(tool)

Register a tool function with the given name.

Parameters:

Name Type Description Default
tool AgentTool

The tool to register.

required
Source code in strands/tools/registry.py
def register_tool(self, tool: AgentTool) -> None:
    """Register a tool function with the given name.

    Args:
        tool: The tool to register.
    """
    logger.debug(
        "tool_name=<%s>, tool_type=<%s>, is_dynamic=<%s> | registering tool",
        tool.tool_name,
        tool.tool_type,
        tool.is_dynamic,
    )

    # Check duplicate tool name, throw on duplicate tool names except if hot_reloading is enabled
    if tool.tool_name in self.registry and not tool.supports_hot_reload:
        raise ValueError(
            f"Tool name '{tool.tool_name}' already exists. Cannot register tools with exact same name."
        )

    # Check for normalized name conflicts (- vs _)
    if self.registry.get(tool.tool_name) is None:
        normalized_name = tool.tool_name.replace("-", "_")

        matching_tools = [
            tool_name
            for (tool_name, tool) in self.registry.items()
            if tool_name.replace("-", "_") == normalized_name
        ]

        if matching_tools:
            raise ValueError(
                f"Tool name '{tool.tool_name}' already exists as '{matching_tools[0]}'."
                " Cannot add a duplicate tool which differs by a '-' or '_'"
            )

    # Register in main registry
    self.registry[tool.tool_name] = tool

    # Register in dynamic tools if applicable
    if tool.is_dynamic:
        self.dynamic_tools[tool.tool_name] = tool

        if not tool.supports_hot_reload:
            logger.debug("tool_name=<%s>, tool_type=<%s> | skipping hot reloading", tool.tool_name, tool.tool_type)
            return

        logger.debug(
            "tool_name=<%s>, tool_registry=<%s>, dynamic_tools=<%s> | tool registered",
            tool.tool_name,
            list(self.registry.keys()),
            list(self.dynamic_tools.keys()),
        )
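
A sketch of direct registration and of the name-collision rules visible above: an exact duplicate is rejected unless the tool supports hot reload, and names differing only by '-' versus '_' are rejected outright (the top-level @tool export is an assumption):

from strands import tool  # assumption: top-level export of the @tool decorator
from strands.tools.registry import ToolRegistry

@tool
def fetch_page(url: str) -> str:
    """Pretend to fetch a page."""
    return f"<html>{url}</html>"

registry = ToolRegistry()
registry.register_tool(fetch_page)

# Registering another tool named "fetch-page" would raise ValueError here,
# because it normalizes to the already-registered name "fetch_page".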

reload_tool(tool_name)

Reload a specific tool module.

Parameters:

Name Type Description Default
tool_name str

Name of the tool to reload.

required

Raises:

Type Description
FileNotFoundError

If the tool file cannot be found.

ImportError

If there are issues importing the tool module.

ValueError

If the tool specification is invalid or required components are missing.

Exception

For other errors during tool reloading.

Source code in strands/tools/registry.py
def reload_tool(self, tool_name: str) -> None:
    """Reload a specific tool module.

    Args:
        tool_name: Name of the tool to reload.

    Raises:
        FileNotFoundError: If the tool file cannot be found.
        ImportError: If there are issues importing the tool module.
        ValueError: If the tool specification is invalid or required components are missing.
        Exception: For other errors during tool reloading.
    """
    try:
        # Check for tool file
        logger.debug("tool_name=<%s> | searching directories for tool", tool_name)
        tools_dirs = self.get_tools_dirs()
        tool_path = None

        # Search for the tool file in all tool directories
        for tools_dir in tools_dirs:
            temp_path = tools_dir / f"{tool_name}.py"
            if temp_path.exists():
                tool_path = temp_path
                break

        if not tool_path:
            raise FileNotFoundError(f"No tool file found for: {tool_name}")

        logger.debug("tool_name=<%s> | reloading tool", tool_name)

        # Add tool directory to path temporarily
        tool_dir = str(tool_path.parent)
        sys.path.insert(0, tool_dir)
        try:
            # Load the module directly using spec
            spec = util.spec_from_file_location(tool_name, str(tool_path))
            if spec is None:
                raise ImportError(f"Could not load spec for {tool_name}")

            module = util.module_from_spec(spec)
            sys.modules[tool_name] = module

            if spec.loader is None:
                raise ImportError(f"Could not load {tool_name}")

            spec.loader.exec_module(module)

        finally:
            # Remove the temporary path
            sys.path.remove(tool_dir)

        # Look for function-based tools first
        try:
            function_tools = self._scan_module_for_tools(module)

            if function_tools:
                for function_tool in function_tools:
                    # Register the function-based tool
                    self.register_tool(function_tool)

                    # Update tool configuration if available
                    if self.tool_config is not None:
                        self._update_tool_config(self.tool_config, {"spec": function_tool.tool_spec})

                logger.debug("tool_name=<%s> | successfully reloaded function-based tool from module", tool_name)
                return
        except ImportError:
            logger.debug("function tool loader not available | falling back to traditional tools")

        # Fall back to traditional module-level tools
        if not hasattr(module, "TOOL_SPEC"):
            raise ValueError(
                f"Tool {tool_name} is missing TOOL_SPEC (neither at module level nor as a decorated function)"
            )

        expected_func_name = tool_name
        if not hasattr(module, expected_func_name):
            raise ValueError(f"Tool {tool_name} is missing {expected_func_name} function")

        tool_function = getattr(module, expected_func_name)
        if not callable(tool_function):
            raise ValueError(f"Tool {tool_name} function is not callable")

        # Validate tool spec
        self.validate_tool_spec(module.TOOL_SPEC)

        new_tool = PythonAgentTool(tool_name, module.TOOL_SPEC, tool_function)

        # Register the tool
        self.register_tool(new_tool)

        # Update tool configuration if available
        if self.tool_config is not None:
            self._update_tool_config(self.tool_config, {"spec": module.TOOL_SPEC})
        logger.debug("tool_name=<%s> | successfully reloaded tool", tool_name)

    except Exception:
        logger.exception("tool_name=<%s> | failed to reload tool", tool_name)
        raise
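
Example (illustrative): a minimal hot-reload sketch. It assumes the registry class in strands/tools/registry.py is named ToolRegistry and that a hypothetical file-based tool called word_count lives in one of the directories returned by get_tools_dirs(); both names are assumptions, not part of the documentation above.

from strands.tools.registry import ToolRegistry  # class name assumed; module path shown above

registry = ToolRegistry()

# After editing word_count.py on disk, re-import and re-register it.
# "word_count" is a hypothetical tool; reload_tool() searches for word_count.py
# in the directories returned by registry.get_tools_dirs().
try:
    registry.reload_tool("word_count")
except FileNotFoundError:
    print("word_count.py was not found in any tool directory")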

replace(new_tool)

Replace an existing tool with a new implementation.

This performs a swap of the tool implementation in the registry. The replacement takes effect on the next agent invocation.

Parameters:

Name       Type        Description                                                              Default
new_tool   AgentTool   New tool implementation. Its name must match the tool being replaced.    required

Raises:

Type         Description
ValueError   If the tool doesn't exist.

Source code in strands/tools/registry.py
def replace(self, new_tool: AgentTool) -> None:
    """Replace an existing tool with a new implementation.

    This performs a swap of the tool implementation in the registry.
    The replacement takes effect on the next agent invocation.

    Args:
        new_tool: New tool implementation. Its name must match the tool being replaced.

    Raises:
        ValueError: If the tool doesn't exist.
    """
    tool_name = new_tool.tool_name

    if tool_name not in self.registry:
        raise ValueError(f"Cannot replace tool '{tool_name}' - tool does not exist")

    # Update main registry
    self.registry[tool_name] = new_tool

    # Update dynamic_tools to match new tool's dynamic status
    if new_tool.is_dynamic:
        self.dynamic_tools[tool_name] = new_tool
    elif tool_name in self.dynamic_tools:
        del self.dynamic_tools[tool_name]
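
Example (illustrative): swapping a tool implementation at runtime. The sketch assumes the ToolRegistry class name and the public @tool decorator import; the "greet" tool and both implementations are hypothetical.

from strands import tool  # assumed public import for the @tool decorator
from strands.tools.registry import ToolRegistry  # class name assumed

registry = ToolRegistry()

@tool(name="greet", description="Greet the user by name.")
def greet_v1(name: str) -> str:
    return f"Hi {name}."

registry.register_tool(greet_v1)

@tool(name="greet", description="Greet the user by name.")
def greet_v2(name: str) -> str:
    return f"Hello, {name}! Welcome back."

# The tool names match, so the registry entry for "greet" is swapped in place;
# replace() raises ValueError if no tool with that name is registered.
registry.replace(greet_v2)
# The new implementation is picked up on the next agent invocation.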

validate_tool_spec(tool_spec)

Validate tool specification against required schema.

Parameters:

Name        Type       Description                        Default
tool_spec   ToolSpec   Tool specification to validate.    required

Raises:

Type         Description
ValueError   If the specification is invalid.

Source code in strands/tools/registry.py
def validate_tool_spec(self, tool_spec: ToolSpec) -> None:
    """Validate tool specification against required schema.

    Args:
        tool_spec: Tool specification to validate.

    Raises:
        ValueError: If the specification is invalid.
    """
    required_fields = ["name", "description"]
    missing_fields = [field for field in required_fields if field not in tool_spec]
    if missing_fields:
        raise ValueError(f"Missing required fields in tool spec: {', '.join(missing_fields)}")

    if "json" not in tool_spec["inputSchema"]:
        # Convert direct schema to proper format
        json_schema = normalize_schema(tool_spec["inputSchema"])
        tool_spec["inputSchema"] = {"json": json_schema}
        return

    # Validate json schema fields
    json_schema = tool_spec["inputSchema"]["json"]

    # Ensure schema has required fields
    if "type" not in json_schema:
        json_schema["type"] = "object"
    if "properties" not in json_schema:
        json_schema["properties"] = {}
    if "required" not in json_schema:
        json_schema["required"] = []

    # Validate property definitions
    for prop_name, prop_def in json_schema.get("properties", {}).items():
        if not isinstance(prop_def, dict):
            json_schema["properties"][prop_name] = {
                "type": "string",
                "description": f"Property {prop_name}",
            }
            continue

        # It is expected that type and description are already included in referenced $def.
        if "$ref" in prop_def:
            continue

        has_composition = any(kw in prop_def for kw in _COMPOSITION_KEYWORDS)
        if "type" not in prop_def and not has_composition:
            prop_def["type"] = "string"
        if "description" not in prop_def:
            prop_def["description"] = f"Property {prop_name}"
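
Example (illustrative): validating and normalizing a minimal spec. The dict below is a hypothetical tool spec and the ToolRegistry class name is assumed, as in the sketches above. Note that validate_tool_spec mutates the spec in place.

from strands.tools.registry import ToolRegistry  # class name assumed

registry = ToolRegistry()

spec = {
    "name": "word_count",
    "description": "Count the words in a piece of text.",
    # A bare JSON schema (no "json" wrapper) is accepted and normalized.
    "inputSchema": {
        "type": "object",
        "properties": {"text": {"type": "string"}},
    },
}

registry.validate_tool_spec(spec)
# The bare schema is now wrapped in place as spec["inputSchema"] = {"json": ...}.
# A spec missing "name" or "description" would raise ValueError instead.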

noop_tool()

No-op tool to satisfy tool spec requirement when tool messages are present.

Some model providers (e.g., Bedrock) will return an error response if tool uses and tool results are present in messages without any tool specs configured. Consequently, if the summarization agent has no registered tools, summarization will fail. As a workaround, we register the no-op tool.

Source code in strands/tools/_tool_helpers.py
@tool(name="noop", description="This is a fake tool that MUST be completely ignored.")
def noop_tool() -> None:
    """No-op tool to satisfy tool spec requirement when tool messages are present.

    Some model providers (e.g., Bedrock) will return an error response if tool uses and tool results are present in
    messages without any tool specs configured. Consequently, if the summarization agent has no registered tools,
    summarization will fail. As a workaround, we register the no-op tool.
    """
    pass
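
Example (illustrative): the workaround in practice. The sketch registers noop_tool on a summarization agent so that providers such as Bedrock accept a message history that already contains tool uses and tool results. Importing from the private strands.tools._tool_helpers module is an assumption based on the source location shown above.

from strands import Agent
from strands.tools._tool_helpers import noop_tool  # private module; import path assumed

# The agent never calls noop_tool; registering it only ensures a tool spec is
# present in the request when the history contains tool messages.
summarization_agent = Agent(tools=[noop_tool])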