
[FEATURE] Improve handling for models not supporting reasoning content #1053

@Luke-Shepp

Description

Problem Statement

In a flow where an initial agent acts as a router, classifying the user's message and handing off to the appropriate model, there are problems passing through message context / reasoning content.

For example, consider a prompt router that can hand off to one of two agents, backed by openai.gpt-oss-20b-1:0 and global.anthropic.claude-haiku-4-5-20251001-v1:0 respectively, depending on the user message. This flow currently fails intermittently with:

An error occurred (ValidationException) when calling the ConverseStream operation: The model returned the following errors: messages.1.content.0.thinking.signature: Field required

A search for the error surfaces this comment stating that Anthropic models can only process reasoning content produced by Anthropic models.

Within the Strands SDK there is already a block that removes reasoning content for DeepSeek models:

# DeepSeek models have issues with reasoningContent
# TODO: Replace with systematic model configuration registry (https://github.com/strands-agents/sdk-python/issues/780)
if "deepseek" in self.config["model_id"].lower() and "reasoningContent" in content_block:
dropped_deepseek_reasoning_content = True
continue
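The DeepSeek-specific check above could be generalized rather than special-cased per vendor. A minimal sketch of the idea, assuming Bedrock Converse-style message dicts (the function name and shape are illustrative, not the actual Strands SDK API):

```python
# Hypothetical sketch: a per-model "drop reasoning" filter instead of a
# hard-coded DeepSeek check. Names here are illustrative only.

def filter_reasoning_content(messages, drop_reasoning):
    """Return messages with reasoningContent blocks removed when requested.

    Messages are assumed to follow the Bedrock Converse shape:
    {"role": ..., "content": [{...}, ...]}.
    """
    if not drop_reasoning:
        return messages
    filtered = []
    for message in messages:
        # Keep every content block except reasoningContent blocks.
        content = [
            block
            for block in message.get("content", [])
            if "reasoningContent" not in block
        ]
        filtered.append({**message, "content": content})
    return filtered
```

With a filter like this, the provider could drop reasoning blocks just before building the ConverseStream request, instead of matching on substrings of the model ID inline.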

Proposed Solution

No response

Use Case

This should be configurable per model, even if it is something as simple as:

haiku_45 = BedrockModel(
    model_id="global.anthropic.claude-haiku-4-5-20251001-v1:0",
    region_name=os.environ.get("AWS_REGION"),
    temperature=0.0,
    max_tokens=2048,
    drop_reasoning=True,  # proposed new parameter
)
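Longer term, the explicit flag could be backed by the "systematic model configuration registry" mentioned in the TODO of the SDK snippet above (issue #780), so that known-incompatible models get the right default automatically. A hedged sketch of that idea (registry contents and names are hypothetical):

```python
# Illustrative sketch of a model-quirk registry; an explicit per-model
# override always wins over the registry default.

MODEL_QUIRKS = {
    # DeepSeek models have issues with reasoningContent (per the SDK comment).
    "deepseek": {"drop_reasoning": True},
    # Other entries could cover e.g. cross-vendor handoffs into Anthropic
    # models, which reject reasoning blocks lacking a thinking signature.
}

def should_drop_reasoning(model_id, override=None):
    """Decide whether to strip reasoningContent for a given model ID."""
    if override is not None:
        return override  # explicit BedrockModel(drop_reasoning=...) wins
    model_id = model_id.lower()
    return any(
        key in model_id and quirks.get("drop_reasoning", False)
        for key, quirks in MODEL_QUIRKS.items()
    )
```

This keeps the existing substring check working while giving users a single, documented escape hatch for models the registry does not yet know about.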

Alternatives Solutions

No response

Additional Context

No response


Labels: enhancement (New feature or request)
