Description
Problem Statement
In a flow where an initial agent acts as a router, classifies the user's message, and hands off to the appropriate model, there are problems passing through message context / reasoning content.
For example, consider a prompt router that routes between two agents backed by openai.gpt-oss-20b-1:0 and global.anthropic.claude-haiku-4-5-20251001-v1:0, depending on the user message. This flow currently fails intermittently with:
An error occurred (ValidationException) when calling the ConverseStream operation: The model returned the following errors: messages.1.content.0.thinking.signature: Field required
A search for the error surfaces this comment, which states that Anthropic models can only process reasoning content produced by Anthropic models.
Within the Strands SDK there is already a block that removes reasoning content for DeepSeek models:
sdk-python/src/strands/models/bedrock.py
Lines 318 to 322 in 7cd10b9
```python
# DeepSeek models have issues with reasoningContent
# TODO: Replace with systematic model configuration registry (https://github.com/strands-agents/sdk-python/issues/780)
if "deepseek" in self.config["model_id"].lower() and "reasoningContent" in content_block:
    dropped_deepseek_reasoning_content = True
    continue
```
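The DeepSeek-specific check above could be generalized into a per-model opt-in rather than a hard-coded model-id match. A minimal sketch of what that might look like, assuming a hypothetical `drop_reasoning` key in the model config (this key does not exist in the SDK today):

```python
def filter_content_blocks(content_blocks, config):
    """Drop reasoningContent blocks when the model config opts in.

    The "drop_reasoning" config key is illustrative, not a real
    Strands SDK option; it stands in for the DeepSeek-specific
    model-id check in bedrock.py.
    """
    filtered = []
    for block in content_blocks:
        if config.get("drop_reasoning") and "reasoningContent" in block:
            # Skip reasoning produced by a different model family
            continue
        filtered.append(block)
    return filtered


config = {
    "model_id": "global.anthropic.claude-haiku-4-5-20251001-v1:0",
    "drop_reasoning": True,
}
blocks = [
    {"text": "hello"},
    {"reasoningContent": {"reasoningText": {"text": "..."}}},
]
print(filter_content_blocks(blocks, config))
```

This keeps the existing behavior available for any model family that rejects foreign reasoning blocks, instead of special-casing each one by substring match.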
Proposed Solution
No response
Use Case
This should be configurable per model, even if it is something as simple as:

```python
haiku_45 = BedrockModel(
    model_id="global.anthropic.claude-haiku-4-5-20251001-v1:0",
    region_name=os.environ.get("AWS_REGION"),
    temperature=0.0,
    max_tokens=2048,
+   drop_reasoning=True
)
```
Alternative Solutions
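Until such a flag exists, one workaround is to strip `reasoningContent` blocks from the conversation history before handing off to the Anthropic-backed agent. A minimal helper in plain Python (not a Strands SDK API), operating on Bedrock Converse-style message dicts:

```python
def strip_reasoning(messages):
    """Return a copy of Converse-style messages with reasoningContent
    blocks removed from each message's content list.

    This is a plain-Python workaround sketch, not part of the Strands
    SDK; apply it to the history before invoking the target agent.
    """
    cleaned = []
    for msg in messages:
        content = [b for b in msg.get("content", []) if "reasoningContent" not in b]
        cleaned.append({**msg, "content": content})
    return cleaned


messages = [
    {
        "role": "assistant",
        "content": [
            {"reasoningContent": {"reasoningText": {"text": "thinking..."}}},
            {"text": "routing to haiku"},
        ],
    },
]
print(strip_reasoning(messages))
```

Filtering at handoff time avoids the `messages.1.content.0.thinking.signature: Field required` validation error, since the Anthropic model never sees reasoning blocks it cannot verify.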
No response
Additional Context
No response