Describe the bug
When I try to use Orchestration (following the sample code from the repo), it always stops after the first agent in the pipeline. I even tried running the sample code exactly as provided, and it does the same thing.
The only difference in my setup is that I'm using Gemini 2.5 Flash (or Flash Lite) as the model. After digging into it, it looks like Gemini's `generateContent` API expects the last message in a single-turn request to have either the `"user"` role or no role at all. Orchestration, however, takes the previous agent's response (`role=model`) and sends it straight to the next agent. That works fine for OpenAI and Anthropic models, but Gemini rejects it with a 400 error.
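To illustrate the shape of the failing request, here is a hypothetical, reduced example (not the actual orchestration internals): the history handed to the second agent ends with the first agent's assistant/model message, which is exactly what Gemini rejects.

```csharp
using Microsoft.SemanticKernel.ChatCompletion;

// Hypothetical, reduced example of what the second agent effectively receives.
var history = new ChatHistory();
history.AddUserMessage("Summarize the customer feedback.");              // original task
history.AddAssistantMessage("Summary: customers want faster support.");  // agent 1 output (role=model)

// Sending `history` as-is to Gemini's generateContent fails with
// "Please ensure that single turn requests end with a user role or the role field is empty",
// because the last message carries the assistant/model role.
```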
To Reproduce
- Clone the Semantic Kernel repo and open the Orchestration sample.
- Swap the model out for `gemini-2.5-flash` or `gemini-2.5-flash-lite` (see the sketch after this list).
- Run any workflow with multiple agents.
- It'll stop after the first agent and throw the 400 error shown below.
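For reference, this is roughly how I wire up Gemini in the sample. It's only a minimal sketch assuming the Google connector package (Microsoft.SemanticKernel.Connectors.Google) is installed; the exact builder extension and its signature may differ between connector versions, and `GEMINI_API_KEY` is just a placeholder for wherever the key is stored.

```csharp
using System;
using Microsoft.SemanticKernel;

var builder = Kernel.CreateBuilder();

// Replace the OpenAI/Azure registration from the sample with Gemini.
// The Google connector is still pre-release, so this call may require
// enabling the experimental/alpha package.
builder.AddGoogleAIGeminiChatCompletion(
    modelId: "gemini-2.5-flash", // or "gemini-2.5-flash-lite"
    apiKey: Environment.GetEnvironmentVariable("GEMINI_API_KEY")!);

Kernel kernel = builder.Build();
```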
Expected behavior
The orchestration should move on to the next agent and complete the full sequence. Ideally, SK should handle Gemini’s role requirements automatically so it doesn’t break the chain.
HTTP 400 https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-lite:generateContent
ResponseBody: {
"error": {
"code": 400,
"message": "Please ensure that single turn requests end with a user role or the role field is empty.",
"status": "INVALID_ARGUMENT"
}
}
Platform
- Language: C#
- Source: Main branch (latest)
- AI model: Gemini 2.5 Flash / Flash Lite
- IDE: Visual Studio 2022
- OS: Windows 11
Additional context
This seems to be a mismatch between how SK chains agent outputs and how Gemini validates message roles. The only workaround I've found so far is to either manually change the last message's role from `"model"` to `"user"`, or to insert a dummy `"user"` message after the `"model"` one before sending the request to Gemini.
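For now I'm patching the history right before each Gemini call. This is just a minimal sketch under my own assumptions (the helper name and the dummy-message text are mine, not anything from SK); it appends a user turn whenever the history would otherwise end on the assistant/model role:

```csharp
using Microsoft.SemanticKernel.ChatCompletion;

// Hypothetical helper: make sure the last message Gemini sees has the "user" role.
static ChatHistory NormalizeForGemini(ChatHistory history)
{
    var normalized = new ChatHistory(history);

    // Gemini's generateContent rejects requests whose last message has the
    // model/assistant role, so append a dummy user turn in that case.
    if (normalized.Count > 0 && normalized[^1].Role == AuthorRole.Assistant)
    {
        normalized.AddUserMessage("Continue with the task above.");
    }

    return normalized;
}
```

I call this on the history each downstream agent receives before it hits the Gemini chat completion service. It works, but it's clearly a hack, which is why I'd prefer the connector or the orchestration to handle it.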
It would be great if the orchestration or the Gemini connector could automatically normalize this so we don’t have to patch message roles manually.