
Conversation

@lzjjeff lzjjeff commented Oct 21, 2025

I found that when using models like Gemini, the call_id field on function_call items is empty. This makes it difficult to accurately match each function_call to its function_call_output during parallel tool calling.
To address this, I added a check in src/agents/_run_impl.py that fills in missing call_id values. The call_id format follows the GPT-series models' convention: the prefix call_ followed by a 22-character random string.
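The fix described above could be sketched like this (a minimal sketch only; the item shape and helper names here are illustrative assumptions, not the actual code in src/agents/_run_impl.py):

```python
import secrets
import string

_ALPHABET = string.ascii_letters + string.digits

def generate_call_id() -> str:
    """Generate a GPT-style call id: 'call_' plus 22 random alphanumeric chars."""
    return "call_" + "".join(secrets.choice(_ALPHABET) for _ in range(22))

def ensure_call_id(item: dict) -> dict:
    """Fill in a missing or empty call_id on a function_call item
    (hypothetical dict shape, for illustration only)."""
    if item.get("type") == "function_call" and not item.get("call_id"):
        item["call_id"] = generate_call_id()
    return item
```

Using `secrets` rather than `random` keeps the generated ids collision-resistant, which matters when several parallel tool calls get ids filled in within the same turn.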

seratch (Member) commented Oct 21, 2025

How do you use the model? We prefer putting this kind of workaround on the LiteLLM integration side: https://openai.github.io/openai-agents-python/models/litellm/

@seratch seratch marked this pull request as draft October 21, 2025 08:00
lzjjeff (Author) commented Oct 21, 2025

> How do you use the model? We prefer putting this kind of workaround on the LiteLLM integration side: https://openai.github.io/openai-agents-python/models/litellm/

Thank you for your review. I haven't used LiteLLM; I'm using AsyncAzureOpenAI to build the model.
Will you be integrating this feature into LiteLLM in the next release?

seratch commented Oct 21, 2025

If you use either OpenAI or the Azure OpenAI Service, you can use the OpenAI API client directly. For other models, however, we don't recommend it. Quite basic chat examples should work, but once you start using features like tool calling, structured outputs, and so on, the details can vary. For this reason, we generally recommend using LiteLLM to fill the gap. Please note that, even with LiteLLM, there can be some differences due to models' capabilities and requirements.

Going back to the topic here: if our LiteLLM adapter can do something extra for your use case, we may be able to add such custom logic there.

@seratch seratch changed the title from "fix: add call_id to function_call item if it doesn't have one." to "fix: add call_id to function_call item if it doesn't have one (Gemini use case)" Oct 21, 2025
lzjjeff commented Oct 21, 2025

> If you use either OpenAI or the Azure OpenAI Service, you can use the OpenAI API client directly. For other models, however, we don't recommend it. Quite basic chat examples should work, but once you start using features like tool calling, structured outputs, and so on, the details can vary. For this reason, we generally recommend using LiteLLM to fill the gap. Please note that, even with LiteLLM, there can be some differences due to models' capabilities and requirements.
>
> Going back to the topic here: if our LiteLLM adapter can do something extra for your use case, we may be able to add such custom logic there.

Thanks for the clarification! That makes sense. In my case, the missing call_id issue mainly occurs when using non-OpenAI models (like Gemini) via AsyncAzureOpenAI.

I agree that having this logic on the LiteLLM integration side would be cleaner. If LiteLLM plans to standardize call_id handling (especially for third-party model adapters), I’d be happy to switch.
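For context on why consistent call_id handling matters here, a minimal sketch of pairing outputs back to parallel calls (the dict shapes are illustrative assumptions, not the SDK's actual types); with empty ids, all parallel calls collide on the same dictionary key and the pairing silently breaks:

```python
def match_outputs(calls: list[dict], outputs: list[dict]) -> dict[str, tuple[dict, dict]]:
    """Pair each function_call with its function_call_output by call_id
    (hypothetical item shapes, for illustration only)."""
    by_id = {c["call_id"]: c for c in calls}  # empty ids all map to the key ""
    matched = {}
    for out in outputs:
        call = by_id.get(out["call_id"])
        if call is None:
            raise ValueError(f"no function_call for call_id {out['call_id']!r}")
        matched[out["call_id"]] = (call, out)
    return matched
```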
