Conversation

@ishanrajsingh commented Oct 22, 2025

This fix addresses an issue in the LiteLlm integration where setting include_contents='none' causes the content payload sent to the LLM provider to be empty, resulting in a BadRequestError. The patch adds a minimal fallback user message in the generate_content_async method when no messages are included, ensuring a non-empty content array is always sent. This prevents errors from providers that require non-empty input and improves compatibility when using LiteLlm with the Agent Development Kit.
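
For context, here is a minimal sketch of the idea (the helper name, fallback wording, and exact call site below are illustrative assumptions, not the literal patch):

```python
from typing import Any


def _ensure_non_empty_messages(
    messages: list[dict[str, Any]],
) -> list[dict[str, Any]]:
  """Guards against an empty payload when include_contents='none'.

  Some providers reject a chat-completion call whose message list is empty
  (or contains only a system message), which is what happens when
  include_contents='none' strips the conversation history.
  """
  if not any(msg.get("role") == "user" for msg in messages):
    # Minimal fallback user message so the content array is never empty.
    messages.append({
        "role": "user",
        "content": "Handle the request as specified in the system instruction.",
    })
  return messages
```

In the patch itself, this guard runs inside the generate_content_async method, before the messages are handed off to litellm's completion call.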

GitHub Issue:
This PR addresses the bug reported in issue #3242, "LiteLlm + include_contents='none' results in empty content sent to the provider -> Error": #3242

Testing Plan:

I verified the fix by running a scenario where an LlmAgent uses LiteLlm with include_contents='none'.

The test confirms that the LLM request no longer sends empty content, preventing the BadRequestError from the LLM provider.

Additionally, I ran existing LiteLlm unit tests to ensure no regressions occur.

For thoroughness, a new test can be added that simulates include_contents='none' and confirms the fallback content is inserted; a sketch follows below.
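
Building on the illustrative _ensure_non_empty_messages helper sketched in the PR description above (the helper and both test names are hypothetical), such a test could look roughly like this:

```python
def test_fallback_inserted_when_history_is_stripped():
  # Simulates include_contents='none': only the system instruction survives.
  messages = [{"role": "system", "content": "You are a helpful agent."}]
  result = _ensure_non_empty_messages(messages)
  assert any(msg["role"] == "user" for msg in result)


def test_existing_user_message_is_left_untouched():
  messages = [{"role": "user", "content": "hello"}]
  assert _ensure_non_empty_messages(messages) == [
      {"role": "user", "content": "hello"}
  ]
```

An end-to-end variant could instead stub litellm's completion call and assert that the captured messages list is never empty when an LlmAgent runs with include_contents='none'.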

@gemini-code-assist

Summary of Changes

Hello @ishanrajsingh, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a BadRequestError within the LiteLlm integration by preventing empty content payloads from being sent to Large Language Model providers. The fix ensures that even when include_contents='none' is specified, a default user message is included, thereby maintaining compatibility with LLM APIs that require non-empty input and improving the overall robustness of the Agent Development Kit's LiteLlm component.

Highlights

  • Fixes Empty Content Error: Addresses a BadRequestError in LiteLlm that occurred when include_contents='none' resulted in an empty content payload being sent to LLM providers.
  • Introduces Fallback User Message: A minimal fallback user message is now added in the generate_content_async method if no messages are included, ensuring the content array is never empty.
  • Improves Compatibility: This change prevents errors from providers that require non-empty input and enhances compatibility when using LiteLlm with the Agent Development Kit.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review: /gemini review performs a code review for the current pull request in its current state.
  • Pull Request Summary: /gemini summary provides a summary of the current pull request in its current state.
  • Comment: @gemini-code-assist responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help: /gemini help displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the root of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@adk-bot (Collaborator) commented Oct 22, 2025

Response from ADK Triaging Agent

Hello @ishanrajsingh, thank you for creating this PR!

To help reviewers review your PR more efficiently, could you please provide the following:

  • a GitHub issue associated with this PR. If there is no existing issue, could you please create one?
  • a testing plan for your changes.
  • logs or screenshots after the fix is applied.

Thanks!

@adk-bot added the models [Component] label (Issues related to model support) on Oct 22, 2025
@gemini-code-assist bot left a comment

Code Review

This pull request addresses an issue where the LiteLlm integration would error when include_contents='none' due to an empty content payload. The fix adds a fallback user message to ensure a non-empty content array is always sent to the LLM provider. I have identified a potential issue with the placement of the fallback logic that could lead to unexpected behavior.

@ishanrajsingh (Author) left a comment

handle empty file content


@ishanrajsingh (Author)

/gemini review

@gemini-code-assist bot left a comment

Code Review

This pull request addresses a critical issue where the LiteLlm integration fails when include_contents='none' due to an empty content payload. The fix introduces a fallback user message to ensure a non-empty content array is always sent to the LLM provider. I have added a review comment to suggest a minor improvement to the fallback message.

Commit: …messages empty due to include_contents='none'
@ishanrajsingh (Author)

/gemini review

@gemini-code-assist bot left a comment

Code Review

This pull request addresses a critical issue where LiteLlm integration with include_contents='none' results in an empty content payload, causing a BadRequestError. The fix introduces a fallback user message when no messages are included, ensuring a non-empty content array is always sent. This review focuses on the correctness and maintainability of the implemented solution, ensuring it effectively prevents the error and adheres to best practices.

ishanrajsingh and others added 2 commits October 22, 2025 22:08
@GWeale GWeale self-requested a review October 23, 2025 18:07
