Description
When using gpt-image-1 through AzureOpenAITextToImageService, two categories of problems surface:
- Model vs deployment handling
- Observed: Using an Azure OpenAI deployment name other than “gpt-image-1” triggers HTTP 400 from the image API, consistent with the request body passing the deployment name as the model-id.
- Expected: For OpenAI's non-Azure endpoint, the model should be the literal "gpt-image-1". For Azure OpenAI, the SDK should target the Azure deployment without passing the deployment name as the model-id in the OpenAI-style JSON payload. In other words, avoid sending:
  { "model": "my-gpt-image-deployment", "prompt": "..." }
  and instead either:
  - OpenAI (non-Azure): send "model": "gpt-image-1"
  - Azure: use the Azure.AI.OpenAI client with the correct deployment routing, not a model-id string in the JSON payload.
  (A client-level sketch follows this list.)
- Missing gpt-image-1 execution settings
- OpenAITextToImageExecutionSettings lacks:
- quality: “low” | “medium” | “high”
- output_format: “png” | “jpeg” | “webp”
- Expected: gpt-image-1 supports these parameters; SK execution settings should expose them and map them to the underlying request.
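To make the expected behavior concrete, here is a minimal sketch of the distinction at the underlying client level, written against the Azure.AI.OpenAI v2 and OpenAI .NET clients; the endpoint, API key, and deployment name are placeholders, and the connector's actual plumbing may differ:

```csharp
using System.ClientModel;
using Azure.AI.OpenAI;
using OpenAI.Images;

// Azure OpenAI: route by deployment name; no hand-written "model" field in the JSON body.
AzureOpenAIClient azureClient = new(
    new Uri("https://<resource>.openai.azure.com/"),
    new ApiKeyCredential("<azure-api-key>"));
ImageClient azureImages = azureClient.GetImageClient("my-gpt-image-deployment");
GeneratedImage azureImage = await azureImages.GenerateImageAsync("A lighthouse at dusk");

// OpenAI (non-Azure): the model id is the literal "gpt-image-1".
ImageClient openAiImages = new("gpt-image-1", new ApiKeyCredential("<openai-api-key>"));
GeneratedImage openAiImage = await openAiImages.GenerateImageAsync("A lighthouse at dusk");
```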
Repro steps
- Configure SK with AzureOpenAITextToImageService and a valid Azure OpenAI deployment for gpt-image-1 (e.g., “my-gpt-image-deployment”).
- Call GetImageContentsAsync and inspect the outbound payload or capture the exception (a minimal repro sketch follows these steps).
- Actual: HTTP 400 invalid_request due to model-id mismatch when the deployment name is used as the model in the payload.
- Attempt to set execution settings for quality=“medium” and output_format=“jpeg”; these aren’t supported by OpenAITextToImageExecutionSettings in current SK releases.
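A minimal repro sketch, assuming the AddAzureOpenAITextToImage builder extension from the Azure connector; the endpoint, key, and deployment name are placeholders:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.TextToImage;

#pragma warning disable SKEXP0001 // text-to-image abstractions are experimental in current SK releases

var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAITextToImage(
        deploymentName: "my-gpt-image-deployment",
        endpoint: "https://<resource>.openai.azure.com/",
        apiKey: "<azure-api-key>")
    .Build();

var service = kernel.GetRequiredService<ITextToImageService>();

// Steps 2-3: fails today with HTTP 400 when the deployment name is forwarded as the model id.
// Step 4: quality "medium" / output_format "jpeg" cannot be expressed via
// OpenAITextToImageExecutionSettings, so no execution settings are passed here.
var images = await service.GetImageContentsAsync(new TextContent("A watercolor skyline at dawn"));
```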
Actual result
- 400 error when using non-default deployment names, due to the deployment name being passed as a model-id in the request.
- No way to specify gpt-image-1 parameters like quality and output_format via SK execution settings.
Expected result
- Azure connector uses Azure.AI.OpenAI v2 client routing via deployment, not a model-id field in the body.
- Execution settings include gpt-image-1 parameters and are correctly mapped:
- quality: low | medium | high
- output_format: png | jpeg | webp
- Requests succeed for any valid Azure deployment name bound to gpt-image-1.
- OpenAI (non-Azure) connector sends "model": "gpt-image-1" (see the registration sketch after this list).
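For the last point, a registration sketch for the non-Azure path; the parameter names are assumed from the OpenAI connector's builder extension and the key is a placeholder:

```csharp
using Microsoft.SemanticKernel;

// OpenAI (non-Azure): the connector should emit the literal model id in the request body.
var kernel = Kernel.CreateBuilder()
    .AddOpenAITextToImage(modelId: "gpt-image-1", apiKey: "<openai-api-key>")
    .Build();
```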
Environment
- Semantic Kernel: 1.66.0
- Connector: Microsoft.SemanticKernel.Connectors.AzureOpenAI.AzureOpenAITextToImageService
- Runtime: .NET
- Azure OpenAI: gpt-image-1 deployment
- OS/IDE: Windows 11 / Visual Studio 2022
Impact
- Blocks proper use of Azure OpenAI deployments for gpt-image-1 when deployment names differ from the literal model string.
- Prevents callers from using supported quality/output_format options, limiting functionality and cost/perf tuning.
Proposed changes
- Model/deployment handling
- Azure: Use the Azure.AI.OpenAI OpenAIClient.GetImageGenerationsAsync API (or its current equivalent) with deployment routing, avoiding putting the deployment name into "model" in the JSON payload.
- OpenAI (non-Azure): Set "model": "gpt-image-1" explicitly.
- Execution settings additions
- Extend OpenAITextToImageExecutionSettings to include:
```csharp
public sealed class OpenAITextToImageExecutionSettings
{
    // New
    public string? OutputFormat { get; set; } // "png" | "jpeg" | "webp"

    // Fix GetGeneratedImageQuality to support "low" and "medium" for gpt-image-1
}
```
- Map these properties to the underlying request body for gpt-image-1:
{ "model": "gpt-image-1", "prompt": "...", "quality": "medium", "output_format": "jpeg", "size": "1024x1024" }
- Maintain backwards compatibility: default quality to “high” and output_format to “png” if not specified.
- Tests
- Azure: Valid deployment with a non-"gpt-image-1" name should succeed; no "model" mismatch in the payload (a payload-capture test sketch follows this list).
- OpenAI: Requests include "model": "gpt-image-1".
- Execution settings: When set, the service produces images with requested quality and format; serialization verified.
- Negative tests for invalid values produce clear, actionable errors.
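A sketch of the payload-capture style of test (xUnit); the HttpClient-accepting builder overload and the canned response body are assumptions and may need adjusting to the connector's actual parsing:

```csharp
using System.Net;
using System.Text.Json;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.TextToImage;
using Xunit;

#pragma warning disable SKEXP0001 // text-to-image abstractions are experimental

// Captures the outbound request body so the test can assert on the JSON the connector sends.
internal sealed class CapturingHandler : DelegatingHandler
{
    public string? LastRequestBody { get; private set; }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        LastRequestBody = request.Content is null
            ? null
            : await request.Content.ReadAsStringAsync(cancellationToken);

        // Canned "success" payload so the service call completes; adjust to the expected schema.
        return new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent("{\"created\":0,\"data\":[{\"b64_json\":\"\"}]}")
        };
    }
}

public class AzureTextToImagePayloadTests
{
    [Fact]
    public async Task DeploymentNameIsNotSentAsModelId()
    {
        using var handler = new CapturingHandler { InnerHandler = new HttpClientHandler() };
        using var httpClient = new HttpClient(handler);

        var kernel = Kernel.CreateBuilder()
            .AddAzureOpenAITextToImage(
                deploymentName: "my-gpt-image-deployment",
                endpoint: "https://contoso.openai.azure.com/",
                apiKey: "test-key",
                httpClient: httpClient) // assumes the overload that accepts an HttpClient
            .Build();

        var service = kernel.GetRequiredService<ITextToImageService>();
        await service.GetImageContentsAsync(new TextContent("a test prompt"));

        using var payload = JsonDocument.Parse(handler.LastRequestBody!);
        Assert.False(
            payload.RootElement.TryGetProperty("model", out var model)
            && model.GetString() == "my-gpt-image-deployment");
    }
}
```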
Acceptance criteria
- Azure deployments work with arbitrary deployment names mapped to gpt-image-1 without 400/404 caused by model-id misuse.
- OpenAITextToImageExecutionSettings exposes quality/output_format and these are correctly honored in requests (see the usage sketch after this list).
- Docs/samples briefly mention new properties and Azure vs OpenAI model/deployment behavior.
- Unit/integration tests cover both Azure and OpenAI paths.
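A hedged sketch of what a caller could write once the criteria are met; OutputFormat is the proposed property (it does not exist in 1.66.0), kernel is assumed to be configured as in the repro sketch above, and the Size tuple shape follows the current settings class:

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;
using Microsoft.SemanticKernel.TextToImage;

#pragma warning disable SKEXP0001

var settings = new OpenAITextToImageExecutionSettings
{
    Quality = "medium",      // proposed: accept "low" | "medium" | "high" for gpt-image-1
    OutputFormat = "jpeg",   // proposed new property: "png" | "jpeg" | "webp"
    Size = (1024, 1024)
};

var images = await kernel.GetRequiredService<ITextToImageService>()
    .GetImageContentsAsync(new TextContent("A studio photo of a red bicycle"), settings);
```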
References
- OpenAI docs on image generation parameters (quality, formats): "Image generation - OpenAI API"
- Prior SK Azure image issue referencing AzureOpenAITextToImageService: ".Net: The API deployment for this resource does not exist"
- Microsoft Learn image options overview (quality modes): "How to Use Image Generation Models from OpenAI - Azure OpenAI"