openai-cpp is a modern, fully native C++17 client for the OpenAI API. It mirrors the ergonomics of the official TypeScript SDK (`client.responses().create()`, `client.chat().completions().stream()`, etc.) while taking advantage of strong typing, RAII-friendly resource management, and portable CMake tooling.
> ℹ️ The latest stable release is **1.0.0** (tag `1.0.0`). The library remains under active development, so expect the surface area to evolve as the upstream OpenAI platform ships new capabilities.
- Strongly typed request/response models covering Responses, Chat Completions, Assistants, Vector Stores, Files, Images, Audio, Moderations, Embeddings, and more.
- First-class streaming helpers for the Responses API, Chat Completions, and Assistants (Server-Sent Events with incremental callbacks).
- Pluggable HTTP transport via an abstract `HttpClient` interface (libcurl backend provided out of the box).
- Consistent error model (`openai::APIError`, `openai::OpenAIError`) with rich metadata.
- Cross-platform CMake project with demo binaries (`openai-cli`, `openai-chat`) and an optional GoogleTest suite.
- Core: `OpenAIClient`, configurable `ClientOptions`, request-level overrides (`RequestOptions`).
- Text & multimodal: `ResponsesResource`, `ChatResource`, `CompletionsResource`, `ModerationsResource`, `EmbeddingsResource`.
- Media: `ImagesResource`, `AudioResource`, `VideosResource`.
- File workflows: `FilesResource`, `UploadsResource`, `VectorStoresResource`, `BatchesResource`.
- Assistants beta: `AssistantsResource`, `ThreadsResource`, `ThreadMessagesResource`, `RunsResource`, `RunStepsResource`, SSE parsers.
- Automation & tooling: `GradersResource`, `ConversationsResource`, `ContainersResource`, `WebhooksResource`, `EvalsResource`.
Consult `docs/ARCHITECTURE.md` for a deeper look at the module layout and parity goals.
- A C++17-compatible compiler (Clang 12+, GCC 10+, MSVC 2019+).
- CMake 3.16 or newer.
- libcurl (detected via `find_package(CURL)`).
- An OpenAI API key with access to the models you intend to call.
```sh
cmake -S . -B build
cmake --build build
```

Enable unit tests (requires a GoogleTest download during configure):

```sh
cmake -S . -B build -DOPENAI_CPP_BUILD_TESTS=ON
cmake --build build
ctest --test-dir build
```

Demo binaries live under `apps/` and end up in `build/apps/`. For example, to run the streaming chat demo:
```sh
export OPENAI_API_KEY="sk-your-key"
./build/apps/openai-chat
```

To consume the library from your own CMake project, pull it in with `FetchContent`:

```cmake
include(FetchContent)
FetchContent_Declare(
  openai-cpp
  GIT_REPOSITORY https://github.com/ItsJustMeChris/openai-cpp.git
  GIT_TAG 1.0.0
)
FetchContent_MakeAvailable(openai-cpp)

add_executable(my_app src/main.cpp)
target_link_libraries(my_app PRIVATE openai::openai)
```

Pinning the `1.0.0` tag keeps your build on the released surface area. Swap to `main` if you need the latest commits before the next release is published.
Alternatively, vendor the repository (for example, as a git submodule) and add it directly:

```cmake
add_subdirectory(external/openai-cpp)
target_link_libraries(my_app PRIVATE openai::openai)
```

Both approaches export the `openai::openai` target with public headers under `include/`.
Include the single umbrella header when you want the full client surface in one shot:
#include "openai.hpp"#include "openai.hpp"
#include <chrono>
#include <cstdlib>
#include <iostream>
int main() {
const char* api_key = std::getenv("OPENAI_API_KEY");
if (!api_key) {
std::cerr << "Set OPENAI_API_KEY in your environment\n";
return 1;
}
openai::ClientOptions options;
options.api_key = api_key;
// Optional tuning
options.timeout = std::chrono::seconds(60);
options.default_headers["OpenAI-Beta"] = "assistants=v2";
openai::OpenAIClient client(options);
// ...
}#include "openai.hpp"
```cpp
#include "openai.hpp"

#include <iostream>

openai::Response run_prompt(openai::OpenAIClient& client) {
  openai::ResponseRequest request;
  request.model = "gpt-4o-mini";

  openai::ResponseInputContent content;
  content.type = openai::ResponseInputContent::Type::Text;
  content.text = "Give me three facts about the Galápagos Islands.";

  openai::ResponseInputMessage message;
  message.role = "user";
  message.content.push_back(content);

  openai::ResponseInputItem item;
  item.type = openai::ResponseInputItem::Type::Message;
  item.message = message;
  request.input.push_back(item);

  auto response = client.responses().create(request);
  std::cout << response.output_text << "\n";
  return response;
}
```

Stream a response incrementally over server-sent events; return `false` from the callback to cancel the stream:
```cpp
#include "openai.hpp"

#include <iostream>

void stream_story(openai::OpenAIClient& client) {
  openai::ResponseRequest request;
  request.model = "gpt-4o-mini";

  openai::ResponseInputContent content;
  content.type = openai::ResponseInputContent::Type::Text;
  content.text = "Stream a 3-sentence bedtime story about C++ templates.";

  openai::ResponseInputMessage message;
  message.role = "user";
  message.content.push_back(content);

  openai::ResponseInputItem item;
  item.type = openai::ResponseInputItem::Type::Message;
  item.message = message;
  request.input.push_back(item);

  client.responses().stream(
      request,
      [&](const openai::ResponseStreamEvent& event) {
        if (event.text_delta && event.text_delta->output_index == 0) {
          std::cout << event.text_delta->delta << std::flush;
        }
        if (event.error) {
          std::cerr << "\n[stream error] " << event.error->message << std::endl;
          return false;  // stop streaming
        }
        return true;
      });

  std::cout << std::endl;
}
```
Other resources follow the same request/response pattern:

- Embeddings:

  ```cpp
  openai::EmbeddingRequest embeddings_request;
  embeddings_request.model = "text-embedding-3-small";
  embeddings_request.input = {"Hello world"};
  auto embedding = client.embeddings().create(embeddings_request);
  ```

- Moderations: build an `openai::ModerationRequest` and pass it to `client.moderations().create(request)` (see the sketch after this list).
- Assistants / Threads: combine `client.assistants()`, `client.threads()`, and `client.runs()` to orchestrate assistant conversations (see `apps/chat_demo.cpp`).
- Uploads & vector stores: use `client.uploads().create()`, then attach the resulting file IDs to vector store operations.
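For instance, a moderation call could look like the sketch below. Note that the `input` field name on `openai::ModerationRequest` is an assumption mirroring the embeddings example above; verify it against the headers under `include/openai/`:

```cpp
// Sketch only: the `input` field name is an assumption (mirroring
// EmbeddingRequest above); check include/openai/ for the real struct.
openai::ModerationRequest moderation_request;
moderation_request.input = {"Some text to classify."};
auto moderation = client.moderations().create(moderation_request);
```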
All request/response structs live under `include/openai/*.hpp`. Fields are `std::optional` when they mirror nullable JSON properties.
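Nullable fields therefore need an explicit presence check before use. A minimal sketch, where `usage` and `total_tokens` are hypothetical names standing in for any optional member on a response struct:

```cpp
// Hypothetical field names: substitute a real std::optional member
// from the structs under include/openai/*.hpp.
if (response.usage.has_value()) {
  std::cout << "total tokens: " << response.usage->total_tokens << "\n";
}
```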
Most operations throw on failure:
- `openai::APIError`: the API returned an error status (inspect `status_code()`, `error_body()`).
- `openai::APIConnectionError` / `openai::APIConnectionTimeoutError`: the HTTP backend failed before reaching the API, or timed out.
- `openai::APIUserAbortError`: you cancelled an in-flight streaming call by returning `false` from a callback.
- `openai::OpenAIError`: base class for unexpected library issues.
Wrap calls in a try/catch block when running long-lived processes, as sketched below.
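A minimal sketch of that pattern, reusing the `run_prompt` helper from the Responses example above (and assuming the error types derive from `std::exception`, so `what()` is available):

```cpp
// Catch the most specific type first; openai::OpenAIError is the base
// class, so it also covers connection failures and timeouts.
void guarded_prompt(openai::OpenAIClient& client) {
  try {
    run_prompt(client);
  } catch (const openai::APIError& e) {
    // The API returned an error status; inspect the payload.
    std::cerr << "API error " << e.status_code() << ": " << e.error_body() << "\n";
  } catch (const openai::OpenAIError& e) {
    std::cerr << "client error: " << e.what() << "\n";  // assumes std::exception base
  }
}
```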
A few more knobs for advanced usage:

- Headers and query params: supply per-request mutations through `RequestOptions`:

  ```cpp
  openai::RequestOptions request_options;
  request_options.headers["OpenAI-Beta"] = "assistants=v2";
  client.responses().create(request, request_options);
  ```

- Proxying / custom HTTP: implement `openai::HttpClient` and pass it to the `OpenAIClient` constructor to integrate with bespoke transports (see the sketch after this list).
- Azure OpenAI: toggle `options.azure_deployment_routing` or set `options.azure_deployment_name` and `options.base_url` to target Azure-hosted deployments (see `include/openai/azure.hpp`).
- Logging: configure `options.log_level` and `options.logger` for structured diagnostics while debugging.
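For the custom-transport hook, a shape-only sketch follows. The `send` method name and the `HttpRequest`/`HttpResponse` types are illustrative assumptions, not the library's actual signatures; consult the `HttpClient` declaration in the headers for the real interface:

```cpp
#include "openai.hpp"

#include <iostream>
#include <memory>
#include <utility>

// Shape-only sketch: `send`, `HttpRequest`, and `HttpResponse` are
// assumed names. The idea: wrap another transport and add behavior.
class LoggingHttpClient : public openai::HttpClient {
 public:
  explicit LoggingHttpClient(std::unique_ptr<openai::HttpClient> inner)
      : inner_(std::move(inner)) {}

  openai::HttpResponse send(const openai::HttpRequest& request) override {
    std::clog << "-> " << request.url << "\n";  // `url` member is assumed
    return inner_->send(request);
  }

 private:
  std::unique_ptr<openai::HttpClient> inner_;  // the wrapped real transport
};
```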
More documentation lives under `docs/`:

- Architecture: `docs/ARCHITECTURE.md`
- Testing strategy: `docs/TESTING.md`
- Responses port status: `docs/RESPONSES_PORT_PLAN.md`
- Assistant parity tracking: `docs/PARITY_CHECKLIST.md`
Issues and pull requests are welcome. Before opening a PR, please run the formatting checks, the applicable unit tests, and the demos that exercise the areas you modified.
The library vendors nlohmann::json under the MIT license (see `external/nlohmann/LICENSE.MIT`). Many type definitions and surface signatures are modeled after the official OpenAI TypeScript SDK for familiarity across languages.