Commit b707641

feat(closes OPEN-7550): add tracing for openai chat completions parse method
1 parent 1ec6ca7 commit b707641

File tree

5 files changed: +1060 additions, 0 deletions


README.md

Lines changed: 59 additions & 0 deletions
@@ -101,6 +101,65 @@ asyncio.run(main())

Functionality between the synchronous and asynchronous clients is otherwise identical.

## LLM Tracing

Openlayer provides automatic tracing for popular LLM providers, enabling you to monitor model performance, token usage, and response quality.

### OpenAI Tracing

Trace OpenAI chat completions (including the new structured-output `parse` method) with automatic monitoring:
```python
import openai
from pydantic import BaseModel

from openlayer.lib import trace_openai

# Trace your OpenAI client
client = trace_openai(openai.OpenAI())

# Use normally - both create and parse methods are automatically traced
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)

# NEW: parse method support for structured outputs
class Person(BaseModel):
    name: str
    age: int

structured_response = client.chat.completions.parse(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Extract: John Doe, 30 years old"}],
    response_format=Person,
)
```
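For intuition only, here is a stdlib-level sketch of the validation step that `response_format=Person` performs: the model is constrained to emit JSON matching the schema, and the SDK turns that JSON into a typed object. `parse_structured` and the dataclass below are illustrative stand-ins, not part of the Openlayer or OpenAI SDKs (which use Pydantic models):

```python
import json
from dataclasses import dataclass

# Hypothetical stand-in for the Pydantic model used by the real SDK
@dataclass
class Person:
    name: str
    age: int

def parse_structured(raw_json: str) -> Person:
    """Validate a JSON payload against the Person schema (simplified)."""
    data = json.loads(raw_json)
    return Person(name=str(data["name"]), age=int(data["age"]))

# A payload shaped like what the model would return for the prompt above
person = parse_structured('{"name": "John Doe", "age": 30}')
print(person)  # Person(name='John Doe', age=30)
```

The real `parse` method does this validation for you and attaches the typed result to the returned completion.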

**What gets traced:**

- Input messages and model parameters
- Response content (structured data for the parse method)
- Token usage and latency metrics
- Raw API responses for debugging
- Custom inference IDs for request tracking
### Other LLM Providers

```python
import anthropic
import groq
import mistralai

from openlayer.lib import trace_anthropic, trace_mistral, trace_groq

# Anthropic
anthropic_client = trace_anthropic(anthropic.Anthropic())

# Mistral
mistral_client = trace_mistral(mistralai.Mistral())

# Groq
groq_client = trace_groq(groq.Groq())
```

See the [examples directory](examples/tracing/) for comprehensive tracing examples with all supported providers.
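For intuition about what these `trace_*` helpers do, the following is a minimal, hypothetical sketch of the client-wrapping pattern (`trace_method` and `FakeCompletions` are illustrative names, not Openlayer's actual implementation): intercept a method call, time it, record inputs and outputs, and return the original result unchanged.

```python
import time
from functools import wraps

def trace_method(obj, method_name, log):
    """Wrap obj.<method_name> so every call is recorded in `log`."""
    original = getattr(obj, method_name)

    @wraps(original)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = original(*args, **kwargs)
        log.append({
            "method": method_name,
            "kwargs": kwargs,
            "output": result,
            "latency_s": time.perf_counter() - start,
        })
        return result

    # Shadow the bound method on this instance with the tracing wrapper
    setattr(obj, method_name, wrapper)
    return obj

# Usage with a stand-in client object:
class FakeCompletions:
    def create(self, **kwargs):
        return {"content": "Hello!"}

log = []
completions = trace_method(FakeCompletions(), "create", log)
completions.create(model="gpt-4o-mini")
print(log[0]["method"])  # create
```

The real helpers apply this idea to the provider clients' completion methods and ship the recorded data to Openlayer.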
### With aiohttp

By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.
examples/tracing/openai/openai_parse_tracing.ipynb

Lines changed: 163 additions & 0 deletions
@@ -0,0 +1,163 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "intro",
   "metadata": {},
   "source": [
    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openlayer-ai/openlayer-python/blob/main/examples/tracing/openai/openai_parse_tracing.ipynb)\n",
    "\n",
    "\n",
    "# <a id=\"top\">OpenAI parse method monitoring with Openlayer</a>\n",
    "\n",
    "This notebook shows how to monitor OpenAI's `chat.completions.parse()` method for structured outputs with Openlayer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "install",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install openlayer openai pydantic"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "setup",
   "metadata": {},
   "source": [
    "## 1. Set the environment variables"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "env-vars",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "import openai\n",
    "from pydantic import BaseModel\n",
    "\n",
    "# OpenAI API key\n",
    "os.environ[\"OPENAI_API_KEY\"] = \"YOUR_OPENAI_API_KEY_HERE\"\n",
    "\n",
    "# Openlayer configuration\n",
    "os.environ[\"OPENLAYER_API_KEY\"] = \"YOUR_OPENLAYER_API_KEY_HERE\"\n",
    "os.environ[\"OPENLAYER_INFERENCE_PIPELINE_ID\"] = \"YOUR_OPENLAYER_INFERENCE_PIPELINE_ID_HERE\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "trace",
   "metadata": {},
   "source": [
    "## 2. Create traced OpenAI client"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "create-client",
   "metadata": {},
   "outputs": [],
   "source": [
    "from openlayer.lib import trace_openai\n",
    "\n",
    "# Single function traces both create AND parse methods\n",
    "client = trace_openai(openai.OpenAI())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "model",
   "metadata": {},
   "source": [
    "## 3. Define Pydantic model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "define-model",
   "metadata": {},
   "outputs": [],
   "source": [
    "class Person(BaseModel):\n",
    "    name: str\n",
    "    age: int\n",
    "    occupation: str"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "use-parse",
   "metadata": {},
   "source": [
    "## 4. Use parse method for structured output"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "parse-example",
   "metadata": {},
   "outputs": [],
   "source": [
    "# The parse method returns a completion whose message carries the\n",
    "# parsed Pydantic object\n",
    "completion = client.chat.completions.parse(\n",
    "    model=\"gpt-4o-mini\",\n",
    "    messages=[\n",
    "        {\"role\": \"user\", \"content\": \"Extract: John Doe is 30 years old and works as a software engineer\"}\n",
    "    ],\n",
    "    response_format=Person,\n",
    ")\n",
    "\n",
    "person = completion.choices[0].message.parsed\n",
    "person"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "summary",
   "metadata": {},
   "source": [
    "## Summary\n",
    "\n",
    "The same `trace_openai()` function now supports:\n",
    "\n",
    "- ✅ **chat.completions.create()** - Traditional completions\n",
    "- ✅ **chat.completions.parse()** - Structured outputs with Pydantic/JSON Schema\n",
    "- ✅ **Automatic tracing** - Token usage, latency, and response quality\n",
    "- ✅ **Streaming support** - Both methods support streaming\n",
    "\n",
    "All traces are automatically sent to Openlayer!"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.18"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
