{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "2722b419",
   "metadata": {},
   "source": [
8+ " [](https://colab.research.google.com/github/openlayer-ai/openlayer-python/blob/main/examples/tracing/openai/openai_responses_api_tracing.ipynb)\n " ,
9+ " \n " ,
10+ " \n " ,
11+ " # <a id=\" top\" >OpenAI Responses API monitoring with Openlayer</a>\n " ,
12+ " \n " ,
13+ " This notebook shows how to monitor both OpenAI's Chat Completions API and the new Responses API with Openlayer. The same `trace_openai()` function supports both APIs seamlessly."
14+ ]
15+ },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "020c8f6a",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install openlayer openai"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "75c2a473",
   "metadata": {},
   "source": [
    "## 1. Set the environment variables"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f3f4fa13",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import openai\n",
    "\n",
    "# OpenAI API key\n",
    "os.environ[\"OPENAI_API_KEY\"] = \"YOUR_OPENAI_API_KEY_HERE\"\n",
    "\n",
    "# Openlayer configuration\n",
    "os.environ[\"OPENLAYER_API_KEY\"] = \"YOUR_OPENLAYER_API_KEY_HERE\"\n",
    "os.environ[\"OPENLAYER_INFERENCE_PIPELINE_ID\"] = \"YOUR_OPENLAYER_INFERENCE_PIPELINE_ID_HERE\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9758533f",
   "metadata": {},
   "source": [
    "## 2. Create a traced OpenAI client"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c35d9860-dc41-4f7c-8d69-cc2ac7e5e485",
   "metadata": {},
   "outputs": [],
   "source": [
    "from openlayer.lib import trace_openai\n",
    "\n",
    "# A single function traces both the Chat Completions and Responses APIs\n",
    "client = trace_openai(openai.OpenAI())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "72a6b954",
   "metadata": {},
   "source": [
    "## 3. Use the Chat Completions API (existing functionality)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e00c1c79",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Chat Completions API - works exactly as before\n",
    "response = client.chat.completions.create(\n",
    "    model=\"gpt-4o-mini\",\n",
    "    messages=[{\"role\": \"user\", \"content\": \"What is 2 + 2?\"}],\n",
    "    max_tokens=50\n",
    ")\n",
    "\n",
    "print(\"Chat Completions:\", response.choices[0].message.content)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "76a350b4",
   "metadata": {},
   "source": [
    "## 4. Use the Responses API (new unified interface)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "responses-api-example",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Responses API - new unified interface with enhanced metadata\n",
    "if hasattr(client, 'responses'):\n",
    "    response = client.responses.create(\n",
    "        model=\"gpt-4o-mini\",\n",
    "        input=\"What is 3 + 3?\",\n",
    "        max_output_tokens=50\n",
    "    )\n",
    "\n",
    "    # Extract the response text\n",
    "    if response.output and len(response.output) > 0:\n",
    "        result = response.output[0].content[0].text\n",
    "        print(\"Responses API:\", result)\n",
    "        print(f\"Response ID: {response.id}\")\n",
    "else:\n",
    "    print(\"Responses API not available in this OpenAI version\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "streaming-example",
   "metadata": {},
   "source": [
    "## 5. Streaming example"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "streaming-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Streaming works with both APIs\n",
    "stream = client.chat.completions.create(\n",
    "    model=\"gpt-4o-mini\",\n",
    "    messages=[{\"role\": \"user\", \"content\": \"Count from 1 to 3\"}],\n",
    "    stream=True\n",
    ")\n",
    "\n",
    "print(\"Streaming response: \", end=\"\")\n",
    "for chunk in stream:\n",
    "    if chunk.choices[0].delta.content:\n",
    "        print(chunk.choices[0].delta.content, end=\"\")\n",
    "\n",
    "print(\"\\n✓ All requests automatically traced to Openlayer!\")"
   ]
  },
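  {
   "cell_type": "markdown",
   "id": "function-calling-md",
   "metadata": {},
   "source": [
    "## 6. Function calling example\n",
    "\n",
    "Tool calls are traced as well (see the summary below). Here is a minimal sketch using the Chat Completions API; the `get_weather` tool definition is a hypothetical placeholder for illustration, not part of the Openlayer SDK."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "function-calling-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Function calling - the traced client passes tool definitions through unchanged\n",
    "tools = [\n",
    "    {\n",
    "        \"type\": \"function\",\n",
    "        \"function\": {\n",
    "            \"name\": \"get_weather\",  # hypothetical tool, for illustration only\n",
    "            \"description\": \"Get the current weather for a city\",\n",
    "            \"parameters\": {\n",
    "                \"type\": \"object\",\n",
    "                \"properties\": {\"city\": {\"type\": \"string\"}},\n",
    "                \"required\": [\"city\"]\n",
    "            }\n",
    "        }\n",
    "    }\n",
    "]\n",
    "\n",
    "response = client.chat.completions.create(\n",
    "    model=\"gpt-4o-mini\",\n",
    "    messages=[{\"role\": \"user\", \"content\": \"What is the weather in Paris?\"}],\n",
    "    tools=tools\n",
    ")\n",
    "\n",
    "# The model should answer with a tool call rather than plain text\n",
    "print(response.choices[0].message.tool_calls)"
   ]
  },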
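  {
   "cell_type": "markdown",
   "id": "async-example-md",
   "metadata": {},
   "source": [
    "## 7. Async client example\n",
    "\n",
    "For async clients, use `trace_async_openai()` (mentioned in the summary below). A minimal sketch, assuming it wraps `openai.AsyncOpenAI()` the same way `trace_openai()` wraps the sync client:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "async-example-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "from openlayer.lib import trace_async_openai\n",
    "\n",
    "# Assumption: trace_async_openai mirrors trace_openai for async clients\n",
    "async_client = trace_async_openai(openai.AsyncOpenAI())\n",
    "\n",
    "async def main():\n",
    "    response = await async_client.chat.completions.create(\n",
    "        model=\"gpt-4o-mini\",\n",
    "        messages=[{\"role\": \"user\", \"content\": \"What is 5 + 5?\"}],\n",
    "    )\n",
    "    print(\"Async:\", response.choices[0].message.content)\n",
    "\n",
    "# Notebooks run inside an event loop, so top-level await works here\n",
    "await main()"
   ]
  },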
  {
   "cell_type": "markdown",
   "id": "conclusion",
   "metadata": {},
   "source": [
    "## Summary\n",
    "\n",
    "That's it! The same `trace_openai()` function now supports:\n",
    "\n",
    "- ✅ **Chat Completions API** - Full backward compatibility\n",
    "- ✅ **Responses API** - New unified interface with enhanced metadata\n",
    "- ✅ **Streaming** - Both APIs support streaming\n",
    "- ✅ **Function calling** - Tool calls work with both APIs (section 6)\n",
    "- ✅ **Async support** - Use `trace_async_openai()` for async clients (section 7)\n",
    "\n",
    "All traces are automatically sent to Openlayer with proper API type differentiation!"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.18"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}