diff --git a/docs/platforms/javascript/common/tracing/span-metrics/examples.mdx b/docs/platforms/javascript/common/tracing/span-metrics/examples.mdx
index c992a3243e24b2..2bca01cd43b95a 100644
--- a/docs/platforms/javascript/common/tracing/span-metrics/examples.mdx
+++ b/docs/platforms/javascript/common/tracing/span-metrics/examples.mdx
@@ -6,305 +6,771 @@ sidebar_order: 10
 
 
 
-These examples assume you have already set up tracing in your application.
+The sample code on this page is for demonstration purposes only and is not production-ready. Examples are structural and may not match your specific language or framework.
 
 
 
-This guide provides practical examples of using span attributes and metrics to solve common monitoring and debugging challenges across your entire application stack. Each example demonstrates how to instrument both frontend and backend components, showing how they work together within a distributed trace to provide end-to-end visibility.
+This guide provides practical examples of using span attributes and metrics to solve common monitoring and debugging challenges across your entire application stack. Each example demonstrates how to instrument both frontend and backend components, showing how they work together within a distributed trace to provide end-to-end visibility. You'll also find example repository code, walkthroughs, and attributes to explore.
 
-## File Upload and Processing Pipeline
+## E-Commerce Checkout Flow (React + Backend)
 
-**Challenge:** Understanding bottlenecks and failures in multi-step file processing operations across client and server components.
+
 
-**Solution:** Track the entire file processing pipeline with detailed metrics at each stage, from client-side upload preparation through server-side processing.
+Example Repository: [Crash Commerce](https://github.com/getsentry/crash-commerce-tracing-sample)
 
-**Frontend Instrumentation:**
+**Challenge:** Capture the end-to-end checkout flow, understand average cart size and value, and diagnose payment provider performance across the frontend and server API.
+
+**Solution:** Start a client span on the checkout action, and relevant spans on the backend for each step in the checkout flow. Attach attributes that represent critical metrics for the application, such as cart size and value, and the payment provider used in the transaction.
+
+**Frontend (React) — instrument the Checkout click handler:**
 
-```javascript
+```typescript
-// Client-side file upload handling
+// In your Checkout button click handler
 Sentry.startSpan(
   {
-    name: "Client File Upload",
-    op: "file.upload.client",
+    name: 'Checkout',
+    op: 'ui.action',
     attributes: {
-      // Static details available at the start
-      "file.size_bytes": 15728640, // 15MB
-      "file.type": "image/jpeg",
-      "file.name": "user-profile.jpg",
-      "client.compression_applied": true,
+      'cart.item_count': cartCount,
+      'cart.value_minor': cartValueMinor,
+      'cart.currency': 'USD',
+      'payment.provider.ui_selected': paymentProvider,
     },
   },
   async (span) => {
     try {
-      // Begin upload process
-      const uploader = new FileUploader(file);
-
-      // Update progress as upload proceeds
-      uploader.on("progress", (progressEvent) => {
-        span.setAttribute("upload.percent_complete", progressEvent.percent);
-        span.setAttribute("upload.bytes_transferred", progressEvent.loaded);
-      });
-
-      uploader.on("retry", (retryCount) => {
-        span.setAttribute("upload.retry_count", retryCount);
-      });
-
-      const result = await uploader.start();
-
-      // Set final attributes after completion
-      span.setAttribute("upload.total_time_ms", result.totalTime);
-      span.setAttribute("upload.success", true);
-      span.setAttribute("upload.server_file_id", result.fileId);
-
-      return result;
-    } catch (error) {
-      // Record failure information
-      span.setAttribute("upload.success", false);
-      Sentry.captureException(error);
+      const response = await fetch(`${API_URL}/api/checkout`, {
+        method: 'POST',
+        headers: { 'Content-Type': 'application/json' },
+        body: JSON.stringify({ items: cart, paymentProvider }),
+      })
+      if (!response.ok) {
+        const errorData = await response.json().catch(() => ({ error: 'Payment failed' }))
+        throw new Error(errorData.error || `HTTP ${response.status}`)
+      }
+      const data: { orderId: string; paymentProvider: string } = await response.json()
+      span.setAttribute('order.id', data.orderId)
+      span.setAttribute('payment.provider', data.paymentProvider)
+      Sentry.logger.info(Sentry.logger.fmt`✨ Order ${data.orderId} confirmed via ${data.paymentProvider}`)
+      
+      // Show order confirmation
+      setOrderConfirmation({
+        orderId: data.orderId,
+        provider: data.paymentProvider,
+        total: cartValueMinor
+      })
+      setCart([])
+      setIsCartOpen(false)
+    } catch (err) {
+      span.setStatus({ code: 2, message: 'internal_error' })
+      const errorMessage = err instanceof Error ? err.message : 'Checkout failed'
+      setCheckoutError(errorMessage)
+      Sentry.logger.error(Sentry.logger.fmt`❌ ${errorMessage}`)
+    } finally {
+      setIsCheckingOut(false)
     }
   }
-);
+)
 ```
 
-**Backend Instrumentation:**
+Where to put this in your app:
+- In the `onClick` for the checkout button, or inside the submit handler of your checkout form/container component.
+- Auto-instrumentation will add client `fetch` spans; keep the explicit UI span for application-specific context.
+
+**Backend — Checkout API with an Order Processing span and a nested Payment span:**
 
-```javascript
+```typescript
-// Server-side processing
-Sentry.startSpan(
-  {
-    name: "Server File Processing",
-    op: "file.process.server",
-    attributes: {
-      // Server processing steps
-      "processing.steps_completed": [
-        "virus_scan",
-        "resize",
-        "compress",
-        "metadata",
-      ],
-
-      // Storage operations
-      "storage.provider": "s3",
-      "storage.region": "us-west-2",
-      "storage.upload_time_ms": 850,
-
-      // CDN configuration
-      "cdn.provider": "cloudfront",
-      "cdn.propagation_ms": 1500,
+// Example: Node/Express
+app.post('/api/checkout', async (req: Request, res: Response) => {
+  await Sentry.startSpan(
+    {
+      name: 'Order Processing',
+      op: 'commerce.order.server',
     },
-  },
-  async () => {
-    // Server-side processing implementation
-  }
-);
+    async (span) => {
+      try {
+        const items = (req.body?.items as { productId: string; quantity: number }[]) || []
+        const requestedProviderRaw = (req.body?.paymentProvider as string | undefined) ?? undefined
+        const requestedProvider = PAYMENT_PROVIDERS.find((p) => p === requestedProviderRaw) ?? pickPaymentProvider()
+
+        // Validate cart
+        if (!Array.isArray(items) || items.length === 0) {
+          span.setAttribute('payment.status', 'failed')
+          span.setAttribute('inventory.reserved', false)
+          res.status(400).json({ error: 'Cart is empty' })
+          return
+        }
+
+        let totalMinor = 0
+        for (const line of items) {
+          const product = PRODUCTS.find((p) => p.id === line.productId)
+          if (!product || line.quantity <= 0) {
+            span.setAttribute('payment.status', 'failed')
+            span.setAttribute('inventory.reserved', false)
+            res.status(400).json({ error: 'Invalid cart item' })
+            return
+          }
+          totalMinor += product.priceMinor * line.quantity
+        }
+
+        // Simulate reserving inventory (80% chance true)
+        const reserved = Math.random() < 0.8
+
+        // Simulate payment
+        const charge = await Sentry.startSpan(
+          {
+            name: `Charge ${requestedProvider}`,
+            op: 'commerce.payment',
+            attributes: {
+              'payment.provider': requestedProvider,
+            },
+          },
+          async (paymentSpan) => {
+            const result = await fakeCharge(totalMinor, requestedProvider)
+            paymentSpan.setAttribute('payment.status', result.status)
+            return result
+          }
+        )
+
+        if (charge.status === 'failed' || !reserved) {
+          span.setAttribute('payment.provider', charge.provider)
+          span.setAttribute('payment.status', 'failed')
+          span.setAttribute('inventory.reserved', reserved)
+          res.status(402).json({ error: 'Payment failed' })
+          return
+        }
+
+        const orderId = randomId()
+        ORDERS.push({ id: orderId, totalMinor, items })
+
+        // Set attributes before returning
+        span.setAttribute('order.id', orderId)
+        span.setAttribute('payment.provider', charge.provider)
+        span.setAttribute('payment.status', 'success')
+        span.setAttribute('inventory.reserved', reserved)
+
+        res.json({ orderId, paymentProvider: charge.provider })
+      } catch (err) {
+        Sentry.captureException(err)
+        res.status(500).json({ error: 'Internal error' })
+      }
+    }
+  )
+})
 ```
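+
+The helpers referenced above (`PRODUCTS`, `PAYMENT_PROVIDERS`, `pickPaymentProvider`, `fakeCharge`, `randomId`, `ORDERS`) stand in for your catalog, payment client, and order store. A minimal sketch, assuming a hypothetical in-memory setup:
+
+```typescript
+// Hypothetical in-memory stand-ins for this example. Replace with your
+// real catalog, payment client, and order store.
+const PRODUCTS = [
+  { id: 'p1', name: 'Mug', priceMinor: 1500 },
+  { id: 'p2', name: 'Tee', priceMinor: 2500 },
+];
+const PAYMENT_PROVIDERS = ['stripe', 'paypal', 'applepay'] as const;
+const ORDERS: { id: string; totalMinor: number; items: { productId: string; quantity: number }[] }[] = [];
+
+function pickPaymentProvider() {
+  return PAYMENT_PROVIDERS[Math.floor(Math.random() * PAYMENT_PROVIDERS.length)];
+}
+
+function randomId() {
+  return Math.random().toString(36).slice(2, 10);
+}
+
+// Simulated charge: succeeds 90% of the time after a short delay
+async function fakeCharge(amountMinor: number, provider: string) {
+  await new Promise((resolve) => setTimeout(resolve, 100));
+  return { provider, status: Math.random() < 0.9 ? 'success' : 'failed' } as const;
+}
+```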
 
-**How the Trace Works Together:**
-The frontend span initiates the trace and handles the file upload process. It propagates the trace context to the backend through the upload request headers. The backend span continues the trace, processing the file and storing it. This creates a complete picture of the file's journey from client to CDN, allowing you to:
+**How the trace works together:**
+- The UI span starts when checkout is selected → the backend continues the trace with an Order Processing span when the `/api/checkout` endpoint is called. As the payment processes, a nested payment span is started.
+- Attributes and span metrics let you track more than just request latency: you can track store business performance through `cart.item_count` and the other `cart.*` attributes, and store reliability by checking error rates on the `payment.provider` property.
 
-- Identify bottlenecks at any stage (client prep, upload, server processing, CDN propagation)
-- Track end-to-end processing times and success rates
-- Monitor resource usage across the stack
-- Correlate client-side upload issues with server-side processing errors
+What to monitor with span metrics:
+- p95 span.duration of `op:ui.action` checkout by `cart.item_count` bucket.
+- Error rate for `op:commerce.payment` by `payment.provider`.
 
-## LLM Integration Monitoring
+## Media Upload with Background Processing (React + Express)
 
-**Challenge:** Managing cost (token usage) and performance of LLM integrations across frontend and backend components.
+Example Repository: [SnapTrace](https://github.com/getsentry/snaptrace-tracing-example)
 
-**Solution:** Tracking of the entire LLM interaction flow, from user input to response rendering.
+**Challenge:** Track user-perceived upload time, server-side validation, and async media processing (optimization, thumbnail generation) while maintaining trace continuity across async boundaries.
 
-**Frontend Instrumentation:**
+**Solution:** Start a client span for the entire upload experience, create a backend span for upload validation, and a separate span for async media processing. Use rich attributes instead of excessive spans to capture processing details.
 
-```javascript
-// Client-side LLM interaction handling
-Sentry.startSpan(
-  {
-    name: "LLM Client Interaction",
-    op: "gen_ai.generate_text",
-    attributes: {
-      // Initial metrics available at request time
-      "input.char_count": 280,
-      "input.language": "en",
-      "input.type": "question",
+**Frontend (React) — Instrument Upload Action**
+
+```typescript
+// In your UploadForm component's upload handler
+const handleUpload = async () => {
+  if (!selectedFile) return;
+
+  // Start Sentry span for entire upload operation
+  await Sentry.startSpan(
+    {
+      name: 'Upload media',
+      op: 'file.upload',
+      attributes: {
+        'file.size_bytes': selectedFile.size,
+        'file.mime_type': selectedFile.type,
+      }
     },
-  },
-  async (span) => {
-    const startTime = performance.now();
+    async (span) => {
+      const uploadStartTime = Date.now();
+      
+      try {
+        // Single API call to upload and start processing
+        const uploadResponse = await fetch(`${API_BASE_URL}/api/upload`, {
+          method: 'POST',
+          headers: {
+            'Content-Type': 'application/json',
+          },
+          body: JSON.stringify({
+            fileName: selectedFile.name,
+            fileType: selectedFile.type,
+            fileSize: selectedFile.size
+          })
+        });
+
+        if (!uploadResponse.ok) {
+          throw new Error(`Upload failed: ${uploadResponse.statusText}`);
+        }
+
+        const uploadData = await uploadResponse.json();
+        
+        // Set success attributes
+        span?.setAttribute('upload.success', true);
+        span?.setAttribute('upload.duration_ms', Date.now() - uploadStartTime);
+        span?.setAttribute('job.id', uploadData.jobId);
+        
+        // Update UI to show processing status
+        updateUploadStatus(uploadData.jobId, 'processing');
+        
+      } catch (error) {
+        span?.setAttribute('upload.success', false);
+        span?.setAttribute('upload.error', error instanceof Error ? error.message : 'Unknown error');
+        setUploadStatus('error');
+      }
+    }
+  );
+};
+```
 
-    // Begin streaming response from LLM API
-    const stream = await llmClient.createCompletion({
-      prompt: userInput,
-      stream: true,
-    });
+Where to put this in your app:
+- In the upload button click handler or form submit handler
+- In drag-and-drop onDrop callback
+- Auto-instrumentation will capture fetch spans; the explicit span adds business context
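+
+The handler above assumes some component state and a helper (`selectedFile`, `updateUploadStatus`, `setUploadStatus`). A minimal sketch of that wiring, using hypothetical names:
+
+```typescript
+// Inside the same component; useState comes from 'react'
+const [selectedFile, setSelectedFile] = useState<File | null>(null);
+const [uploadStatus, setUploadStatus] = useState<'idle' | 'processing' | 'error'>('idle');
+const [jobId, setJobId] = useState<string | null>(null);
+
+// Record the job ID returned by the server and flip the UI into "processing"
+const updateUploadStatus = (id: string, status: 'processing') => {
+  setJobId(id);
+  setUploadStatus(status);
+};
+```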
+
+**Backend — Upload Validation and Async Processing**
+
+```typescript
+// Import Sentry instrumentation first (required for v10)
+import './instrument';
+import express, { Request, Response } from 'express';
+import * as Sentry from '@sentry/node';
+
+const app = express();
+app.use(express.json());
+
+// Shape of the upload request body (assumed for this example)
+interface UploadRequest {
+  fileName: string;
+  fileType: string;
+  fileSize: number;
+}
+
+// POST /api/upload - Receive and validate upload, then trigger async processing
+app.post('/api/upload', async (req: Request<{}, {}, UploadRequest>, res: Response) => {
+  const { fileName, fileType, fileSize } = req.body;
+
+  // Span 2: Backend validates and accepts upload
+  await Sentry.startSpan(
+    {
+      op: 'upload.receive',
+      name: 'Receive upload',
+      attributes: {
+        'file.name': fileName,
+        'file.size_bytes': fileSize,
+        'file.mime_type': fileType,
+        'validation.passed': true
+      }
+    },
+    async (span) => {
+      try {
+        // Validate the upload
+        if (!fileName || !fileType || !fileSize) {
+          span?.setAttribute('validation.passed', false);
+          span?.setAttribute('validation.error', 'Missing required fields');
+          return res.status(400).json({ error: 'Missing required fields' });
+        }
+
+        if (fileSize > 50 * 1024 * 1024) { // 50MB limit
+          span?.setAttribute('validation.passed', false);
+          span?.setAttribute('validation.error', 'File too large');
+          return res.status(400).json({ error: 'File too large (max 50MB)' });
+        }
+
+        // Create a job for processing
+        const job = createJob(fileName, fileType, fileSize);
+        span?.setAttribute('job.id', job.id);
+
+        // Start async processing (Span 3 will be created here)
+        setImmediate(async () => {
+          await processMedia(job);
+        });
+
+        // Respond immediately with job ID
+        res.json({
+          jobId: job.id,
+          status: 'accepted',
+          message: 'Upload received and processing started'
+        });
+
+      } catch (error) {
+        span?.setAttribute('validation.passed', false);
+        span?.setAttribute('error.message', error instanceof Error ? error.message : 'Unknown error');
+        Sentry.captureException(error);
+        res.status(500).json({ error: 'Failed to process upload' });
+      }
+    }
+  );
+});
+```
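+
+`createJob` and the `ProcessingJob` shape aren't shown above. A minimal in-memory sketch, assuming jobs are tracked in a `Map`:
+
+```typescript
+// Minimal in-memory job store for this example
+export interface ProcessingJob {
+  id: string;
+  fileName: string;
+  fileType: string;
+  fileSize: number;
+  status: 'processing' | 'completed' | 'failed';
+}
+
+const jobs = new Map<string, ProcessingJob>();
+
+export function createJob(fileName: string, fileType: string, fileSize: number): ProcessingJob {
+  const job: ProcessingJob = {
+    id: Math.random().toString(36).slice(2, 10),
+    fileName,
+    fileType,
+    fileSize,
+    status: 'processing',
+  };
+  jobs.set(job.id, job);
+  return job;
+}
+```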
 
-    // Record time to first token when received
-    let firstTokenReceived = false;
-    let tokensReceived = 0;
+**Backend — Async media processing**
+
+```typescript
+// Async media processing (runs in background via setImmediate)
+export async function processMedia(job: ProcessingJob): Promise<void> {
+  await Sentry.startSpan(
+    {
+      op: 'media.process',
+      name: 'Process media',
+      attributes: {
+        'media.size_bytes': job.fileSize,
+        'media.mime_type': job.fileType,
+        'media.size_bucket': getSizeBucket(job.fileSize),
+        'job.id': job.id
+      }
+    },
+    async (span) => {
+      try {
+        const startTime = Date.now();
+        const operations: string[] = [];
+        
+        // Simulate image optimization and thumbnail generation
+        if (job.fileType.startsWith('image/')) {
+          // Note: No separate spans for these operations - use attributes instead
+          await optimizeImage(); // Simulated delay
+          operations.push('optimize');
+          
+          await generateThumbnail(); // Simulated delay
+          operations.push('thumbnail');
+        }
+        
+        // Calculate results
+        const sizeSaved = Math.floor(job.fileSize * 0.3); // 30% reduction
+        const thumbnailCreated = Math.random() > 0.05; // 95% success rate
+        
+        // Rich attributes instead of multiple spans
+        span?.setAttribute('processing.operations', JSON.stringify(operations));
+        span?.setAttribute('processing.optimization_level', 'high');
+        span?.setAttribute('processing.thumbnail_created', thumbnailCreated);
+        span?.setAttribute('processing.duration_ms', Date.now() - startTime);
+        span?.setAttribute('result.size_saved_bytes', sizeSaved);
+        span?.setAttribute('result.size_reduction_percent', 30);
+        span?.setAttribute('result.status', 'success');
+        
+        // Update job status
+        job.status = 'completed';
+        
+      } catch (error) {
+        span?.setAttribute('result.status', 'failed');
+        span?.setAttribute('error.message', error instanceof Error ? error.message : 'Unknown error');
+        Sentry.captureException(error);
+      }
+    }
+  );
+}
+```
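+
+`getSizeBucket`, `optimizeImage`, and `generateThumbnail` are simulation helpers, sketched here under the same assumptions:
+
+```typescript
+// Bucket file sizes so span metrics can group on a low-cardinality attribute
+function getSizeBucket(sizeBytes: number): string {
+  if (sizeBytes < 1024 * 1024) return 'lt_1mb';
+  if (sizeBytes < 10 * 1024 * 1024) return '1mb_10mb';
+  return 'gte_10mb';
+}
+
+// Simulated work; replace with your real image pipeline
+const optimizeImage = () => new Promise((resolve) => setTimeout(resolve, 300));
+const generateThumbnail = () => new Promise((resolve) => setTimeout(resolve, 150));
+```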
 
-    for await (const chunk of stream) {
-      tokensReceived++;
+**How the trace works together:**
+- Frontend span (`file.upload`) captures the entire user experience from file selection to server response.
+- Backend validation span (`upload.receive`) tracks server-side validation and job creation.
+- Async processing span (`media.process`) runs in background with rich attributes for all processing operations.
+- No unnecessary spans for individual operations — prefer attributes for details.
+- Trace continuity is maintained via Sentry’s automatic context propagation.
 
-      // Record time to first token
-      if (!firstTokenReceived && chunk.content) {
-        firstTokenReceived = true;
-        const timeToFirstToken = performance.now() - startTime;
+What to monitor with span metrics:
+- p95 duration of `op:file.upload` by file size, and of `op:media.process` by `media.size_bucket`.
+- Processing success rate by `media.mime_type`.
+- Average storage saved via `result.size_saved_bytes` where `result.status = success`.
+- Validation failure reasons grouped by `validation.error`.
 
-        span.setAttribute("ui.time_to_first_token_ms", timeToFirstToken);
-      }
+## Search Autocomplete (debounced, cancellable, performance monitoring)
 
-      // Process and render the chunk
-      renderChunkToUI(chunk);
-    }
+Example Repository: [NullFlix](https://github.com/getsentry/nullflix-tracing-example)
 
-    // Record final metrics after stream completes
-    const totalRequestTime = performance.now() - startTime;
+**Challenge:** Users type quickly in search; you need to debounce requests, cancel in-flight calls, handle errors gracefully, and monitor performance across different query types while keeping latency predictable.
 
-    span.setAttribute("ui.total_request_time_ms", totalRequestTime);
-    span.setAttribute("stream.rendering_mode", "markdown");
-    span.setAttribute("stream.tokens_received", tokensReceived);
-  }
-);
-```
+**Solution:** Start a client span for each debounced request, mark aborted requests, track search patterns, and on the server, instrument search performance with meaningful attributes.
 
-**Backend Instrumentation:**
+**Frontend (React + TypeScript) — instrument debounced search:**
 
-```javascript
-// Server-side LLM processing
-Sentry.startSpan(
+```typescript
+const searchResults = await Sentry.startSpan(
   {
-    name: "LLM API Processing",
-    op: "gen_ai.generate_text",
+    op: 'http.client',
+    name: 'Search autocomplete',
     attributes: {
-      // Model configuration - known at start
-      "llm.model": "claude-3-5-sonnet-20241022",
-      "llm.temperature": 0.5,
-      "llm.max_tokens": 4096,
+      'query.length': searchQuery.length,
+      'ui.debounce_ms': DEBOUNCE_MS,
     },
   },
   async (span) => {
-    const startTime = Date.now();
-
     try {
-      // Check rate limits before processing
-      const rateLimits = await getRateLimits();
-      span.setAttribute("llm.rate_limit_remaining", rateLimits.remaining);
-
-      // Make the actual API call to the LLM provider
-      const response = await llmProvider.generateCompletion({
-        model: "claude-3-5-sonnet-20241022",
-        prompt: preparedPrompt,
-        temperature: 0.5,
-        max_tokens: 4096,
-      });
-
-      // Record token usage and performance metrics
-      span.setAttribute("llm.prompt_tokens", response.usage.prompt_tokens);
-      span.setAttribute(
-        "llm.completion_tokens",
-        response.usage.completion_tokens
-      );
-      span.setAttribute("llm.total_tokens", response.usage.total_tokens);
-      span.setAttribute("llm.api_latency_ms", Date.now() - startTime);
-
-      // Calculate and record cost based on token usage
-      const cost = calculateCost(
-        response.usage.prompt_tokens,
-        response.usage.completion_tokens,
-        "claude-3-5-sonnet-20241022"
+      const response = await fetch(
+        `${API_URL}/api/search?${new URLSearchParams({ q: searchQuery })}`,
+        {
+          signal: controller.signal,
+          headers: { 'Content-Type': 'application/json' },
+        }
       );
-      span.setAttribute("llm.cost_usd", cost);
 
-      return response;
+      if (!response.ok) {
+        const errorData = await response.json().catch(() => ({}));
+        const errorMessage = errorData.error || `Search failed: ${response.status}`;
+        throw new Error(errorMessage);
+      }
+
+      const data: SearchResponse = await response.json();
+      
+      span?.setAttribute('results.count', data.results.length);
+      span?.setAttribute('results.has_results', data.results.length > 0);
+      span?.setAttribute('http.response_size', JSON.stringify(data).length);
+      span?.setStatus({ code: 1, message: 'ok' });
+      
+      return data;
     } catch (error) {
-      // Record error details
-      span.setAttribute("error", true);
-      Sentry.captureException(error);
+      if (error instanceof Error && error.name === 'AbortError') {
+        span?.setAttribute('ui.aborted', true);
+        span?.setStatus({ code: 2, message: 'cancelled' });
+        throw error;
+      }
+      
+      span?.setStatus({ code: 2, message: error instanceof Error ? error.message : 'unknown error' });
+      throw error;
     }
   }
 );
 ```
 
-**How the Trace Works Together:**
-The frontend span captures the user interaction and UI rendering performance, while the backend span tracks the actual LLM API interaction. The distributed trace shows the complete flow from user input to rendered response, enabling you to:
-
-- Analyze end-to-end response times and user experience
-- Track costs and token usage patterns
-- Optimize streaming performance and UI rendering
-- Monitor rate limits and queue times
-- Correlate user inputs with model performance
-
-## E-Commerce Transaction Flow
+Where to put this in your app:
+- In your search input component, triggered after the debounce timeout (see the wiring sketch below)
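+
+A minimal sketch of the debounce and cancellation wiring assumed above. `DEBOUNCE_MS` and `runSearch` are hypothetical names; `runSearch` wraps the instrumented fetch shown earlier, and the `AbortController` supplies the `controller.signal` it uses:
+
+```typescript
+// Inside your search component; useEffect and useRef come from 'react'
+const DEBOUNCE_MS = 250;
+const controllerRef = useRef<AbortController | null>(null);
+
+useEffect(() => {
+  if (!searchQuery) return;
+  const timer = setTimeout(() => {
+    // Cancel any in-flight request before starting a new one
+    controllerRef.current?.abort();
+    const controller = new AbortController();
+    controllerRef.current = controller;
+    runSearch(searchQuery, controller.signal);
+  }, DEBOUNCE_MS);
+  return () => clearTimeout(timer);
+}, [searchQuery]);
+```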
 
-**Challenge:** Understanding the complete purchase flow and identifying revenue-impacting issues across the entire stack.
+**Backend (Node.js + Express) — instrument search with meaningful attributes:**
 
-**Solution:** Track the full checkout process from cart interaction to order fulfillment.
-
-**Frontend Instrumentation:**
-
-```javascript
-// Client-side checkout process
-Sentry.startSpan(
-  {
-    name: "Checkout UI Flow",
-    op: "commerce.checkout.client",
-    attributes: {
-      // Cart interaction metrics
-      "cart.items_added": 3,
-      "cart.items_removed": 0,
-      "cart.update_count": 2,
-
-      // User interaction tracking
-      "ui.form_completion_time_ms": 45000,
-      "ui.payment_method_changes": 1,
-      "ui.address_validation_retries": 0,
+```typescript
+app.get('/api/search', async (req: Request, res: Response) => {
+  await Sentry.startSpan(
+    {
+      name: 'Search',
+      op: 'search',
     },
-  },
-  async () => {
-    // Client-side checkout implementation
-  }
-);
+    async (span) => {
+      try {
+        const query = String(req.query.q || '');
+        const queryLength = query.length;
+        
+        // Check if request was aborted
+        req.on('close', () => {
+          if (!res.headersSent) {
+            span?.setStatus({ code: 2, message: 'cancelled' });
+            span?.setAttribute('request.aborted', true);
+          }
+        });
+        
+        if (!query) {
+          span?.setAttribute('results.count', 0);
+          span?.setAttribute('search.engine', 'elasticsearch');
+          return res.json({ results: [] });
+        }
+        
+        // Perform search
+        const startSearch = Date.now();
+        const results = await searchMovies(query);
+        const searchDuration = Date.now() - startSearch;
+        
+        // Set span attributes
+        span?.setAttribute('search.engine', 'elasticsearch');
+        span?.setAttribute('search.mode', queryLength < 3 ? 'prefix' : 'fuzzy');
+        span?.setAttribute('results.count', results.length);
+        span?.setAttribute('query.length', queryLength);
+        
+        // Track slow searches
+        if (searchDuration > 500) {
+          span?.setAttribute('performance.slow', true);
+          span?.setAttribute('search.duration_ms', searchDuration);
+        }
+        
+        return res.json({ results });
+      } catch (error: any) {
+        span?.setStatus({ code: 2, message: error?.message || 'error' });
+        span?.setAttribute('error.type', error?.constructor?.name || 'Error');
+        
+        Sentry.captureException(error);
+        if (!res.headersSent) {
+          return res.status(500).json({ error: 'Search failed' });
+        }
+      }
+    }
+  );
+});
 ```
 
-**Backend Instrumentation:**
-
-```javascript
-// Server-side order processing
-Sentry.startSpan(
-  {
-    name: "Order Processing",
-    op: "commerce.order.server",
-    attributes: {
-      // Order details
-      "order.id": "ord_123456789",
-      "order.total_amount": 159.99,
-      "order.currency": "USD",
-      "order.items": ["SKU123", "SKU456", "SKU789"],
-
-      // Payment processing
-      "payment.provider": "stripe",
-      "payment.method": "credit_card",
-      "payment.processing_time_ms": 1200,
-
-      // Inventory checks
-      "inventory.all_available": true,
-
-      // Fulfillment
-      "fulfillment.warehouse": "WEST-01",
-      "fulfillment.shipping_method": "express",
-      "fulfillment.estimated_delivery": "2024-03-20",
+**How the trace works together:**
+- Client span starts when debounced search triggers → tracks the full user-perceived latency.
+- Aborted requests are marked with `ui.aborted=true` and short duration, showing wasted work.
+- Server span shows search performance characteristics: mode (prefix vs fuzzy), results count, and slow queries.
+
+What to monitor with span metrics:
+- p95 duration of `op:search` grouped by `query.length`.
+- Characteristics of slow searches via `op:search performance.slow:true`.
+- Compare prefix vs fuzzy via `op:search` grouped by `search.mode`.
+- Cancellation rate via `op:http.client ui.aborted:true`.
+- Empty result rate via `op:http.client results.has_results:false`.
+- Distribution of `http.response_size` for payload optimization.
+- Error rate for `op:search` filtered by `status:error`.
+- Backend abandonment via `op:search request.aborted:true`.
+
+## Manual LLM Instrumentation (Custom AI Agent + Tool Calls)
+
+Example Repository: _Coming soon - sample repository in development_
+
+**Challenge:** You're building a custom AI agent that uses a proprietary LLM API (not OpenAI/Anthropic), performs multi-step reasoning with tool calls, and needs comprehensive monitoring to track token usage, tool performance, and agent effectiveness across the entire conversation flow.
+
+**Solution:** Manually instrument each component of the AI pipeline using Sentry's AI agent span conventions. Create spans for agent invocation, LLM calls, tool executions, and handoffs between agents, with rich attributes for monitoring costs, performance, and business metrics.
+
+![Custom LLM monitoring trace waterfall](./img/custom-llm-monitoring.png)
+
+**Frontend (React) — Instrument AI Chat Interface:**
+
+```typescript
+import { useState, useEffect } from 'react';
+import * as Sentry from '@sentry/react';
+import { SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN } from '@sentry/core';
+
+// In your AI chat component
+export default function CustomerSupportChat() {
+  const [conversationHistory, setConversationHistory] = useState<{ role: string; content: string }[]>([]);
+  const [sessionId, setSessionId] = useState('');
+  const [isLoading, setIsLoading] = useState(false);
+  const [error, setError] = useState<string | null>(null);
+  
+  // Generate sessionId on client-side only to avoid hydration mismatch
+  useEffect(() => {
+    setSessionId(`session_${Date.now()}`);
+  }, []);
+
+  const handleSendMessage = async (userMessage: string) => {
+  await Sentry.startSpan(
+    {
+      name: 'invoke_agent Customer Support Agent',
+      op: 'gen_ai.invoke_agent',
+      attributes: {
+        'gen_ai.operation.name': 'invoke_agent',
+        'gen_ai.agent.name': 'Customer Support Agent',
+        'gen_ai.system': 'custom-llm',
+        'gen_ai.request.model': 'custom-model-v2',
+        'gen_ai.request.messages': JSON.stringify([
+          { role: 'system', content: 'You are a helpful customer support agent.' },
+          ...conversationHistory,
+          { role: 'user', content: userMessage }
+        ]),
+        [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'manual.ai.custom-llm',
+        'conversation.turn': conversationHistory.length + 1,
+        'conversation.session_id': sessionId,
+      },
     },
-  },
-  async () => {
-    // Server-side order processing
-  }
-);
+    async (agentSpan) => {
+      try {
+        setIsLoading(true);
+        
+        // Call your backend AI agent endpoint
+        const response = await fetch('/api/ai/chat', {
+          method: 'POST',
+          headers: { 'Content-Type': 'application/json' },
+          body: JSON.stringify({
+            message: userMessage,
+            sessionId: sessionId,
+            conversationHistory: conversationHistory
+          })
+        });
+
+        if (!response.ok) {
+          throw new Error(`AI request failed: ${response.status}`);
+        }
+
+        const aiResponse = await response.json();
+        
+        // Set response attributes
+        agentSpan.setAttribute('gen_ai.response.text', aiResponse.message);
+        agentSpan.setAttribute('gen_ai.response.id', aiResponse.responseId);
+        agentSpan.setAttribute('gen_ai.response.model', 'custom-model-v2');
+        agentSpan.setAttribute('gen_ai.usage.total_tokens', aiResponse.totalTokens);
+        agentSpan.setAttribute('conversation.tools_used', aiResponse.toolsUsed?.length || 0);
+        agentSpan.setAttribute('conversation.resolution_status', aiResponse.resolutionStatus);
+        
+        // Update UI with response
+        setConversationHistory(prev => [
+          ...prev,
+          { role: 'user', content: userMessage },
+          { role: 'assistant', content: aiResponse.message }
+        ]);
+        
+        Sentry.logger.info(Sentry.logger.fmt`AI agent completed conversation turn ${conversationHistory.length + 1}`);
+        
+      } catch (error) {
+        agentSpan.setStatus({ code: 2, message: 'internal_error' });
+        agentSpan.setAttribute('error.type', error instanceof Error ? error.constructor.name : 'UnknownError');
+        setError('Failed to get AI response. Please try again.');
+        Sentry.logger.error(Sentry.logger.fmt`AI agent failed: ${error instanceof Error ? error.message : 'Unknown error'}`);
+      } finally {
+        setIsLoading(false);
+      }
+    }
+  );
+  };
+
+  // ... render the chat UI
+}
 ```
 
-**How the Trace Works Together:**
-The frontend span tracks the user's checkout experience, while the backend span handles order processing and fulfillment. The distributed trace provides visibility into the entire purchase flow, allowing you to:
+Where to put this in your app:
+- In your chat message submit handler or AI conversation component
+- Auto-instrumentation will capture the fetch request; the explicit span adds AI-specific context
+- Consider adding user feedback collection to track conversation quality
+
+**Important:** Generate `sessionId` in `useEffect` to avoid hydration errors when using Server-Side Rendering (SSR). Using `Date.now()` or random values during component initialization will cause mismatches between server and client renders.
+
+**Backend — Custom LLM Integration with Tool Calls:**
+
+```typescript
+import { SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN } from '@sentry/core';
+
+// Express API route for custom AI agent
+app.post('/api/ai/chat', async (req: Request, res: Response) => {
+  const { message, sessionId, conversationHistory } = req.body;
+
+  // Main agent invocation span (matches frontend)
+  await Sentry.startSpan(
+    {
+      name: 'invoke_agent Customer Support Agent',
+      op: 'gen_ai.invoke_agent',
+      attributes: {
+        'gen_ai.operation.name': 'invoke_agent',
+        'gen_ai.agent.name': 'Customer Support Agent',
+        'gen_ai.system': 'custom-llm',
+        'gen_ai.request.model': 'custom-model-v2',
+        [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'manual.ai.custom-llm',
+        'conversation.session_id': sessionId,
+      },
+    },
+    async (agentSpan) => {
+      try {
+        const tools = [
+          { name: 'search_knowledge_base', description: 'Search company knowledge base for answers' }
+        ];
+        
+        agentSpan.setAttribute('gen_ai.request.available_tools', JSON.stringify(tools));
+        
+        let totalTokens = 0;
+        let toolsUsed: string[] = [];
+        let finalResponse = '';
+        let resolutionStatus = 'in_progress';
+        
+        // Step 1: Call custom LLM for initial reasoning
+        const llmResponse = await Sentry.startSpan(
+          {
+            name: 'chat custom-model-v2',
+            op: 'gen_ai.chat',
+            attributes: {
+              'gen_ai.operation.name': 'chat',
+              'gen_ai.system': 'custom-llm',
+              'gen_ai.request.model': 'custom-model-v2',
+              'gen_ai.request.messages': JSON.stringify([
+                { role: 'system', content: 'You are a customer support agent. Use tools when needed.' },
+                ...conversationHistory,
+                { role: 'user', content: message }
+              ]),
+              'gen_ai.request.temperature': 0.7,
+              'gen_ai.request.max_tokens': 500,
+              [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'manual.ai.custom-llm',
+            },
+          },
+          async (llmSpan) => {
+            const llmData = await callCustomLLM(message, conversationHistory);
+            
+            // Set LLM response attributes
+            llmSpan.setAttribute('gen_ai.response.text', llmData.choices[0].message.content || '');
+            llmSpan.setAttribute('gen_ai.response.id', llmData.id);
+            llmSpan.setAttribute('gen_ai.response.model', llmData.model);
+            llmSpan.setAttribute('gen_ai.usage.input_tokens', llmData.usage.prompt_tokens);
+            llmSpan.setAttribute('gen_ai.usage.output_tokens', llmData.usage.completion_tokens);
+            llmSpan.setAttribute('gen_ai.usage.total_tokens', llmData.usage.total_tokens);
+            
+            if (llmData.choices[0].message.tool_calls) {
+              llmSpan.setAttribute('gen_ai.response.tool_calls', JSON.stringify(llmData.choices[0].message.tool_calls));
+            }
+            
+            totalTokens += llmData.usage.total_tokens;
+            return llmData;
+          }
+        );
+        
+        // Step 2: Execute tool calls if present
+        if (llmResponse.choices[0].message.tool_calls) {
+          for (const toolCall of llmResponse.choices[0].message.tool_calls) {
+            await Sentry.startSpan(
+              {
+                name: `execute_tool ${toolCall.function.name}`,
+                op: 'gen_ai.execute_tool',
+                attributes: {
+                  'gen_ai.operation.name': 'execute_tool',
+                  'gen_ai.tool.name': toolCall.function.name,
+                  'gen_ai.tool.type': 'function',
+                  'gen_ai.tool.input': toolCall.function.arguments,
+                  [SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN]: 'manual.ai.custom-llm',
+                },
+              },
+              async (toolSpan) => {
+                // Parse the tool arguments once and reuse the query
+                const { query } = JSON.parse(toolCall.function.arguments);
+                const toolOutput = await searchKnowledgeBase(query);
+
+                toolSpan.setAttribute('gen_ai.tool.output', toolOutput);
+                toolSpan.setAttribute('search.query', query);
+                toolsUsed.push(toolCall.function.name);
+              }
+            );
+          }
+        }
+        
+        // Set final agent attributes and echo them back to the frontend
+        finalResponse = llmResponse.choices[0].message.content || '';
+        resolutionStatus = toolsUsed.length > 0 ? 'resolved' : 'answered';
+        agentSpan.setAttribute('gen_ai.response.text', finalResponse);
+        agentSpan.setAttribute('gen_ai.usage.total_tokens', totalTokens);
+        agentSpan.setAttribute('conversation.tools_used', JSON.stringify(toolsUsed));
+        agentSpan.setAttribute('conversation.resolution_status', resolutionStatus);
+
+        res.json({
+          message: finalResponse,
+          responseId: llmResponse.id,
+          totalTokens,
+          toolsUsed,
+          resolutionStatus,
+        });
+        
+      } catch (error) {
+        agentSpan.setStatus({ code: 2, message: 'agent_invocation_failed' });
+        agentSpan.setAttribute('error.type', error instanceof Error ? error.constructor.name : 'UnknownError');
+        Sentry.captureException(error);
+        res.status(500).json({ error: 'AI agent processing failed' });
+      }
+    }
+  );
+});
+
+// Helper functions for tool execution
+async function searchKnowledgeBase(query: string): Promise<string> {
+  // Search company knowledge base - returns relevant policy info
+  const results = [
+    "Our return policy allows returns within 30 days of purchase.",
+    "Refunds are processed within 5-7 business days after we receive the item.",
+    "Items must be in original condition with tags attached.",
+    "Free return shipping is provided for defective items."
+  ];
+  return results.join('\n');
+}
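+
+// Hypothetical stub for the proprietary LLM call used above; replace with
+// your provider's client. The shape mirrors the fields the spans read.
+async function callCustomLLM(message: string, history: { role: string; content: string }[]): Promise<any> {
+  return {
+    id: `resp_${Date.now()}`,
+    model: 'custom-model-v2',
+    choices: [{
+      message: {
+        content: 'Here is what I found about our return policy.',
+        tool_calls: [{
+          function: {
+            name: 'search_knowledge_base',
+            arguments: JSON.stringify({ query: message }),
+          },
+        }],
+      },
+    }],
+    usage: { prompt_tokens: 120, completion_tokens: 80, total_tokens: 200 },
+  };
+}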
+
+
+async function synthesizeResponse(llmResponse: any, toolsUsed: string[]): Promise<{ message: string; usage: { total_tokens: number } }> {
+  // Make final LLM call to synthesize tool results into response
+  return {
+    message: "Based on the information I found, here's your answer...",
+    usage: { total_tokens: 150 }
+  };
+}
+```
 
-- Analyze checkout funnel performance and drop-off points
-- Track payment processing success rates and timing
-- Monitor inventory availability impact on conversions
-- Measure end-to-end order completion times
-- Identify friction points in the user experience
+**How the trace works together:**
+- Frontend span (`gen_ai.invoke_agent`) captures the entire user interaction from message to response.
+- Backend agent span continues the trace with the same operation and agent name for correlation.
+- LLM spans (`gen_ai.chat`) track individual model calls with token usage and performance.
+- Tool execution spans (`gen_ai.execute_tool`) monitor each tool call with input/output and timing.
+- Rich attributes enable monitoring of conversation quality, cost, and business outcomes.
+
+What to monitor with span metrics:
+- p95 duration of `op:gen_ai.invoke_agent` grouped by `conversation.resolution_status`.
+- Token usage trends via `gen_ai.usage.total_tokens` by `gen_ai.request.model`.
+- Tool usage patterns via `op:gen_ai.execute_tool` grouped by `gen_ai.tool.name`.
+- Cost analysis via a derived attribute such as `conversation.cost_estimate_usd` (set it when you compute cost from token usage), aggregated by time period.
+- Agent effectiveness via `conversation.resolution_status` distribution.
+- Error rates for each component: `op:gen_ai.chat`, `op:gen_ai.execute_tool`, `op:gen_ai.invoke_agent`.
diff --git a/docs/platforms/javascript/common/tracing/span-metrics/img/custom-llm-monitoring.png b/docs/platforms/javascript/common/tracing/span-metrics/img/custom-llm-monitoring.png
new file mode 100644
index 00000000000000..53fcdfaadd81e2
Binary files /dev/null and b/docs/platforms/javascript/common/tracing/span-metrics/img/custom-llm-monitoring.png differ