Add rate limiting functionality using @upstash/ratelimit and integrate with TRPC procedures #118
Conversation
Walkthrough

Adds per-service Upstash rate limiting with tRPC middleware; extracts the client IP in createContext and surfaces rate-limit metadata in errors; the route handler maps TOO_MANY_REQUESTS to 429 with Retry-After and rate-limit headers; domain endpoints are annotated with service meta and guarded by the middleware; the client shows rate-limit toasts; analytics shutdowns are guarded.
Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Client
  participant Route as "tRPC Route\n(app/api/trpc/[trpc]/route.ts)"
  participant Context as "createContext\n(trpc/init.ts)"
  participant Handler as "Endpoint\n(server/routers/domain.ts)"
  participant RateLimit as "Rate Limiter\n(server/ratelimit.ts)"
  participant ClientUI as "Client UI\n(trpc/client.tsx)"
  Client->>Route: HTTP request
  Route->>Context: createContext({ req }) → extracts IP
  Route->>Handler: invoke endpoint (meta.service present)
  Handler->>RateLimit: rateLimitMiddleware -> assertRateLimit(service, ip)
  alt allowed
    RateLimit->>Handler: allowed (limit, remaining, reset)
    Handler->>Route: result
    Route->>Client: 200 OK
  else exceeded
    RateLimit-->>Handler: throws TRPCError(TOO_MANY_REQUESTS){ retryAfter,... }
    Handler-->>Route: error
    Route->>Client: 429 + Retry-After (+ X-RateLimit-* headers via responseMeta)
    Client->>ClientUI: errorToastLink sees TOO_MANY_REQUESTS
    ClientUI->>Client: show rate-limit toast
  end
```
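For orientation, a minimal sketch of the middleware wiring described above; names like assertRateLimit mirror the diffs quoted later in this thread, and exact signatures in the PR may differ:

```ts
import { initTRPC, TRPCError } from "@trpc/server";

type Context = { ip: string | null };

// meta carries the per-endpoint service name that the limiter keys on
const t = initTRPC.context<Context>().meta<{ service?: string }>().create();

// stand-in for the Upstash-backed check in server/ratelimit.ts
async function assertRateLimit(service: string, ip: string): Promise<void> {
  const allowed = true; // the real version calls the service's Ratelimit#limit with the IP
  if (!allowed) {
    throw new TRPCError({
      code: "TOO_MANY_REQUESTS",
      message: `Rate limited: ${service}`,
    });
  }
}

export const rateLimitMiddleware = t.middleware(async ({ ctx, meta, next }) => {
  const service = meta?.service;
  // graceful degradation: no service tag or no resolvable IP -> skip limiting
  if (!service || !ctx.ip) return next();
  await assertRateLimit(service, ctx.ip);
  return next();
});
```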
Actionable comments posted: 9
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
- pnpm-lock.yaml is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (6)
- app/api/trpc/[trpc]/route.ts (1 hunks)
- package.json (1 hunks)
- server/ratelimit.ts (1 hunks)
- server/routers/domain.ts (4 hunks)
- trpc/client.tsx (3 hunks)
- trpc/init.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
package.json
📄 CodeRabbit inference engine (AGENTS.md)
Require Node.js >= 22 via package.json engines
Files:
package.json
**/*.{ts,tsx,css,json}
📄 CodeRabbit inference engine (AGENTS.md)
Use 2-space indentation across the codebase
Files:
package.json
app/api/trpc/[trpc]/route.ts
server/ratelimit.ts
server/routers/domain.ts
trpc/client.tsx
trpc/init.ts
app/{page.tsx,api/**}
📄 CodeRabbit inference engine (AGENTS.md)
Keep app/page.tsx and all app/api/* handlers thin; delegate logic to server/ or lib/
Files:
app/api/trpc/[trpc]/route.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{ts,tsx}
: TypeScript only; keep modules small, pure, and roughly ≤300 LOC
Consolidate imports using @/... path aliases
Files:
app/api/trpc/[trpc]/route.ts
server/ratelimit.ts
server/routers/domain.ts
trpc/client.tsx
trpc/init.ts
server/**
📄 CodeRabbit inference engine (AGENTS.md)
server/ hosts backend integrations and tRPC routers; isolate DNS, RDAP/WHOIS, TLS, and header probing services here
Files:
server/ratelimit.ts
server/routers/domain.ts
🧠 Learnings (1)
📚 Learning: 2025-10-20T02:11:42.179Z
Learnt from: CR
PR: jakejarvis/domainstack.io#0
File: AGENTS.md:0-0
Timestamp: 2025-10-20T02:11:42.179Z
Learning: Applies to server/trpc.ts : Review server/trpc.ts when extending procedures to ensure auth and context remain intact
Applied to files:
trpc/init.ts
🧬 Code graph analysis (3)
- app/api/trpc/[trpc]/route.ts (1) → trpc/init.ts (1): createContext (5-8)
- server/ratelimit.ts (1) → trpc/init.ts (1): t (12-17)
- server/routers/domain.ts (2) → trpc/init.ts (1): publicProcedure (21-21); server/ratelimit.ts (1): rateLimitMiddleware (61-66)
🔇 Additional comments (7)
package.json (1)
43-45: Addition of @upstash/ratelimit looks good. No issues; aligns with the new server/ratelimit.ts.
trpc/client.tsx (1)
11-12: Toast addition looks good and is deduped via id. Nice UX touch; no concerns.
server/routers/domain.ts (5)
26-32: Input validation pattern looks good. The transform → refine sequence correctly normalizes the input before validating registrability. This ensures downstream handlers receive clean, validated domains.
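As a sketch of that transform → refine shape (the normalization and regex below are illustrative stand-ins, not the PR's actual registrability check):

```ts
import { z } from "zod";

// Normalize first, then validate the normalized value.
const DomainInputSchema = z.object({
  domain: z
    .string()
    .transform((d) => d.trim().toLowerCase().replace(/\.$/, "")) // normalize
    .refine((d) => /^[a-z0-9-]+(\.[a-z0-9-]+)+$/.test(d), {
      message: "Enter a registrable domain, e.g. example.com",
    }),
});

DomainInputSchema.parse({ domain: "Example.COM." }); // -> { domain: "example.com" }
```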
35-37: Correct middleware application order. The .meta() → .use(rateLimitMiddleware) sequence is correctly ordered, ensuring the middleware can access service metadata for rate limiting. Also applies to: 41-43, 47-49, 61-63, 67-69, 73-75, 79-81, 85-87, 90-92.
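A minimal sketch of that ordering on a single endpoint, reusing names that appear elsewhere in this PR (publicProcedure, rateLimitMiddleware, DomainInputSchema, resolveAll):

```ts
// .meta() first so rateLimitMiddleware can read meta.service when it runs.
const dns = publicProcedure
  .meta({ service: "dns" })
  .use(rateLimitMiddleware)
  .input(DomainInputSchema)
  .query(({ input }) => resolveAll(input.domain));
```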
53-59: Verify the fire-and-forget pattern is appropriate for domain inspection events. The inngest trigger uses a fire-and-forget pattern (void operator) with no error handling. If the event fails to send, it is silently ignored and won't affect the DNS query response.
Confirm that:
- It's acceptable to lose domain inspection events if inngest is unavailable
- No critical downstream logic depends on these events being delivered
- Monitoring/alerting exists for failed inngest sends
Consider adding error logging for debugging:
```diff
  const result = await resolveAll(input.domain);
- // fire-and-forget background fanout if needed
- void inngest.send({
+ // fire-and-forget background fanout if needed
+ inngest.send({
    name: "domain/inspected",
    data: { domain: input.domain },
- });
+ }).catch((err) => console.error("Failed to send domain/inspected event:", err));
  return result;
```
37-37: ctx.ip is properly populated via @vercel/functions.
The tRPC context initialization in trpc/init.ts correctly extracts the client IP using the ipAddress() function from @vercel/functions, which handles extracting the IP from request headers. The context is created with the Request object passed from the route handler, so IP extraction works as expected. The middleware's fallback behavior (skipping rate limiting when ctx.ip is unavailable) is intentional and graceful.
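A sketch of that context creation, assuming only the @vercel/functions helper (the expression mirrors the line quoted in the verification; later revisions add header fallbacks):

```ts
import { ipAddress } from "@vercel/functions";

// Extract the client IP once per request and expose it on the tRPC context.
export const createContext = async (opts?: { req?: Request }) => {
  const ip = opts?.req ? (ipAddress(opts.req) ?? null) : null;
  return { ip } as const;
};

export type Context = Awaited<ReturnType<typeof createContext>>;
```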
35-94: Remove the authentication claim; flag missing rate limit configurations for 4 endpoints.
The original review mischaracterizes the change. loggedProcedure provided only logging/observability, not authentication, so this is a logging → rate-limiting change, not an auth removal, and it is intentional per the PR objective. However, a legitimate concern exists: the rate limit middleware skips services not listed in SERVICE_LIMITS. Four endpoints lack explicit configurations and will bypass rate limiting entirely:
- pricing
- hosting
- seo
- favicon
Add these services to SERVICE_LIMITS in server/ratelimit.ts with appropriate limits for these potentially expensive operations (favicon blob fetch, SEO checks, hosting detection, pricing lookups). Likely an incorrect or invalid review comment otherwise.
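A sketch of the suggested additions; the points/window values below are placeholders to tune against real operation cost, not numbers from the PR:

```ts
const SERVICE_LIMITS = {
  // ...existing entries (dns, headers, certs, registration, screenshot, ...)
  // hypothetical limits for the four uncovered endpoints:
  pricing: { points: 30, window: "1 m" },
  hosting: { points: 30, window: "1 m" },
  seo: { points: 20, window: "1 m" },
  favicon: { points: 60, window: "1 m" },
} as const;
```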
Actionable comments posted: 1
♻️ Duplicate comments (7)
app/api/trpc/[trpc]/route.ts (1)
12-37: Return standard RateLimit-* headers and add Vary; keep X-* for compatibility. Expose the IETF headers and Vary to guide caches; set Reset from the retry-after seconds.
```diff
- const headers: Record<string, string> = {
-   "Retry-After": String(retryAfter),
-   "Cache-Control": "no-cache, no-store",
- };
- if (typeof cause?.limit === "number")
-   headers["X-RateLimit-Limit"] = String(cause.limit);
- if (typeof cause?.remaining === "number")
-   headers["X-RateLimit-Remaining"] = String(cause.remaining);
+ const headers: Record<string, string> = {
+   "Retry-After": String(retryAfter),
+   "Cache-Control": "no-store",
+   // cache variance by client hints that affect limiting
+   Vary: "Origin, X-Forwarded-For, CF-Connecting-IP, X-Real-IP",
+   // draft standard headers; clients may also read X-RateLimit-*
+   "RateLimit-Reset": String(retryAfter),
+ };
+ if (typeof cause?.limit === "number") {
+   headers["RateLimit-Limit"] = String(cause.limit);
+   headers["X-RateLimit-Limit"] = String(cause.limit);
+ }
+ if (typeof cause?.remaining === "number") {
+   const remaining = Math.max(0, Number(cause.remaining));
+   headers["RateLimit-Remaining"] = String(remaining);
+   headers["X-RateLimit-Remaining"] = String(remaining);
+ }
```

trpc/client.tsx (2)
95-105: Optional: add day-level formatting for long waits. Improves UX for multi-day windows.
```diff
  const h = Math.floor(m / 60);
  const rm = m % 60;
- return rm ? `${h}h ${rm}m` : `${h}h`;
+ if (h < 24) return rm ? `${h}h ${rm}m` : `${h}h`;
+ const d = Math.floor(h / 24);
+ const rh = h % 24;
+ return rh ? `${d}d ${rh}h` : `${d}d`;
```
39-73: Bug: checking cause.code prevents the rate-limit toast. Use the error code from the TRPC error and read retry metadata from cause. The cause carries retryAfter/service but not code, so the toast never triggers.
```diff
  if (
    opts.direction === "down" &&
    opts.result instanceof TRPCClientError
  ) {
    const err = opts.result as TRPCClientError<AppRouter>;
-   const cause = err.cause as
-     | { code?: string; retryAfter?: number; service?: string }
-     | undefined;
-   if (cause?.code === "TOO_MANY_REQUESTS") {
+   const code =
+     (err as unknown as { data?: { code?: string } }).data?.code ??
+     (err as unknown as { code?: string }).code;
+   if (code === "TOO_MANY_REQUESTS") {
+     const cause = err.cause as
+       | { retryAfter?: number; service?: string }
+       | undefined;
      const retryAfterSec = Math.max(
        1,
-       Math.round(Number(cause.retryAfter ?? 1)),
+       Math.round(Number(cause?.retryAfter ?? 1)),
      );
-     const service = cause.service;
+     const service = cause?.service;
      const friendly = formatWait(retryAfterSec);
```

trpc/init.ts (1)
19-24: Narrow the meta typing to match usage (meta.service). Tighter typing catches typos at call sites and reduces casts.
```diff
 export const t = initTRPC
   .context<Context>()
-  .meta<Record<string, unknown>>()
+  .meta<{ service?: string }>()
   .create({
     transformer: superjson,
   });
```

server/ratelimit.ts (3)
23-35: Add env-scoped prefixes and an ephemeral cache to reduce collisions and cost. Prevents cross-env bucket sharing; cuts redundant round-trips.
```diff
-const limiters = Object.fromEntries(
+const envPrefix = process.env.VERCEL_ENV ?? process.env.NODE_ENV ?? "dev";
+const limiters = Object.fromEntries(
   Object.entries(SERVICE_LIMITS).map(([service, cfg]) => [
     service,
     new Ratelimit({
       redis,
       limiter: Ratelimit.slidingWindow(
         cfg.points,
         cfg.window as `${number} ${"s" | "m" | "h"}`,
       ),
-      analytics: true,
+      // lightweight local cache within a single process
+      ephemeralCache: new Map(),
+      // avoid mixing dev/prod/test buckets
+      prefix: `${envPrefix}:rl:${service}`,
+      analytics: true,
     }),
   ]),
 ) as Record<ServiceName, Ratelimit>;
```
65-70: Fail fast on unknown services in dev; avoid silent bypass. Catches meta typos early without breaking prod traffic.
```diff
-export const rateLimitMiddleware = t.middleware(async ({ ctx, next, meta }) => {
-  const service = (meta?.service ?? "") as ServiceName;
-  if (!service || !(service in SERVICE_LIMITS) || !ctx.ip) return next();
+export const rateLimitMiddleware = t.middleware(async ({ ctx, next, meta }) => {
+  const service = (meta?.service ?? "") as string;
+  if (!service || !ctx.ip) return next();
+  if (!(service in SERVICE_LIMITS)) {
+    if (process.env.NODE_ENV !== "production") {
+      throw new TRPCError({
+        code: "INTERNAL_SERVER_ERROR",
+        message: `Unknown rate-limit service: ${service}`,
+      });
+    }
+    return next();
+  }
   await assertRateLimit(service, ctx.ip);
   return next();
 });
```
47-56: Include reset in the error cause to enable a RateLimit-Reset header downstream. Allows route.ts to emit a precise reset value.
```diff
  throw new TRPCError({
    code: "TOO_MANY_REQUESTS",
    message: `Rate limit exceeded for ${service}. Try again in ${retryAfterSec}s.`,
    cause: {
      retryAfter: retryAfterSec,
      service,
      limit: res.limit,
      remaining: res.remaining,
+     reset: res.reset,
    },
  });
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (6)
- app/api/trpc/[trpc]/route.ts (1 hunks)
- lib/schemas/internal/storage.ts (1 hunks)
- server/ratelimit.ts (1 hunks)
- server/routers/domain.ts (4 hunks)
- trpc/client.tsx (3 hunks)
- trpc/init.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
server/**
📄 CodeRabbit inference engine (AGENTS.md)
server/ hosts backend integrations and tRPC routers; isolate DNS, RDAP/WHOIS, TLS, and header probing services here
Files:
server/ratelimit.ts
server/routers/domain.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{ts,tsx}
: TypeScript only; keep modules small, pure, and roughly ≤300 LOC
Consolidate imports using @/... path aliases
Files:
server/ratelimit.ts
lib/schemas/internal/storage.ts
trpc/client.tsx
server/routers/domain.ts
app/api/trpc/[trpc]/route.ts
trpc/init.ts
**/*.{ts,tsx,css,json}
📄 CodeRabbit inference engine (AGENTS.md)
Use 2-space indentation across the codebase
Files:
server/ratelimit.ts
lib/schemas/internal/storage.ts
trpc/client.tsx
server/routers/domain.ts
app/api/trpc/[trpc]/route.ts
trpc/init.ts
lib/**
📄 CodeRabbit inference engine (AGENTS.md)
lib/ holds domain utilities and caching (lib/cache); import these via @/... aliases
Files:
lib/schemas/internal/storage.ts
app/{page.tsx,api/**}
📄 CodeRabbit inference engine (AGENTS.md)
Keep app/page.tsx and all app/api/* handlers thin; delegate logic to server/ or lib/
Files:
app/api/trpc/[trpc]/route.ts
🧠 Learnings (1)
📚 Learning: 2025-10-20T02:11:42.179Z
Learnt from: CR
PR: jakejarvis/domainstack.io#0
File: AGENTS.md:0-0
Timestamp: 2025-10-20T02:11:42.179Z
Learning: Applies to server/trpc.ts : Review server/trpc.ts when extending procedures to ensure auth and context remain intact
Applied to files:
trpc/init.ts
🧬 Code graph analysis (4)
- server/ratelimit.ts (1) → trpc/init.ts (1): t (19-24)
- trpc/client.tsx (1) → server/routers/_app.ts (1): AppRouter (8-8)
- server/routers/domain.ts (4) → trpc/init.ts (1): publicProcedure (28-28); server/ratelimit.ts (1): rateLimitMiddleware (65-70); lib/schemas/internal/storage.ts (1): StorageUrlSchema (4-4); server/services/favicon.ts (1): getOrCreateFaviconBlobUrl (21-92)
- app/api/trpc/[trpc]/route.ts (1) → trpc/init.ts (1): createContext (5-15)
🔇 Additional comments (6)
lib/schemas/internal/storage.ts (1)
4-7: Schema looks good; confirm the null vs undefined contract. If callers may omit url entirely, consider nullish() instead of nullable() to accept both null and undefined.
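For illustration, the difference in zod:

```ts
import { z } from "zod";

const Nullable = z.object({ url: z.string().nullable() }); // url: string | null
const Nullish = z.object({ url: z.string().nullish() }); // url: string | null | undefined

Nullable.safeParse({ url: null }).success; // true
Nullable.safeParse({}).success; // false: url must be present
Nullish.safeParse({}).success; // true: url may be omitted entirely
```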
server/routers/domain.ts (5)
12-12: Imports are clean and properly used. The new imports for StorageUrlSchema, rateLimitMiddleware, and publicProcedure are all correctly utilized throughout the file. Also applies to: 15-15, 25-25.
27-33: Excellent centralization of domain input validation. The DomainInputSchema consolidates normalization and validation logic in a single reusable schema. The transform → refine pattern ensures all endpoints receive normalized, registrable domains.
36-97: Consistent pattern across all endpoints. All nine endpoints follow an identical, well-structured pattern:
- .meta({ service: "..." }) for rate-limit identification
- .use(rateLimitMiddleware) for enforcement
- .input(DomainInputSchema) for validation
- Explicit .output() schemas for type safety
This consistency improves maintainability and makes the rate-limiting strategy clear.
86-91: Past review comment resolved: output schemas added. The previous review requested explicit .output() schemas for the favicon and screenshot endpoints. Both now correctly declare StorageUrlSchema, which matches the { url: string | null } return type from the underlying service functions. Also applies to: 92-97.
36-97: Verify product requirements for removing authentication from domain endpoints. Service-name consistency and IP extraction verification completed:
- ✓ All 9 services in domain.ts match the SERVICE_LIMITS configuration
- ✓ IP extraction is robust: uses Vercel's ipAddress() function with fallbacks to the x-forwarded-for, x-real-ip, and cf-connecting-ip headers
- ✓ Service-specific rate limits are properly configured (e.g., dns: 60/min, screenshot: 3/min)
However, removing authentication from all endpoints is a major security/business decision. Please confirm:
- This change aligns with product requirements: are these domain lookup services intended to be fully public?
- Rate limiting alone provides sufficient protection for your use case (fallback: if ctx.ip is null, rate limiting is skipped and requests pass through unthrottled)
- You accept the operational risk of unauthenticated, rate-limited public endpoints
Actionable comments posted: 1
♻️ Duplicate comments (4)
trpc/client.tsx (1)
98-108: Optional: add day-level formatting for long waits. The current implementation formats up to hours, which can produce strings like "48h" or "72h 15m" for very long rate limits. Adding day-level formatting (as suggested in the past review) would improve readability. For reference, the past suggestion was to add logic like:
```ts
if (h < 24) return rm ? `${h}h ${rm}m` : `${h}h`;
const d = Math.floor(h / 24);
const rh = h % 24;
return rh ? `${d}d ${rh}h` : `${d}d`;
```

server/ratelimit.ts (3)
23-35: Add environment-aware prefixes and an ephemeral cache to reduce collisions and costs. Without environment-specific prefixes, different environments (dev/staging/prod) will share the same Redis rate-limit buckets if they use the same Redis instance, causing incorrect limit enforcement across environments. Additionally, enabling ephemeralCache would reduce redundant Redis calls within a single process. Apply the suggestion from the past review:
```diff
+const envPrefix = process.env.VERCEL_ENV ?? process.env.NODE_ENV ?? "dev";
 const limiters = Object.fromEntries(
   Object.entries(SERVICE_LIMITS).map(([service, cfg]) => [
     service,
     new Ratelimit({
       redis,
       limiter: Ratelimit.slidingWindow(
         cfg.points,
         cfg.window as `${number} ${"s" | "m" | "h"}`,
       ),
+      ephemeralCache: new Map(),
       analytics: true,
+      prefix: `${envPrefix}:rl:${service}`,
     }),
   ]),
 ) as Record<ServiceName, Ratelimit>;
```
37-64: Remove the redundant service prefix from the rate-limit key. Line 38 creates a key with ${service}:${ip}, but since limiters[service] is already service-specific, the service prefix is redundant; using just the IP as the key would be cleaner. The error handling with cause (lines 50-56) correctly implements the past review suggestion and aligns with the client-side handling. Apply this simplification:
```diff
- const key = `${service}:${ip}`;
- const res = await limiters[service].limit(key);
+ const res = await limiters[service].limit(ip);
```
66-71: Fail fast in development for unknown services to catch configuration errors. The middleware silently skips enforcement when meta.service is missing or invalid (line 68), which could hide typos or misconfiguration during development. This creates a false sense of security. Implement the past review suggestion to fail fast in non-production environments:
```diff
 export const rateLimitMiddleware = t.middleware(async ({ ctx, next, meta }) => {
-  const service = (meta?.service ?? "") as ServiceName;
-  if (!service || !(service in SERVICE_LIMITS) || !ctx.ip) return next();
+  const service = (meta?.service ?? "") as string;
+  if (!service || !ctx.ip) return next();
+  if (!(service in SERVICE_LIMITS)) {
+    if (process.env.NODE_ENV !== "production") {
+      throw new TRPCError({
+        code: "INTERNAL_SERVER_ERROR",
+        message: `Unknown rate-limit service: ${service}`,
+      });
+    }
+    return next();
+  }
   await assertRateLimit(service, ctx.ip);
   return next();
 });
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (2)
- server/ratelimit.ts (1 hunks)
- trpc/client.tsx (3 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{ts,tsx}
: TypeScript only; keep modules small, pure, and roughly ≤300 LOC
Consolidate imports using @/... path aliases
Files:
trpc/client.tsx
server/ratelimit.ts
**/*.{ts,tsx,css,json}
📄 CodeRabbit inference engine (AGENTS.md)
Use 2-space indentation across the codebase
Files:
trpc/client.tsx
server/ratelimit.ts
server/**
📄 CodeRabbit inference engine (AGENTS.md)
server/ hosts backend integrations and tRPC routers; isolate DNS, RDAP/WHOIS, TLS, and header probing services here
Files:
server/ratelimit.ts
🧬 Code graph analysis (2)
- trpc/client.tsx (1) → server/routers/_app.ts (1): AppRouter (8-8)
- server/ratelimit.ts (1) → trpc/init.ts (1): t (19-24)
🔇 Additional comments (3)
trpc/client.tsx (1)
9-13: LGTM: Clean imports for the rate-limit UX. The new imports (TRPCClientError, Siren icon, toast) are directly used in the rate-limit error handling flow and appropriate for the feature.
server/ratelimit.ts (2)
1-8: LGTM: Proper imports and server-only directive. The imports are clean and the "server-only" directive correctly prevents client-side usage.
9-21: LGTM: Well-structured service limits with sensible defaults. The per-service limits are appropriate (lower for expensive operations like screenshot/registration), and the const assertion plus derived ServiceName type ensure type safety.
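The pattern being praised, in brief (the two entries echo the dns 60/min and screenshot 3/min limits cited elsewhere in this review):

```ts
// `as const` preserves literal keys; keyof typeof derives the service-name union.
const SERVICE_LIMITS = {
  dns: { points: 60, window: "1 m" },
  screenshot: { points: 3, window: "1 m" },
} as const;

type ServiceName = keyof typeof SERVICE_LIMITS; // "dns" | "screenshot"

function limitFor(service: ServiceName) {
  return SERVICE_LIMITS[service]; // fully typed lookup, no string widening
}
```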
Actionable comments posted: 2
♻️ Duplicate comments (6)
app/api/trpc/[trpc]/route.ts (1)
12-34: Return standard RateLimit headers; read from error.cause with a data fallback; add Vary. Use err.cause as the source of truth, include Reset, and prevent cache artifacts. This aligns with earlier feedback.

```diff
- responseMeta: ({ errors }) => {
-   const err = errors.find((e) => e.code === "TOO_MANY_REQUESTS");
-   if (!err) return {};
-
-   // Prefer formatted data from errorFormatter for consistent headers
-   const data = (
-     err as {
-       data?: { retryAfter?: number; limit?: number; remaining?: number };
-     }
-   ).data;
-   const retryAfter = Math.max(1, Math.round(Number(data?.retryAfter ?? 1)));
-   const headers: Record<string, string> = {
-     "Retry-After": String(retryAfter),
-     "Cache-Control": "no-cache, no-store",
-   };
-   if (typeof data?.limit === "number")
-     headers["X-RateLimit-Limit"] = String(data.limit);
-   if (typeof data?.remaining === "number")
-     headers["X-RateLimit-Remaining"] = String(data.remaining);
-
-   return { headers, status: 429 };
- },
+ responseMeta: ({ errors }) => {
+   const err = errors.find((e) => e.code === "TOO_MANY_REQUESTS");
+   if (!err) return {};
+
+   const cause = (err as unknown as {
+     cause?: { retryAfter?: number; limit?: number; remaining?: number; reset?: number };
+   }).cause;
+   const data = (err as unknown as {
+     data?: { retryAfter?: number; limit?: number; remaining?: number };
+   }).data;
+
+   const retryAfter = Math.max(1, Math.ceil(Number(cause?.retryAfter ?? data?.retryAfter ?? 1)));
+   const limit = cause?.limit ?? data?.limit;
+   const remaining = cause?.remaining ?? data?.remaining;
+   const resetMs = typeof cause?.reset === "number" ? cause.reset : undefined;
+   const resetSec = resetMs ? Math.max(1, Math.ceil((resetMs - Date.now()) / 1000)) : retryAfter;
+
+   const headers: Record<string, string> = {
+     "Retry-After": String(retryAfter),
+     "RateLimit-Limit": limit != null ? String(limit) : "",
+     "RateLimit-Remaining": String(Math.max(0, Number(remaining ?? 0))),
+     "RateLimit-Reset": String(resetSec),
+     "Cache-Control": "no-store",
+     "Vary": "Origin, X-Forwarded-For, CF-Connecting-IP, X-Real-IP",
+   };
+   return { headers, status: 429 };
+ },
```

server/ratelimit.ts (2)
23-36: Namespace limiter keys per environment and enable ephemeralCache. Prevents bucket collisions across environments and cuts upstream round-trips. Mirrors the prior suggestion.
```diff
-const limiters = Object.fromEntries(
+const envPrefix = (process.env.VERCEL_ENV ?? process.env.NODE_ENV ?? "dev").toLowerCase();
+const limiters = Object.fromEntries(
   Object.entries(SERVICE_LIMITS).map(([service, cfg]) => [
     service,
     new Ratelimit({
       redis,
       limiter: Ratelimit.slidingWindow(
         cfg.points,
         cfg.window as `${number} ${"s" | "m" | "h"}`,
       ),
-      prefix: `@upstash/ratelimit:${service}`,
-      analytics: true,
+      // avoid cross-env collisions when Redis is shared
+      prefix: `${envPrefix}:rl:${service}`,
+      // reduce duplicate upstream calls within this process
+      ephemeralCache: new Map(),
+      analytics: true,
     }),
   ]),
 ) as Record<ServiceName, Ratelimit>;
```
66-71: Fail fast on unknown meta.service in dev; avoid the premature cast. Catching typos early prevents silent bypass of limits. This mirrors earlier feedback and pairs with narrowing the t.meta typing in trpc/init.ts.

```diff
-export const rateLimitMiddleware = t.middleware(async ({ ctx, next, meta }) => {
-  const service = (meta?.service ?? "") as ServiceName;
-  if (!service || !(service in SERVICE_LIMITS) || !ctx.ip) return next();
-  await assertRateLimit(service, ctx.ip);
-  return next();
-});
+export const rateLimitMiddleware = t.middleware(async ({ ctx, next, meta }) => {
+  const name = (meta?.service ?? "") as string;
+  if (!name || !ctx.ip) return next();
+  if (!(name in SERVICE_LIMITS)) {
+    if (process.env.NODE_ENV !== "production") {
+      throw new TRPCError({
+        code: "INTERNAL_SERVER_ERROR",
+        message: `Unknown rate-limit service: ${name}`,
+      });
+    }
+    return next();
+  }
+  await assertRateLimit(name as ServiceName, ctx.ip);
+  return next();
+});
```

trpc/client.tsx (1)
94-104: Add day-level formatting for long waits. Small UX polish; previously suggested.

```diff
 function formatWait(seconds: number): string {
   if (!Number.isFinite(seconds) || seconds <= 1) return "a moment";
   const s = Math.round(seconds);
   const m = Math.floor(s / 60);
   const sec = s % 60;
   if (m <= 0) return `${sec}s`;
-  if (m < 60) return sec ? `${m}m ${sec}s` : `${m}m`;
+  if (m < 60) return sec ? `${m}m ${sec}s` : `${m}m`;
   const h = Math.floor(m / 60);
   const rm = m % 60;
-  return rm ? `${h}h ${rm}m` : `${h}h`;
+  if (h < 24) return rm ? `${h}h ${rm}m` : `${h}h`;
+  const d = Math.floor(h / 24);
+  const rh = h % 24;
+  return rh ? `${d}d ${rh}h` : `${d}d`;
 }
```

trpc/init.ts (2)
5-14: Harden IP extraction: strip the optional :port and keep robust fallbacks. Prevents malformed IPs from headers like XFF ("1.2.3.4:12345"). Based on learnings.

```diff
 export const createContext = async (opts?: { req?: Request }) => {
   const req = opts?.req;
   const ip = req
     ? (ipAddress(req) ??
-      req.headers.get("x-forwarded-for")?.split(",")[0]?.trim() ??
-      req.headers.get("x-real-ip") ??
-      req.headers.get("cf-connecting-ip") ??
+      req.headers
+        .get("x-forwarded-for")
+        ?.split(",")[0]
+        ?.trim()
+        ?.replace(/:\d+$/, "") ??
+      req.headers.get("x-real-ip")?.replace(/:\d+$/, "") ??
+      req.headers.get("cf-connecting-ip")?.replace(/:\d+$/, "") ??
       null)
     : null;
   return { ip, req } as const;
 };
```
19-51: Narrow t.meta to the shape you use; propagate reset for full headers. Typing meta as { service?: ServiceName } eliminates casts and catches typos; including reset helps build RateLimit-Reset later.

```diff
-export const t = initTRPC
-  .context<Context>()
-  .meta<Record<string, unknown>>()
-  .create({
+export const t = initTRPC
+  .context<Context>()
+  .meta<{ service?: import("@/server/ratelimit").ServiceName }>()
+  .create({
     transformer: superjson,
     errorFormatter({ shape, error }) {
       const cause = (
         error as unknown as {
           cause?: {
             retryAfter?: number;
             service?: string;
             limit?: number;
             remaining?: number;
+            reset?: number;
           };
         }
       ).cause;
       return {
         ...shape,
         data: {
           ...shape.data,
           retryAfter:
             typeof cause?.retryAfter === "number" ? cause.retryAfter : undefined,
           service:
             typeof cause?.service === "string" ? cause.service : undefined,
           limit: typeof cause?.limit === "number" ? cause.limit : undefined,
           remaining:
             typeof cause?.remaining === "number" ? cause.remaining : undefined,
+          reset: typeof cause?.reset === "number" ? cause.reset : undefined,
         },
       };
     },
   });
```

Run to spot untyped or typoed meta.service across routers and ensure all values are valid services:

```bash
#!/bin/bash
# List all service meta usages
rg -nP --type=ts -C2 'meta\s*:\s*\{\s*service\s*:\s*([\x27"]).*?\1' server

# Show any string-literal services not in the allowed set
rg -nP --type=ts 'service\s*:\s*[\x27"]([^\x27"]+)[\x27"]' server | \
  gawk '{
    match($0, /service\s*:\s*\x27([^\x27]+)\x27|service\s*:\s*"([^"]+)"/, m);
    svc = m[1] m[2];
    if (svc && svc !~ /^(dns|headers|certs|registration|screenshot|favicon|seo|hosting|pricing)$/)
      print "Possible unknown service:", svc, "->", $0;
  }'
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (4)
- app/api/trpc/[trpc]/route.ts (1 hunks)
- server/ratelimit.ts (1 hunks)
- trpc/client.tsx (3 hunks)
- trpc/init.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
app/{page.tsx,api/**}
📄 CodeRabbit inference engine (AGENTS.md)
Keep app/page.tsx and all app/api/* handlers thin; delegate logic to server/ or lib/
Files:
app/api/trpc/[trpc]/route.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{ts,tsx}
: TypeScript only; keep modules small, pure, and roughly ≤300 LOC
Consolidate imports using @/... path aliases
Files:
app/api/trpc/[trpc]/route.ts
server/ratelimit.ts
trpc/init.ts
trpc/client.tsx
**/*.{ts,tsx,css,json}
📄 CodeRabbit inference engine (AGENTS.md)
Use 2-space indentation across the codebase
Files:
app/api/trpc/[trpc]/route.ts
server/ratelimit.ts
trpc/init.ts
trpc/client.tsx
server/**
📄 CodeRabbit inference engine (AGENTS.md)
server/ hosts backend integrations and tRPC routers; isolate DNS, RDAP/WHOIS, TLS, and header probing services here
Files:
server/ratelimit.ts
🧠 Learnings (1)
📚 Learning: 2025-10-20T02:11:42.179Z
Learnt from: CR
PR: jakejarvis/domainstack.io#0
File: AGENTS.md:0-0
Timestamp: 2025-10-20T02:11:42.179Z
Learning: Applies to server/trpc.ts : Review server/trpc.ts when extending procedures to ensure auth and context remain intact
Applied to files:
trpc/init.ts
🧬 Code graph analysis (3)
- app/api/trpc/[trpc]/route.ts (1) → trpc/init.ts (1): createContext (5-15)
- server/ratelimit.ts (1) → trpc/init.ts (1): t (19-51)
- trpc/client.tsx (1) → server/routers/_app.ts (1): AppRouter (8-8)
🔇 Additional comments (1)
server/ratelimit.ts (1)
38-58: Good: typed metadata via TRPCError.cause. Clean approach; avoids mutating e.data and keeps consumers simple.
Actionable comments posted: 1
♻️ Duplicate comments (2)
server/ratelimit.ts (2)
23-36: Consider adding an ephemeral cache and environment-aware prefixes. Two improvements to reduce Redis round-trips and prevent environment collisions:
- Ephemeral cache: Without a local cache, every rate-limit check requires a Redis call. The @upstash/ratelimit library supports an ephemeralCache option that can significantly reduce duplicate calls within the same process.
- Environment-aware prefixes: The current prefix @upstash/ratelimit:${service} doesn't include environment separation. If development, staging, and production share the same Redis instance, rate limits will collide across environments.
Apply this diff to add both improvements:

```diff
+const envPrefix = process.env.VERCEL_ENV ?? process.env.NODE_ENV ?? "dev";
+
 const limiters = Object.fromEntries(
   Object.entries(SERVICE_LIMITS).map(([service, cfg]) => [
     service,
     new Ratelimit({
       redis,
       limiter: Ratelimit.slidingWindow(
         cfg.points,
         cfg.window as `${number} ${"s" | "m" | "h"}`,
       ),
+      ephemeralCache: new Map(),
       analytics: true,
+      prefix: `${envPrefix}:rl:${service}`,
     }),
   ]),
 ) as Record<ServiceName, Ratelimit>;
```

Based on learnings (past review comments suggest these improvements).
70-75: Add a development-time guard for unknown services. The middleware silently skips enforcement when meta.service is invalid or missing (line 68), which could hide typos or misconfiguration during development. Apply this diff to add a development guard:

```diff
 export const rateLimitMiddleware = t.middleware(async ({ ctx, next, meta }) => {
-  const service = (meta?.service ?? "") as ServiceName;
-  if (!service || !(service in SERVICE_LIMITS) || !ctx.ip) return next();
+  const service = meta?.service as string | undefined;
+
+  if (!service || !ctx.ip) return next();
+
+  if (!(service in SERVICE_LIMITS)) {
+    if (process.env.NODE_ENV !== "production") {
+      throw new TRPCError({
+        code: "INTERNAL_SERVER_ERROR",
+        message: `Unknown rate-limit service: "${service}". Valid services: ${Object.keys(SERVICE_LIMITS).join(", ")}`,
+      });
+    }
+    return next();
+  }
+
   await assertRateLimit(service, ctx.ip);
   return next();
 });
```

This ensures typos in .meta({ service: "..." }) are caught during development rather than silently bypassing rate limits. Based on learnings (past review comments suggest this validation).
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (4)
- lib/analytics/server.ts (2 hunks)
- server/ratelimit.ts (1 hunks)
- trpc/client.tsx (3 hunks)
- trpc/error-toast-link.tsx (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{ts,tsx}
: TypeScript only; keep modules small, pure, and roughly ≤300 LOC
Consolidate imports using @/... path aliases
Files:
trpc/client.tsx
trpc/error-toast-link.tsx
server/ratelimit.ts
lib/analytics/server.ts
**/*.{ts,tsx,css,json}
📄 CodeRabbit inference engine (AGENTS.md)
Use 2-space indentation across the codebase
Files:
trpc/client.tsx
trpc/error-toast-link.tsx
server/ratelimit.ts
lib/analytics/server.ts
server/**
📄 CodeRabbit inference engine (AGENTS.md)
server/ hosts backend integrations and tRPC routers; isolate DNS, RDAP/WHOIS, TLS, and header probing services here
Files:
server/ratelimit.ts
lib/**
📄 CodeRabbit inference engine (AGENTS.md)
lib/ holds domain utilities and caching (lib/cache); import these via @/... aliases
Files:
lib/analytics/server.ts
🧬 Code graph analysis (2)
- trpc/client.tsx (1) → trpc/error-toast-link.tsx (1): errorToastLink (21-60)
- server/ratelimit.ts (1) → trpc/init.ts (1): t (19-51)
🔇 Additional comments (8)
lib/analytics/server.ts (2)
68-72: LGTM! Appropriate guarded background flush pattern. The optional chaining and try/catch wrapper correctly handle environments where waitUntil may be unavailable or fail. Swallowing errors in the catch block is appropriate here since background analytics flush failures should not block the main response flow.
92-96: LGTM! Consistent error handling for exception capture. The same guarded pattern is correctly applied to the exception capture path, ensuring consistent behavior across both analytics functions.
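A sketch of the guarded pattern described, assuming Vercel's waitUntil; the helper name flushInBackground is illustrative, not from the PR:

```ts
import { waitUntil } from "@vercel/functions";

function flushInBackground(promise: Promise<unknown>) {
  try {
    // optional chaining guards runtimes where waitUntil is unavailable
    waitUntil?.(promise.catch(() => {}));
  } catch {
    // swallow: background analytics failures must not block the response
  }
}
```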
trpc/error-toast-link.tsx (2)
9-19: LGTM! Time formatting logic is clear and correct. The formatWait function handles edge cases well with the non-finite check and provides progressively detailed formatting for different time scales. The existing formatting (seconds → minutes → hours) is sufficient for typical rate-limit scenarios.
21-60: LGTM! The error toast link correctly handles rate-limit errors. The link implementation properly:
- Listens for TRPCClientError with the TOO_MANY_REQUESTS code
- Extracts and formats retry-after timing
- Shows user-friendly toasts with deduplication via id: "rate-limit"
- Propagates errors after displaying the toast
The observable pattern and subscription handling are correct.
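A condensed sketch of such a link in the tRPC v11 link/observable shape; the real implementation in trpc/error-toast-link.tsx may differ in detail:

```ts
import { TRPCClientError, type TRPCLink } from "@trpc/client";
import { observable } from "@trpc/server/observable";
import { toast } from "sonner";
import type { AppRouter } from "@/server/routers/_app";

export const errorToastLink = (): TRPCLink<AppRouter> => () => {
  return ({ next, op }) =>
    observable((observer) =>
      next(op).subscribe({
        next: (value) => observer.next(value),
        error: (err) => {
          if (
            err instanceof TRPCClientError &&
            err.data?.code === "TOO_MANY_REQUESTS"
          ) {
            // id dedupes toasts when many throttled calls fail at once
            toast.error("Rate limit hit. Please slow down.", { id: "rate-limit" });
          }
          observer.error(err); // keep propagating so callers still see the error
        },
        complete: () => observer.complete(),
      }),
    );
};
```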
trpc/client.tsx (2)
14-14: LGTM! Correct import path for the error toast link. The import uses the proper path alias convention.
41-41: LGTM! The error toast link is correctly positioned in the chain. The errorToastLink() is placed after loggerLink and before httpBatchStreamLink, allowing it to intercept and handle downstream errors appropriately.
9-19
: LGTM! Service limits are well-defined.The rate limit configurations are clear and reasonable for each service type, with appropriate point allocations and time windows.
38-68
: LGTM! Rate limit enforcement with proper error metadata.The function correctly:
- Checks rate limits per service and IP
- Throws
TRPCError
withTOO_MANY_REQUESTS
code and structured metadata incause
- Calculates retry-after timing accurately
- Guards
waitUntil
for analytics with optional chaining and try/catch- Returns rate limit info on success
The use of
cause
for metadata (instead of mutatingdata
) is the correct pattern and addresses previous review feedback.
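The cause-based pattern in brief; the field names match those referenced throughout this review, while the wrapper function is illustrative:

```ts
import { TRPCError } from "@trpc/server";

// Attach machine-readable metadata on `cause` instead of mutating error data;
// errorFormatter and the route handler can then read it from error.cause.
function throwRateLimited(service: string, retryAfter: number, limit: number) {
  throw new TRPCError({
    code: "TOO_MANY_REQUESTS",
    message: `Rate limit exceeded for ${service}. Try again in ${retryAfter}s.`,
    cause: { retryAfter, service, limit, remaining: 0 },
  });
}
```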
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>