Performance · Infrastructure · Engineering

How We Reduced AI Copilot Latency by 60%

Michael Nguyen · October 28, 2025 · 12 min read

When we first launched Arcten, our median response time hovered around 800ms. Not terrible, but not great either. For AI copilots embedded in user workflows, that kind of latency creates a sluggish experience that breaks flow state.

The Target

We set an aggressive goal: p95 latency under 300ms for simple queries, under 1 second for complex multi-step workflows. Here's how we got there.

1. Edge Deployment

Our first major win came from moving inference to the edge. Instead of routing all requests through centralized data centers, we deployed model inference infrastructure to edge locations using Cloudflare Workers and similar platforms.

For most US-based requests, this alone cut latency by 150-200ms.
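The core idea can be sketched as a routing step that sends each request to the lowest-latency inference region. This is a minimal illustration, not Arcten's actual topology: the region names, client locations, and latency table below are all invented for the example.

```typescript
// Hypothetical sketch: pick the inference region with the lowest expected
// network latency for a given client location. A real edge platform
// (e.g. Cloudflare Workers) does this routing for you at the network layer.
type Region = "us-east" | "us-west" | "eu-west";

// Rough illustrative one-way latencies (ms) from a client location to each region.
const REGION_LATENCY_MS: Record<string, Record<Region, number>> = {
  "San Francisco": { "us-west": 15, "us-east": 70, "eu-west": 140 },
  "New York":      { "us-west": 70, "us-east": 10, "eu-west": 80 },
};

export function nearestRegion(clientLocation: string): Region {
  const latencies = REGION_LATENCY_MS[clientLocation];
  if (!latencies) return "us-east"; // unknown location: fall back to a default region
  // Choose the region with the smallest latency entry.
  return (Object.entries(latencies) as [Region, number][])
    .reduce((best, cur) => (cur[1] < best[1] ? cur : best))[0];
}
```

The point is that the routing decision happens before inference starts, so the round-trip savings apply to every request.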

2. Streaming Responses

Rather than waiting for a complete response before showing anything to users, we implemented streaming at every layer. As soon as we have the first token from the model, we start sending it to the client.

This doesn't reduce total time-to-completion, but it dramatically improves perceived performance.
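A minimal sketch of the pattern, assuming a token generator that stands in for a real model client (the `generateTokens` source below is invented for illustration):

```typescript
// Forward each model token to the client as soon as it arrives,
// instead of buffering the full completion.
async function* generateTokens(): AsyncGenerator<string> {
  // Stand-in for a streaming model API; in production each token
  // arrives with real model latency between yields.
  for (const token of ["Hello", ", ", "world", "!"]) {
    yield token;
  }
}

export function streamResponse(tokens: AsyncGenerator<string>): ReadableStream<string> {
  return new ReadableStream<string>({
    async start(controller) {
      for await (const token of tokens) {
        controller.enqueue(token); // flush immediately: first token reaches the client fast
      }
      controller.close();
    },
  });
}
```

Time-to-first-token, not time-to-last-token, is what the user feels, which is why this helps even though total completion time is unchanged.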

3. Predictive Prefetching

We analyze user interaction patterns to predict likely next actions. When a user opens a workflow, we pre-warm relevant context and even speculatively start certain API calls before the user explicitly requests them.
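A sketch of the speculative-start half of this idea, assuming a slow loader (`fetchContext` here is a hypothetical stand-in for an API or database call): kick off the fetch when the workflow opens, so that by the time the user acts, the promise is already in flight.

```typescript
// Deduplicate in-flight context fetches so a speculative prefetch and a
// later explicit request share the same promise.
const inflight = new Map<string, Promise<string>>();

async function fetchContext(key: string): Promise<string> {
  // Stand-in for a slow upstream call.
  return `context for ${key}`;
}

export function prefetch(key: string): void {
  // Fire-and-forget: start loading likely-needed context in the background.
  if (!inflight.has(key)) inflight.set(key, fetchContext(key));
}

export function getContext(key: string): Promise<string> {
  prefetch(key); // on a cache miss, start the fetch now rather than failing
  return inflight.get(key)!;
}
```

If the prediction is right, the user-visible latency is only the remaining time on an already-running request; if it's wrong, the cost is a wasted background fetch.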

4. Smart Caching

Many copilot queries have similar patterns. We built a semantic caching layer that identifies when a new query is similar enough to a recent one to reuse parts of the response.
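One common way to implement this is to key the cache on embedding similarity rather than exact string match. The sketch below assumes that setup; the toy vectors and the 0.95 threshold are illustrative, not Arcten's actual configuration.

```typescript
// Semantic cache keyed on cosine similarity between query embeddings.
// A real system would obtain embeddings from an embedding model.
type CacheEntry = { embedding: number[]; response: string };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

export class SemanticCache {
  private entries: CacheEntry[] = [];
  constructor(private threshold = 0.95) {}

  lookup(embedding: number[]): string | undefined {
    // Reuse a cached response when a past query is similar enough.
    for (const e of this.entries) {
      if (cosineSimilarity(e.embedding, embedding) >= this.threshold) {
        return e.response;
      }
    }
    return undefined; // cache miss: caller runs the model and calls store()
  }

  store(embedding: number[], response: string): void {
    this.entries.push({ embedding, response });
  }
}
```

The threshold is the key tuning knob: too low and users get stale or mismatched answers, too high and the cache rarely hits.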

Results

After implementing these optimizations, we're now seeing p95 latency of 240ms—well below our target. User satisfaction scores for copilot interactions increased by 34%.