Cloudflare Browser Run Hits Warp Speed on Containers

Forget slow. Cloudflare's Browser Run just got a serious shot of adrenaline. They've ditched old infrastructure for their own Containers, and the results are jaw-dropping.

Diagram illustrating Cloudflare's Browser Run architecture with Containers

Key Takeaways

  • Cloudflare's Browser Run has been rebuilt on Cloudflare Containers, dramatically increasing performance and scalability.
  • New limits are 4x higher, allowing 60 browsers per minute and 120 concurrently, with response times cut by over 50%.
  • The migration addresses issues with shared infrastructure and introduces regional container pools to combat latency for high-demand use cases like AI agents.

Is your headless browser a sluggish beast? Good. Because Cloudflare just strapped a rocket to theirs. They’ve rebuilt their Browser Run service on top of Cloudflare Containers, and the numbers are, shall we say, not disappointing.

We’re talking 4x the previous limits for spinning up browsers. Sixty per minute, 120 concurrently. Quick Actions now snap back in under half the time. This isn’t some beta announcement. It’s live. Now. Today. No action required from you. Just… faster. Better. More reliable. They even claim they’re shipping fixes faster. Because of course they are.
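To put those numbers in concrete terms, here's a minimal sketch of what a limiter enforcing them might look like. The 60-per-minute and 120-concurrent figures come from the announcement; the limiter class itself is purely our illustration, not Cloudflare's code.

```typescript
// Illustrative limiter only; the two limits are the announced Browser Run
// numbers, everything else here is an assumption for demonstration.
class BrowserLimiter {
  private spawnsThisMinute = 0;
  private active = 0;

  constructor(
    private readonly perMinute = 60,   // new browsers per minute (new limit)
    private readonly concurrent = 120, // simultaneous browsers (new limit)
  ) {}

  // Returns true if a new browser may be launched right now.
  tryAcquire(): boolean {
    if (this.spawnsThisMinute >= this.perMinute) return false;
    if (this.active >= this.concurrent) return false;
    this.spawnsThisMinute++;
    this.active++;
    return true;
  }

  // Call when a browser session ends.
  release(): void {
    this.active = Math.max(0, this.active - 1);
  }

  // Call on a 60-second timer to refresh the per-minute budget.
  resetMinute(): void {
    this.spawnsThisMinute = 0;
  }
}
```

The point: concurrency frees up as sessions close, but the spawn budget only refreshes each minute, which is why both numbers matter independently.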

What’s Even Going On Here?

Let’s back up. Browser Run. It lets you, the developer, puppet headless browsers. Think automated testing. Think security investigations. Need a PDF? Done. Screenshots? Easy. And now, apparently, it’s the secret sauce for AI agents trying to grok the web. Cloudflare wants this to be the go-to for running browsers at scale. Securely. Responsibly. Big words. Let’s see if the tech backs them up.

Outgrowing the Cardboard Box

Their old setup shared space with Browser Isolation (BISO). Fine for some things. But BISO’s hefty container images choked startup times. And global distribution? Forget about it. Latency was a problem. Resiliency? A distant dream. Then there’s the user profile clash: BISO’s long, steady sessions versus Browser Run’s frantic, spiky bursts. It created bottlenecks. Availability suffered. They were outgrowing their shared resources like a teenager in a too-small t-shirt.

The savior? Cloudflare’s own Durable Object (DO)-enabled Containers. They’ve been building on their own platform. Smart. Feeling the pain before their customers do. It’s the “Customer Zero” approach. A noble, if sometimes painful, path.

The Great Migration

How do you move a whole damn city? Carefully. They introduced a Worker that rerouted some traffic to the new Containers. Dual support. Performance comparisons. Bug hunting in the wild. Building confidence. Then they phased in the new tech: Quick Actions first, then free accounts, then pay-as-you-go, and finally, everyone else. A smooth ride for the user. A complex ballet behind the scenes.
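Cloudflare hasn't published its routing Worker, but phased rollouts like this are commonly driven by deterministic cohort hashing: each account consistently lands on the same backend while the rollout percentage is dialed up. A hedged sketch of that idea, with every name our own invention:

```typescript
// Assumption-laden sketch of percentage-based rollout routing, not
// Cloudflare's actual Worker. A stable hash of the account ID means a
// given account never flip-flops between backends mid-rollout.
function routeToContainers(accountId: string, rolloutPercent: number): boolean {
  // FNV-1a hash: cheap, deterministic, well-distributed.
  let h = 0x811c9dc5;
  for (const ch of accountId) {
    h ^= ch.codePointAt(0)!;
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h % 100 < rolloutPercent; // true → route to the new Containers backend
}
```

Dial `rolloutPercent` from 0 to 100 as confidence grows, and the dual-support period gives you performance comparisons between the two backends on identical traffic.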

But it wasn’t all sunshine and roses. Working with an early-stage platform. Light on docs. Light on observability. Light on colleagues in compatible time zones. Sound familiar? Yeah, well, that’s the price of being a pioneer. They fed their feedback back to their own teams. Instant upgrades for everyone else. Meanwhile, they wrestled with the tech itself.

Latency’s Global Game

Here’s the kicker: DO-enabled Containers try to land close to the request. Great. But the browser container? It might spin up across the planet. For simple commands, fine. But for something chatty, like a WebSocket exchange for a screenshot, those milliseconds add up. Global ping-pong is bad for business.
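The arithmetic of "milliseconds add up" is worth spelling out. The RTT figures and round-trip count below are illustrative assumptions, not Cloudflare measurements:

```typescript
// Back-of-the-envelope: why chatty protocols punish a distant container.
// A chatty session pays the network round-trip once per exchange.
function sessionLatencyMs(roundTrips: number, rttMs: number): number {
  return roundTrips * rttMs;
}

// Suppose a screenshot flow involves 20 WebSocket exchanges (assumed figure):
const crossPlanet = sessionLatencyMs(20, 150); // ~150 ms RTT across an ocean
const sameRegion = sessionLatencyMs(20, 10);   // ~10 ms RTT within a region
```

Twenty exchanges at a cross-planet RTT is seconds of pure network wait; the same session within a region is a rounding error. That gap is the whole motivation for the next section.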

Their fix? Regional pools. Pre-warmed, DO-backed browser containers. Keep the DO and its container geographically close. Fewer hops. Lower latency. It adds complexity, sure. But strong observability gives them eyes on everything, so they can shift pool capacity as demand moves. That coordination runs on Workers KV. Well, almost a happy ending. KV is eventually consistent, with writes taking roughly 30 seconds to propagate globally. For the kind of demand they’re seeing now? That’s an eternity.
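The shape of the regional-pool idea can be sketched in a few lines. The region names below match Cloudflare's location-hint vocabulary, but the `Pool` type, fallback order, and function are our assumptions, not a published API:

```typescript
// Hedged sketch of regional pool selection; all names are illustrative.
type Region = "enam" | "weur" | "apac";

interface Pool {
  region: Region;
  warm: number; // pre-warmed browser containers ready to serve
}

// Assumed fallback order: try the caller's own region first, then the rest.
const FALLBACK: Record<Region, Region[]> = {
  enam: ["enam", "weur", "apac"],
  weur: ["weur", "enam", "apac"],
  apac: ["apac", "enam", "weur"],
};

// Prefer a warm container near the caller; only cross an ocean when the
// local pool is drained.
function pickPool(caller: Region, pools: Pool[]): Pool | undefined {
  for (const region of FALLBACK[caller]) {
    const p = pools.find((x) => x.region === region && x.warm > 0);
    if (p) return p;
  }
  return undefined; // every pool exhausted: the caller eats a cold start
}
```

The hard part isn't the selection logic; it's keeping the `warm` counts accurate everywhere at once, which is exactly where KV's 30-second propagation window starts to hurt.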

As Cloudflare put it: “AI agent builders discovered Browser Run and quickly brought request volumes outpacing our existing capacity. We quickly hit the limits of how quickly we could adjust our pool capacity to serve this new demand with a scalable approach.”

This, friends, is where the story gets interesting. Demand isn’t just growing; it’s exploding. Primarily because of AI. The very thing they’re trying to enable. A good problem to have, I suppose. But one that’s testing the limits of even their new, shiny infrastructure. What comes next when even eventual consistency isn’t soon enough? That’s the real question.

Why Does This Matter for Developers?

Faster, more reliable headless browsers mean smoother CI/CD pipelines. It means less waiting for your automated tests to finish. It means you can build more sophisticated web-scraping tools or AI agents that can actually interact with the web without timing out every other request. For anyone building on or interacting with the web programmatically, this is a win. Cloudflare is demonstrating its platform’s ability to scale under extreme load, a key concern for enterprise developers. It also shows their commitment to dogfooding their own infrastructure – a good sign for any platform vendor.



Written by Jordan Kim

Cloud and infrastructure correspondent. Covers Kubernetes, DevOps tooling, and platform engineering.


Originally reported by Cloudflare Blog
