Google Cloud Run is astonishingly simple and increasingly popular among developers. It lets anyone deploy containerized applications quickly and easily, automatically managing scalability. At EagleAI, we use Cloud Run extensively to deploy scalable services and jobs, allowing us to rapidly build and maintain our AI-driven loyalty platform.
By default, most serverless services (including Cloud Run) run in a Google-managed VPC and not in a user-created VPC. This abstracts away infrastructure, which is the advantage of serverless — but also means services can’t reach private resources unless explicitly connected to a VPC. That’s where VPC connectivity solutions come in.
Cloud Run services or jobs often need to access private resources inside a VPC, like a Cloud SQL database or an internal API.
Additionally, you may need services to communicate with external systems that require a deterministic static outbound IP address — such as when integrating with clients who whitelist inbound traffic.
Initially, our VPC connectivity solution was Google's Serverless VPC Access, a managed service that allows Cloud Run to reach private resources by creating connectors, which are essentially clusters of managed virtual machines acting as network proxies. At the time we set up our Cloud Run infrastructure in 2023, this was the only available option, as Direct VPC Egress was still in pre-GA with limited features.
At EagleAI, we're at the cutting edge of retail loyalty innovation. Our AI-powered platform helps retailers deliver personalized, gamified promotions that drive real customer engagement. Since our founding in 2015 — and following our acquisition by Eagle Eye — we've remained focused on transforming the way shoppers interact with loyalty programs.
Just as we use behavioral data to tailor challenges and rewards to individual preferences, we bring that same mindset to our engineering. We continuously refine our stack with the same precision and intent: to build smarter, faster, and more adaptive systems that scale with our customers' needs.
Serverless VPC Access served us well initially. It allowed our Cloud Run jobs and services to communicate with private resources inside our VPC, offered a relatively straightforward setup, and provided us with a deterministic static outbound IP address — an essential requirement for some client integrations.
Note: This setup required the use of a Cloud NAT gateway, which handles egress routing and ensures outbound traffic from the VPC uses the reserved static IP.
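For reference, the original setup can be sketched with a few `gcloud` commands. This is a minimal, illustrative sketch — the project, region, and resource names (`egress-ip`, `run-connector`, `egress-router`, `my-service`, etc.) are placeholders, not our actual configuration:

```shell
# Reserve a static external IP for deterministic egress
gcloud compute addresses create egress-ip --region=europe-west1

# Create the Serverless VPC Access connector (a pool of managed proxy VMs)
gcloud compute networks vpc-access connectors create run-connector \
    --region=europe-west1 \
    --network=default \
    --range=10.8.0.0/28

# Route outbound VPC traffic through Cloud NAT using the reserved IP
gcloud compute routers create egress-router \
    --region=europe-west1 --network=default
gcloud compute routers nats create egress-nat \
    --router=egress-router --region=europe-west1 \
    --nat-external-ip-pool=egress-ip \
    --nat-all-subnet-ip-ranges

# Attach the connector to the service and send all egress through the VPC
gcloud run deploy my-service \
    --region=europe-west1 \
    --vpc-connector=run-connector \
    --vpc-egress=all-traffic
```

With `--vpc-egress=all-traffic`, every outbound request from the service traverses the connector and exits via Cloud NAT, which is what makes the static IP deterministic.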
However, as our platform evolved, a few limitations became increasingly apparent:
- Cost: connector instances run (and are billed) continuously, even when traffic is idle.
- Throughput: connectors impose bandwidth limits, and scaling up means resizing the machine pool.
- Latency: every request pays for an extra hop through the proxy VMs.
- Maintenance: the connector is one more piece of infrastructure to size and monitor.
While this solution was functional and stable, we started looking for a more cost-efficient and flexible alternative — and that's where Direct VPC Egress came in.
As our platform matured, we transitioned to Direct VPC Egress, a relatively new capability that allows Cloud Run to communicate directly with a VPC network without intermediary connectors.
Instead of routing traffic through provisioned proxies, Cloud Run instances are assigned internal IP addresses directly within the VPC network. This direct network interface enables outbound TCP/UDP traffic to internal resources, eliminating the need for connectors. This streamlined path reduces latency, increases throughput, and removes the overhead of managing additional compute resources.
Implementing Direct VPC Egress involves:
- picking (or creating) a subnet with enough free IP addresses for your Cloud Run instances;
- adding the network and subnet to the service or job configuration;
- choosing an egress setting (private ranges only, or all traffic through the VPC);
- redeploying.
That's it!
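Concretely, the switch amounts to replacing the connector flag with a network and subnet on the deploy command. A hedged sketch, with placeholder names (`my-service`, `run-egress-subnet`, `run-connector`):

```shell
# Deploy with Direct VPC Egress: instances get IPs from the subnet itself
gcloud run deploy my-service \
    --region=europe-west1 \
    --network=default \
    --subnet=run-egress-subnet \
    --vpc-egress=all-traffic

# Once migrated, the old connector can be deleted
gcloud compute networks vpc-access connectors delete run-connector \
    --region=europe-west1
```

Because `--vpc-egress=all-traffic` still routes outbound traffic through the VPC, the existing Cloud NAT gateway keeps providing the same static outbound IP — no client-side changes required.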
The benefits were clear and immediate:
- Lower latency and higher throughput, with the proxy hop gone.
- Reduced cost: no always-on connector VMs to pay for.
- Less infrastructure to manage: no connectors to size, scale, or monitor.
- The same static outbound IP, still provided by our Cloud NAT gateway.
To visualize the impact, below is a latency graph provided by Cloud Run.
Figure: Cloud Run latency before (left) and after (right) switching to Direct VPC Egress.
Notice the drop in latency once the connector overhead was removed.
Our move to Direct VPC Egress at EagleAI has provided clear advantages in cost, performance, and maintainability. If you run Cloud Run workloads that communicate privately or require predictable external IP addresses, Direct VPC Egress offers a powerful, cost-effective alternative worth exploring.