How we optimized Cloud Run Networking with Direct VPC Egress

Google Cloud Run is astonishingly simple and increasingly popular among developers. It lets anyone deploy containerized applications quickly and easily, automatically managing scalability. At EagleAI, we use Cloud Run extensively to deploy scalable services and jobs, allowing us to rapidly build and maintain our AI-driven loyalty platform.

VPC connectivity

By default, most serverless services (including Cloud Run) run in a Google-managed VPC and not in a user-created VPC. This abstracts away infrastructure, which is the advantage of serverless — but also means services can’t reach private resources unless explicitly connected to a VPC. That’s where VPC connectivity solutions come in.

Cloud Run services or jobs often need to access private resources inside a VPC, like a Cloud SQL database or an internal API.

Additionally, you may need services to communicate with external systems that require a deterministic static outbound IP address — such as when integrating with clients who whitelist inbound traffic.

Initially, our VPC connectivity solution was Google's Serverless VPC Access, a managed service that allows Cloud Run to reach private resources by creating connectors, which are essentially clusters of managed virtual machines acting as network proxies. At the time we set up our Cloud Run infrastructure in 2023, this was the only available option, as Direct VPC Egress was still in pre-GA with limited features.
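For context, setting up a Serverless VPC Access connector and wiring it to a Cloud Run service looks roughly like the sketch below. All names, the region, and the /28 range are illustrative placeholders, not our actual configuration:

```shell
# Create a Serverless VPC Access connector: a managed pool of proxy VMs
# that bridges Cloud Run and the VPC. The /28 range is reserved for the
# connector instances themselves.
gcloud compute networks vpc-access connectors create my-connector \
  --region=europe-west1 \
  --network=my-vpc \
  --range=10.8.0.0/28 \
  --min-instances=2 \
  --max-instances=10

# Attach the connector to a Cloud Run service and route all outbound
# traffic through the VPC (required for a static egress IP via Cloud NAT).
gcloud run deploy my-service \
  --image=europe-docker.pkg.dev/my-project/my-repo/my-image \
  --region=europe-west1 \
  --vpc-connector=my-connector \
  --vpc-egress=all-traffic
```

Note that `--min-instances` here is a floor, not a target: once connector VMs scale up, they do not scale back down, which is the cost behavior discussed below.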

At EagleAI, we're at the cutting edge of retail loyalty innovation. Our AI-powered platform helps retailers deliver personalized, gamified promotions that drive real customer engagement. Since our founding in 2015 — and following our acquisition by Eagle Eye — we've remained focused on transforming the way shoppers interact with loyalty programs.

Just as we use behavioral data to tailor challenges and rewards to individual preferences, we bring that same mindset to our engineering. We continuously refine our stack with the same precision and intent: to build smarter, faster, and more adaptive systems that scale with our customers' needs.

Serverless VPC Access: a solid starting point with some tradeoffs

Serverless VPC Access served us well initially. It allowed our Cloud Run jobs and services to communicate with private resources inside our VPC, offered a relatively straightforward setup, and provided us with a deterministic static outbound IP address — an essential requirement for some client integrations.

Note: This setup required the use of a Cloud NAT gateway, which handles egress routing and ensures outbound traffic from the VPC uses the reserved static IP.
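The Cloud NAT side of that setup can be sketched as follows (the IP, router, and NAT names are hypothetical examples, not our production values):

```shell
# Reserve a static external IP address for deterministic egress.
gcloud compute addresses create my-static-egress-ip \
  --region=europe-west1

# Create a Cloud Router in the VPC; Cloud NAT attaches to it.
gcloud compute routers create my-router \
  --network=my-vpc \
  --region=europe-west1

# Create the NAT gateway, pinning outbound traffic to the reserved IP
# so clients can safely whitelist it.
gcloud compute routers nats create my-nat \
  --router=my-router \
  --region=europe-west1 \
  --nat-external-ip-pool=my-static-egress-ip \
  --nat-all-subnet-ip-ranges
```

The same Cloud NAT arrangement keeps working after a move to Direct VPC Egress, since egress still originates from inside the VPC.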

However, as our platform evolved, a few limitations became increasingly apparent:

  • Limited flexibility
    While you can configure minimum and maximum instance counts for the connector, the scaling is not truly elastic. Once a compute instance is up, it stays up, regardless of demand. This rigid scaling model can quickly become inefficient, especially for variable workloads.
  • Additional cost overhead
    Serverless VPC Access relies on provisioned compute proxies, billed as Compute Engine VMs. These incur extra Compute costs on top of standard Cloud Run pricing (in our case, nearly 40% of our total Cloud Run bill!). Also, the scaling mechanism described earlier contributes directly to this overhead.
  • Lower performance compared to Direct VPC Egress
    By design, Serverless VPC Access introduces an extra network hop between Cloud Run and VPC resources. This adds latency and can reduce throughput.

While this solution was functional and stable, we started looking for a more cost-efficient and flexible alternative — and that's where Direct VPC Egress came in.

Switching to a leaner networking model

As our platform matured, we transitioned to Direct VPC Egress, a relatively new capability that allows Cloud Run to communicate directly with a private VPC network without intermediary connectors.

Instead of routing traffic through provisioned proxies, Cloud Run instances are assigned internal IP addresses directly within the VPC network. This direct network interface enables outbound TCP/UDP traffic to internal resources, eliminating the need for connectors. This streamlined path reduces latency, increases throughput, and removes the overhead of managing additional compute resources.

How to switch

Implementing Direct VPC Egress involves two steps:

  • Configure network and subnet
    Direct VPC Egress requires selecting a specific subnet within a VPC network when deploying a Cloud Run job or service. Cloud Run then assigns ephemeral IP addresses from this subnet directly.

    Important: The subnet IPv4 range must be at least /26 (64 IP addresses) to allow efficient IP allocation.
  • Update firewall rules
    With Direct VPC Egress, you're responsible for creating the necessary firewall rules yourself. Be sure to allow traffic from your Cloud Run subnet to the internal resources your services depend on (such as databases, caches, or APIs).

    Important: Unlike Serverless VPC Access — which automatically creates fairly permissive firewall rules — Direct VPC Egress doesn’t set anything up for you. You can refer to Google's default rules for Serverless VPC connectors as a helpful starting point when configuring your own.
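The two steps above can be sketched with gcloud as follows. The service, network, subnet, and firewall details are illustrative assumptions (e.g. a PostgreSQL database on port 5432 and a /26 subnet range), not a definitive configuration:

```shell
# Step 1: deploy the Cloud Run service directly into a VPC subnet.
# No connector is involved; instances get IPs from the subnet itself.
gcloud run deploy my-service \
  --image=europe-docker.pkg.dev/my-project/my-repo/my-image \
  --region=europe-west1 \
  --network=my-vpc \
  --subnet=my-cloud-run-subnet \
  --vpc-egress=all-traffic

# Step 2: explicitly allow traffic from the Cloud Run subnet to the
# internal resources it needs, e.g. a database listening on 5432.
gcloud compute firewall-rules create allow-cloud-run-to-db \
  --network=my-vpc \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:5432 \
  --source-ranges=10.0.1.0/26
```

Use `--vpc-egress=private-ranges-only` instead if only RFC 1918 destinations should route through the VPC, letting other outbound traffic take Cloud Run's default internet path.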

That's it!

Immediate benefits observed

The benefits were clear and immediate:

  • Cost Reduction
    Eliminating the connector removed Compute-related fees, with no extra Networking cost since Direct VPC Egress is billed at the same egress rate as connectors.
  • Improved Performance
    Direct network paths reduced latency, meaning faster response times for end-users and internal services alike.

To visualize the impact, below is a latency graph provided by Cloud Run.

Figure: Cloud Run latency before (left) and after (right) switching to Direct VPC Egress. Notice the drop in latency once connector overhead was removed.

Final thoughts

Our move to Direct VPC Egress at EagleAI has provided clear advantages in cost, performance, and maintainability. If you run Cloud Run workloads that communicate privately or require predictable external IP addresses, Direct VPC Egress offers a powerful, cost-effective alternative worth exploring.
