Serverless NEG endpoints not attaching to API Gateway

Hello,
We are currently working on configuring a Google Cloud HTTPS Load Balancer in front of an API Gateway using a Serverless NEG (Network Endpoint Group). We’re using Terraform with the google-beta provider and have implemented the serverless_deployment block to target the API Gateway, following the official documentation.
The terraform apply command runs without any errors, but when we inspect the created NEG, no endpoints are attached. As a result, the Load Balancer is unable to route traffic to the API Gateway.
To rule out Terraform-specific issues, we also attempted to manually create the Serverless NEG using the gcloud CLI in a shell script. While the command executes successfully and the NEG appears in the console, it still shows no endpoints, and the API Gateway does not appear to be connected.
At this point, we are unsure whether this is a platform limitation, a propagation delay, or a misconfiguration. We’d appreciate any guidance or clarification on:
• Whether Serverless NEGs for API Gateway are fully supported and functional at this time, given that the Google Cloud documentation currently lists this feature as being in Preview
• Any known issues or prerequisites (e.g., region compatibility, required API Gateway state)
• Recommendations on how to verify or troubleshoot endpoint registration for Serverless NEGs
Thanks in advance for your help!

https://cloud.google.com/api-gateway/docs

https://cloud.google.com/api-gateway/docs/about-api-gateway

Out of curiosity - are there particular reasons to use a load balancer in front of an API Gateway?

Or are you trying to use this -
https://cloud.google.com/api-gateway/docs/gateway-load-balancing

yes, I am following this documentation: https://cloud.google.com/api-gateway/docs/gateway-load-balancing

But the Serverless NEG is not connecting to the API Gateway.

I have used the following Terraform configuration to connect the API Gateway to the Network Endpoint Group:

resource "google_compute_region_network_endpoint_group" "api_gateway_neg" {
  provider              = google-beta
  name                  = var.network_endpoint_group_name
  region                = var.region
  network_endpoint_type = "SERVERLESS"

  serverless_deployment {
    platform = "apigateway.googleapis.com"
    resource = var.gateway_id
  }
}

After applying this, the serverless NEG was created successfully, but no endpoint was added as expected.
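For reference, this is roughly how we are inspecting the NEG after apply (NEG_NAME and REGION are placeholders for our actual values), to confirm the NEG type and that its serverless deployment details point at the right gateway:

    gcloud beta compute network-endpoint-groups describe NEG_NAME \
        --region=REGION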

Could you please help me identify the issue here?

I would suggest checking these first:

https://cloud.google.com/api-gateway/docs/gateway-load-balancing#limitations

and

https://cloud.google.com/api-gateway/docs/gateway-load-balancing#limitations_on_serverless_negs_in_backend_service_configurations

then check whether the backend was actually created in the backend service (the backend is the 'bridge' between a load balancer backend service and a network endpoint group) - see the sketch below
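A minimal way to check that, assuming a global external load balancer and a placeholder backend service name, is to describe the backend service and look at its backends list:

    gcloud compute backend-services describe BACKEND_SERVICE_NAME \
        --global \
        --format="yaml(backends)"

If the backends list is empty, the NEG was never attached to the backend service, which would explain why traffic cannot reach the gateway.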

We have tried creating it from the console, but GCP does not support creating network endpoints for API Gateway manually there; we can only create it from the shell or Terraform.

The script I have used is:

gcloud beta compute network-endpoint-groups create api-gateway-serverless-neg1 \
    --region=us-central1 \
    --network-endpoint-type=serverless \
    --serverless-deployment-platform=apigateway.googleapis.com \
    --serverless-deployment-resource=dev-mytelemetry-gateway02 \
    --verbosity=debug

The documentation says:
Serverless NEGs can only point to API Gateway instances residing in the same region where the NEG is created.
Serverless NEGs can only point to API Gateway instances created in the same project as the load balancer using the Serverless NEG backend.

All of these configurations are the same as documented.
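To double-check those region and project constraints, something like the following can confirm where the gateway actually lives (PROJECT_ID is a placeholder; the location value is an assumption based on our setup):

    gcloud api-gateway gateways describe dev-mytelemetry-gateway02 \
        --location=us-central1 \
        --project=PROJECT_ID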

after the NEGs are created, they might need to be bound to the backend service - something like
gcloud compute backend-services add-backend ...
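A fuller sketch of that binding, using a placeholder backend service name and the NEG name/region from the earlier command (adjust to your actual resources):

    gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
        --global \
        --network-endpoint-group=api-gateway-serverless-neg1 \
        --network-endpoint-group-region=us-central1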

I see what you mean, this is an egress/outbound connection from a serverless NEG to the API Gateway.

Very interesting use case 🙂

Normally it is the opposite: the API Gateway is the entry point, it forwards traffic to the LB, and the LB then routes to the backend serverless compute.