Communicating with the GKE cluster IP within the VPC network

In GKE, after deploying with Autopilot inside the VPC, I am able to ping the pod IPs, but not the default cluster IPs that GKE allocates.
How do I communicate with the cluster IP from within the VPC?

Add your VPC IP range as a CIDR block to the master authorized networks option while creating / modifying the cluster.
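For reference, a master authorized networks entry can be added to an existing cluster with something like the following (the cluster name, region, and CIDR are placeholders; substitute your own):

```shell
# Allow access to the cluster's API server from the VPC range
# 10.128.0.0/20 (placeholder). Replace my-cluster and us-central1
# with your own cluster name and region.
gcloud container clusters update my-cluster \
    --region us-central1 \
    --enable-master-authorized-networks \
    --master-authorized-networks 10.128.0.0/20
```

Note this only controls who can reach the Kubernetes API server, not data-plane traffic to pods or services.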

Now, when I create a service, it gets an IP from the range that I configured. From my VM within the VPC I am able to ping addresses in the allocated pod range.

But when I create a service, it gets an IP from the service range, and I am not able to ping that service IP from instances within the VPC.
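To see what type the service is and which IP it was assigned, something like this can be run (the service name here is a placeholder):

```shell
# Show the service type (ClusterIP / NodePort / LoadBalancer) and its IP.
kubectl get service my-service -o wide

# Full details, including the selector and the endpoints behind the service.
kubectl describe service my-service
```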

What kind of service is it — ClusterIP, LoadBalancer, etc.?

If the GCE VM and the GKE cluster are in the same VPC, they should be able to communicate using internal IPs. Make sure you add any additional firewall rules needed for the app deployed on your GCE VM to access the service. FYI, a ClusterIP service is assigned an internal / private IP. By default, resources in the same VPC can communicate using internal IPs, provided the firewall rules are in place.
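A sketch of such a firewall rule, assuming hypothetical names and ranges (the network name, source CIDR, and ports are all placeholders):

```shell
# Allow TCP traffic from the VM's subnet into the VPC so it can reach
# the cluster. All names, CIDRs, and ports below are placeholders.
gcloud compute firewall-rules create allow-vm-to-gke \
    --network my-vpc \
    --direction INGRESS \
    --source-ranges 10.128.0.0/20 \
    --allow tcp:80,tcp:443
```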

What kind of firewall rules do I need to put in place? When I set up GKE Autopilot, the cluster is VPC-native by default, and in that case the firewall rules are all set up by default.
How is the pod IP pingable but not the service IP?

That is exactly my question.

The VM and the GKE cluster are all in the same VPC. One range is assigned to the pods, and I am able to ping it from an instance in the same VPC.

But the range that I allocated to the services is not reachable from an instance within the same VPC.
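For what it's worth, a ClusterIP is a virtual IP that only exists in the cluster's own packet-forwarding rules, so it is generally not reachable from VMs outside the cluster even within the same VPC; exposing the service through an internal LoadBalancer is the usual way to make it reachable from the rest of the VPC. A rough sketch, where the service name, selector, and ports are placeholders:

```shell
# Sketch of an internal passthrough LoadBalancer service on GKE.
# The name, selector, and ports are placeholders; adjust for your app.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-service-internal
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
EOF
```

The service then gets an internal IP from the VPC subnet, which VMs in the VPC can reach subject to firewall rules.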

I have been stuck with this for the past three days with really no clue, even after going through the docs. I really need some help here; it would be much appreciated.

Add a firewall rule for the egress traffic that allows the protocol and port used by your service, from your GCE VM's IP.

What firewall rule do I need to set up? I didn't get you. When I provisioned GKE Autopilot under the project, I could see that it created some VPC peerings and some default firewall rules.
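The rules GKE created automatically can be listed to check what is already in place, e.g.:

```shell
# List the firewall rules in the project, filtered to the ones GKE
# creates (GKE names them gke-<cluster-name>-<hash>-...).
gcloud compute firewall-rules list --filter="name~^gke-"
```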

I even tried allowing it with a firewall rule: I tagged the nodes with kube-node, applied the same tag to the instances, and set up a firewall rule with source tag kube-node allowing all traffic.

But I am still not able to ping the service IP.
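The tag-based rule described above would look roughly like this (the rule and network names are placeholders; the kube-node tag is as described):

```shell
# Allow all traffic between instances tagged kube-node on the shared
# VPC network. Rule and network names are placeholders.
gcloud compute firewall-rules create allow-kube-node \
    --network my-vpc \
    --direction INGRESS \
    --source-tags kube-node \
    --target-tags kube-node \
    --allow all
```

Note that tag-based rules only match traffic whose source is a tagged VM's primary interface, so they may not behave as expected for traffic addressed to a service's virtual IP.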

Create a firewall rule that allows egress traffic for your service deployed on the GCE VM, for whatever port and protocol your application is using.

That rule is already there, created by GKE.

I even disabled all the firewall rules that GKE created automatically.
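One more thing worth noting: a service IP is only translated to backend pods for the service's configured ports, so it typically does not answer ICMP at all, and ping will fail even when the service is reachable. Testing with the actual service port gives a more meaningful signal (the IP and port below are placeholders):

```shell
# Test TCP reachability of the service on its actual port instead of
# pinging it. 10.96.0.10 and 80 are placeholders for the service IP
# and port.
curl -v --connect-timeout 5 http://10.96.0.10:80/
```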