Communicating with the GKE ClusterIP from within the VPC network

I can still ping a Pod from a VPC instance, and kubectl access works as well.

So what is GKE actually creating here, and what is it used for?

The Pods (`kubectl get pods -o wide`):

```
app101-7865fcbf7f-2qkwz   1/1     Running   0          3d8h   192.168.80.26    gk3-stg-cluster-1-pool-2-892b3788-bnxm   <none>           <none>
app102-6b8bcc954b-2z5tp   1/1     Running   0          22h    192.168.80.206   gk3-stg-cluster-1-pool-2-442dce4f-f48r   <none>           <none>
```


From the VPC instances I can ping 192.168.80.26.
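
The node names (gk3-…) suggest an Autopilot cluster, which is always VPC-native: the Pod range is an alias IP range on the node subnet, so Pod IPs are routed like any other VPC address. A quick way to confirm the ranges (the cluster name is inferred from the node names, and the region is a placeholder):

```
# Prints the secondary ranges used for Pods and Services.
# "stg-cluster-1" and the region are guesses; substitute your own.
gcloud container clusters describe stg-cluster-1 \
    --region europe-west2 \
    --format='value(ipAllocationPolicy.clusterIpv4CidrBlock,ipAllocationPolicy.servicesIpv4CidrBlock)'
```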

And the Services (`kubectl get svc -o wide`):

```
app-service     ClusterIP      192.168.110.120   <none>        80/TCP         3d1h    app=nginx-neg
ilb-app1        LoadBalancer   192.168.109.39    10.154.0.51   80:30585/TCP   4d10h   app=app101
```


I can also ping 10.154.0.51 (the internal load balancer address from the EXTERNAL-IP column), and that works.

But my question is: why can I not ping 192.168.109.39, the ClusterIP, from the instances?
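
Concretely, this is what I am running from a plain VM in the same VPC (the IPs are taken from the outputs above):

```
ping -c 3 192.168.80.26     # Pod IP: replies
ping -c 3 10.154.0.51       # internal LB forwarding-rule IP: replies
ping -c 3 192.168.109.39    # ClusterIP of ilb-app1: 100% packet loss
```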

I even disabled all of the default GKE firewall rules, and everything kept working exactly as before.
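
For reference, these are the rules I mean (GKE names them after the cluster; the exact pattern below is an assumption based on GKE's usual naming):

```
# List every firewall rule GKE created for this cluster.
gcloud compute firewall-rules list --filter='name~^gke-stg-cluster-1'
```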

I am really stuck on what GKE is doing here.

In the docs they say the Service IPs cannot be pinged:

> Obtain service type: Determine the type of service you created (e.g., LoadBalancer, ClusterIP). By default in Autopilot, services are of type ClusterIP, accessible only within the cluster network.

Does that mean I cannot ping the IP from an instance when the Service is of type ClusterIP?
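
For what it's worth, from inside the cluster the ClusterIP does answer over TCP, just not over ICMP. A quick check (assuming the backends behind app-service serve plain HTTP on port 80):

```
# Throwaway Pod inside the cluster, where the ClusterIP is reachable.
kubectl run tmp --rm -it --restart=Never --image=busybox -- \
    sh -c 'wget -qO- http://192.168.110.120 && ping -c 3 192.168.110.120'
# wget succeeds (TCP to port 80 is DNATed to a backend Pod),
# ping fails: the ClusterIP is virtual, nothing answers ICMP for it.
```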

Then how does the firewall work here?