Prometheus query returning only 10 data points instead of 20

Hello team, I'm a bit confused about why this is happening.

I have an interval of 15 seconds and use a 5-minute time range, so there should be 20 data points (300 s / 15 s = 20).

But upon performing the query:

```
sum_over_time(
  kube_pod_status_ready{namespace="monitoring-system",pod=~"zabbix-zabbix-web-.*", condition="true"}[$__range]
)
```


why did I only get 10 as a result? Each data point here actually equals 1, as seen in the second screenshot/query:


```
kube_pod_status_ready{namespace="monitoring-system",pod=~"zabbix-zabbix-web-.*", condition="true"}
```


I am expecting 20 as a result, since I have 20 data points, but I am not sure why I got only 10. I tried asking ChatGPT, but to no avail.

Is this a newly discovered bug in Grafana?
```
sum_over_time(
  kube_pod_status_ready{namespace="monitoring-system",pod=~"zabbix-zabbix-web-.*", condition="true"}[$__range:15s]
)
```


Looks like this query does the trick for anyone searching for this: sum_over_time (or count_over_time) applies some automatic step correction, so you need to specify the range resolution explicitly to make this work.
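
For anyone comparing, the difference seems to be the following (a sketch, assuming the data is actually scraped every 30 seconds, as suggested in the reply below; count_over_time behaves the same way here, since every sample equals 1):

```
# Raw range selector: counts only the samples actually stored in the
# window. At a 30s scrape interval, a 5m window holds 10 samples -> 10.
sum_over_time(
  kube_pod_status_ready{namespace="monitoring-system",pod=~"zabbix-zabbix-web-.*", condition="true"}[$__range]
)

# Subquery with an explicit 15s resolution: re-evaluates the inner
# selector every 15 seconds, reusing the latest stored sample at each
# step -> 20 points regardless of the scrape interval.
sum_over_time(
  kube_pod_status_ready{namespace="monitoring-system",pod=~"zabbix-zabbix-web-.*", condition="true"}[$__range:15s]
)
```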

Your data might be scraped at a different interval than 15 seconds (based on your first message, I'd bet on 30 seconds). The Grafana interval will not help with that. If you're using VictoriaMetrics, you can use scrape_interval(kube_pod_status_ready) to check the scrape interval. If not, you could ask your infra provider (or check it yourself, if you have the permissions) what scrape interval is configured on the agent / Prometheus / Alloy / whatever you're using.
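
On vanilla Prometheus (without MetricsQL), a rough way to estimate the effective scrape interval is to divide the window width by the number of sample gaps it contains; a sketch, not an exact measurement:

```
# 300 seconds divided by the number of gaps between samples in a 5m
# window. A result near 30 means the series is scraped roughly every 30s.
5 * 60 /
  (count_over_time(
    kube_pod_status_ready{namespace="monitoring-system",pod=~"zabbix-zabbix-web-.*", condition="true"}[5m]
  ) - 1)
```

If you can query Prometheus's own metrics, prometheus_target_interval_length_seconds also reports the actual interval lengths per target.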

With sum_over_time (and the subquery you're using) you're forcing your datasource to evaluate at a 15-second step, so it has no choice but to oblige, but it still might not have the data.
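
In other words (assuming a 30-second scrape interval): the 15-second subquery steps each reuse the most recent stored sample via the lookback window, so every second point is the same sample counted twice. The math comes out to 20, but only 10 distinct samples exist.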