Hi folks, I am a bit confused about how monitors are being evaluated… The metric value seems to be in the 0.2-0.5 range, but the evaluation graph shows an average of less than 0.1. Also sharing the metric query that this graph is representing
That is a really bizarre query… not sure what you are trying to accomplish by summing all the durations, dividing that by the number of hits, and then averaging the result.
But - that fill(zero) function is probably dragging your value down a lot, because if you didn’t have any traces for a given minute, it would insert a datapoint of 0 for that minute.
But yeah, you might want to rethink this query approach - compare what you are looking to alert on versus what you are actually calculating there. I don’t think that data is what you think it is.
It’s the formula for latency…
Why not just use the duration metric alone then? That would avoid the problem with the rollup and fill for the hits.
average query.duration
should be the same as your sum query.duration / query.hits
Then you aren’t going to be adding a bunch of 0 evaluations with the fill function and artificially deflating the value.
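To make the deflation effect concrete, here’s a small sketch (the numbers are made up, just mirroring the ranges mentioned above): if real per-minute latency sits around 0.2-0.5 but most one-minute buckets have no traces, fill(zero) turns those empty buckets into 0s, and the monitor’s average over the window ends up well below the real value.

```python
# Hypothetical illustration of fill(zero) deflating an average.
# Suppose only 3 of 10 one-minute buckets actually saw traffic.
latencies = [0.3, 0.4, 0.2]   # per-minute latency where traces exist
empty_buckets = 7             # minutes with no traces at all

# Without fill(zero): the average is taken over real datapoints only.
avg_real = sum(latencies) / len(latencies)

# With fill(zero): empty minutes become 0 and drag the average down.
filled = latencies + [0.0] * empty_buckets
avg_filled = sum(filled) / len(filled)

print(round(avg_real, 3))    # 0.3  -> matches the 0.2-0.5 range seen on the metric
print(round(avg_filled, 3))  # 0.09 -> matches the <0.1 seen on the evaluation graph
```

That gap (0.3 vs 0.09) is the same shape as the discrepancy described at the top of the thread.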