Kube-prometheus-stack not pushing some metrics to Prometheus

Hey all.

We have implemented some internal metrics in our pods (EKS 1.20) using histograms.

We use the kube-prometheus-stack.

How do I make sure they're being sent to our Prometheus?
It's currently collecting all the standard stuff like CPU, RAM and the rest… but not the metrics we've added.

Who can assist?

Hey, I’m writing up an internal guide right now, so this is close to my heart. AFAIK for service discovery to work, you need:

  1. A running exporter
  2. A Service exposing the containerPort
  3. A ServiceMonitor pointing Prometheus at the Service

I can clarify - hit me up if confused.

Hey mate, sounds great
I think I have 1 and 2.

How do I set the service monitor?

Here’s my template:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: watchihuahua
  labels:
    app: watchihuahua
    release: prometheus
spec:
  endpoints:
    - port: metrics
    #- targetPort: 80
  selector:
    matchLabels:
      app: watchihuahua
```
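
One gotcha worth checking here: the Prometheus that kube-prometheus-stack deploys only picks up ServiceMonitors whose labels match its `serviceMonitorSelector`, and by default the Helm chart sets that to `release: <your-helm-release-name>`. So if your release isn't literally named `prometheus`, the `release: prometheus` label above won't match and the ServiceMonitor is silently ignored. You can also relax the selector in the chart values (a sketch against a `values.yaml` for the kube-prometheus-stack chart):

```yaml
# values.yaml for the kube-prometheus-stack Helm chart (sketch)
prometheus:
  prometheusSpec:
    # false = Prometheus picks up ALL ServiceMonitors in the cluster,
    # not only those labelled to match the Helm release
    serviceMonitorSelectorNilUsesHelmValues: false
```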

And the Service defines the named port `metrics`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: watchihuahua
  annotations:
    prometheus.io/scrape: 'true'
    #prometheus.io/scheme: 'http'
    #prometheus.io/port: '80'
  labels:
    app: watchihuahua
    release: prometheus
spec:
  ports:
    - name: metrics
      port: 9100
      protocol: TCP
      targetPort: 80
  selector:
    app: watchihuahua
  type: ClusterIP
```
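
Heads up on those `prometheus.io/*` annotations: with kube-prometheus-stack they do nothing out of the box - the operator's Prometheus is driven by ServiceMonitor/PodMonitor objects, not annotations. If you really want annotation-based scraping you have to add a scrape job yourself, e.g. via the chart's `additionalScrapeConfigs` (a sketch, assuming you want pod-annotation discovery; otherwise the ServiceMonitor alone is the way to go):

```yaml
# values.yaml sketch - optional annotation-driven scrape job
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: annotated-pods
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          # keep only pods annotated prometheus.io/scrape: "true"
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: "true"
```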

Happy to share :smiley:

Does it have to be in the same namespace as my pod?


Not sure - I have Prometheus + Grafana + my exporters living in one namespace so far. I'm pretty sure there's a way to specify the namespace if they're separated … I'm guessing.

Just made another instance of my svc in the default ns and it's not working - probably have to specify it somewhere, since the pod is in another ns. Is that something you're forced to do? (That's what I've found so far.)

Edit: Yep, looks like you’ll have to make a service in the same ns as your pod/exporter, then a 2nd service in … whatever ns you’re trying to access it from.
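
Late addition in case anyone finds this thread: you shouldn't actually need a second Service. A Service does have to live in the same namespace as the pods it selects, but the ServiceMonitor can target Services in other namespaces via `namespaceSelector`. A sketch, assuming the Service lives in `default`:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: watchihuahua
  labels:
    release: prometheus
spec:
  namespaceSelector:
    matchNames:
      - default   # namespace where the Service lives
  selector:
    matchLabels:
      app: watchihuahua
  endpoints:
    - port: metrics
```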