Hi friends, we are trying to identify poorly performing queries in AWS RDS MySQL. Is anyone familiar with building observability around this? I understand there is a lot of data to be viewed in performance_schema. I'm looking first to see if there is something prebuilt where we can ship the data to OTel > Grafana Loki/Prometheus to view. We are aware of Performance Insights,
just looking for something a bit more comprehensive that we can track historically.
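If you want to poke at performance_schema by hand before wiring up an exporter, a starting point is the per-statement digest summary table. This is only a sketch: the table and columns are standard MySQL 8.0 performance_schema, but check what your RDS parameter group actually has enabled.

```sql
-- Top 10 statement digests by total time spent.
-- performance_schema timers are in picoseconds, hence the division.
SELECT
  DIGEST_TEXT                AS normalized_query,
  COUNT_STAR                 AS exec_count,
  SUM_TIMER_WAIT / 1e12      AS total_secs,
  AVG_TIMER_WAIT / 1e12      AS avg_secs,
  SUM_ROWS_EXAMINED          AS rows_examined
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;
```

For the "prebuilt" part, the Prometheus mysqld_exporter has collectors that scrape these same performance_schema tables, which might cover the Prometheus/Grafana leg without custom glue (I haven't run it against RDS myself, so treat that as a lead to verify).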
G’day.
My experience is in Postgres, so take my suggestions with a grain of salt.
<https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.MySQL.LogFileSize.html#USER_LogAccess.MySQL.Generallog|It looks like MySQL has a slow log configuration that can be applied>. We use a similar setup with Postgres and apply a custom metric filter to the CloudWatch logs so we can generate metrics on slow queries, track history, and alert on it. So you can get metrics on slow queries and also look at the actual query in the logs.
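For MySQL the relevant knobs look roughly like the below. On RDS you set them in the DB parameter group rather than with `SET GLOBAL` (RDS restricts SUPER); the values shown are illustrative, not recommendations:

```sql
-- Illustrative slow-log settings; on RDS these live in the parameter group:
--   slow_query_log  = 1     -- enable the slow log
--   long_query_time = 1     -- log statements slower than 1 second
--   log_output      = FILE  -- FILE so logs can be published to CloudWatch
-- Verify what's currently in effect:
SHOW VARIABLES LIKE 'slow_query_log';
SHOW VARIABLES LIKE 'long_query_time';
SHOW VARIABLES LIKE 'log_output';
```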
We haven’t used it extensively, but we have a Grafana setup that hooks into Cloudwatch, which may or may not work for your environment.
Agree with the above, but a word of warning with slow logs: if you have any sort of bulk op / data dump going on, be careful. E.g. a many-megabytes statement dumping in tons of rows is slow. Having that logged (including the many megabytes of data) is very unhelpful, and the additional logging can put more load on the DB than the insert did in the first place.
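One mitigation worth checking for that case (verify against your own workload before relying on it): MySQL's `min_examined_row_limit` skips slow-logging any statement that examined fewer than N rows. Since a plain bulk `INSERT ... VALUES` examines no rows, setting this should filter those out while still logging slow reads:

```sql
-- min_examined_row_limit is set in the RDS parameter group; shown here
-- only to illustrate the variable. A statement examining fewer rows than
-- this threshold is not written to the slow query log.
SHOW VARIABLES LIKE 'min_examined_row_limit';
```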
thank you both, i will take a look
The Database Insights feature on CloudWatch looks very useful
Downloading PostgreSQL slow logs and running them through pgBadger has always worked well for me, but it has had no real MySQL equivalent (weirdly). Just two days ago someone claimed to have done an AI-assisted port to work with MySQL logs: https://www.linkedin.com/pulse/story-building-mysqlbadger-ai-marian-simpetru-pimpf … I can't vouch for it, but maybe it works.