Why is the cleanup.policy of the topic `__consumer_offsets` COMPACT?

Hi all, why is the cleanup.policy of the topic __consumer_offsets set to COMPACT? Is there a specific reason?

Because Kafka only needs the latest committed offset per consumer group and topic-partition, and it needs it for every topic-partition, even if the last commit for that topic-partition happened a long time ago. Compaction keeps every key (the key is the consumergroup-topic-partition combination, the value is the committed offset) but retains only the latest message for each key. That is exactly what Kafka requires here.
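To make that concrete, here is a minimal sketch of the guarantee compaction gives: for every key, only the most recent value survives, but no key is ever dropped. The function and record names are hypothetical, chosen to mimic an offset-commit record where the key is (group, topic, partition) and the value is the committed offset.

```python
def compact(log):
    """Return the compacted view of a log of (key, value) records.

    Later records for the same key overwrite earlier ones, which is
    the retention behaviour of cleanup.policy=compact.
    """
    latest = {}
    for key, value in log:
        latest[key] = value  # newest record per key wins
    return latest

# Hypothetical offset-commit records:
# key = (consumer group, topic, partition), value = committed offset.
log = [
    (("group-a", "orders", 0), 10),
    (("group-a", "orders", 1), 7),
    (("group-a", "orders", 0), 25),  # newer commit for the same key
]

print(compact(log))
# Both keys survive, but partition 0 keeps only its latest offset, 25.
```

Note that compaction shrinks the log by value, not by key: a group that committed once months ago still has its (single) record retained forever.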
You can find more info here: https://medium.com/swlh/introduction-to-topic-log-compaction-in-apache-kafka-3e4d4afd2262

Thanks. I am asking because I am running Kafka 2.0.0, and some partitions of the topic __consumer_offsets are very big (500-700 GB). Those partitions contain thousands of segments that are older than 2-3 months. How is this possible? Thanks!

I have already restarted the broker, and this happens only for some partitions of the topic __consumer_offsets.