Kafka - re-consuming or reprocessing failed messages

Hello,

I have a microservice setup. On the consumer side, if I can't process an event I don't want to commit the message. In this case, what happens to these messages? How can I receive this message again? At that moment I couldn't process the message, but now I can. How do you solve this kind of problem?

Hi, if your consumer throws an error before it has completed processing a message, and so does not write the consumer offsets to mark the message as consumed, then the message will be redelivered.
If you consume a message but don't want to process it and don't want it to be redelivered, then just skip the processing logic but allow the consume to complete - the consumer offsets are written, marking the message as consumed, so it is not redelivered.
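The two options above can be sketched with a minimal simulation (plain Python, not the real Kafka client - the names and loop are illustrative only): skip-and-commit lets the offset advance past a bad message, while rethrowing stops the commit so the message comes back on the next poll.

```python
# Simulation of Kafka consumer commit behaviour (not the real client).

def consume(messages, committed_offset, process, skip_on_error):
    """Process messages after committed_offset; return the new committed offset."""
    for offset, payload in messages:
        if offset <= committed_offset:
            continue  # already consumed
        try:
            process(payload)
        except Exception:
            if not skip_on_error:
                # The error propagates before the offset is committed,
                # so this message is redelivered on the next poll.
                return committed_offset
            # Skip: fall through and commit anyway.
        committed_offset = offset  # mark as consumed
    return committed_offset

messages = [(1, "a"), (2, "bad"), (3, "c")]

def process(payload):
    if payload == "bad":
        raise ValueError("cannot process")

# Option 1: skip the failing message -> all offsets committed, no redelivery.
assert consume(messages, 0, process, skip_on_error=True) == 3

# Option 2: rethrow -> commit stops before the failure; offsets 2 and 3
# are delivered again on the next poll.
assert consume(messages, 0, process, skip_on_error=False) == 1
```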

Hey, thanks for the help.
My case is like this.
Order Service creates an event OrderCreated.
Payment Service holds a copy of the orders and consumes the order-created topic. If, at the moment the message is delivered, I can't insert the order in the Payment Service, I want this message to be redelivered. In NATS Streaming server, when you don't commit a message, it is automatically redelivered. How can I achieve something like that in Kafka?

> when message is delivered, I can’t insert the order in the payment service
Throw an error in the code. The message is not marked as consumed, so it is redelivered on the next consumer poll.

When an error happens, I don't commit the message, so it stays uncommitted. But let's say this message has offset 40, the next messages are processed successfully, and the last committed offset becomes 50. Message 40 is still uncommitted, but the next consumer starts from offset 50, so message 40 remains unprocessed.

And apparently I would need a new consumer to start from the uncommitted message, but I want this message to be redelivered immediately, because I have 3 more replicas consuming this event - maybe another replica will process it successfully, and it was only a momentary error.

Messages at offsets 41 to 50 in the same partition will not be processed before the message at offset 40.

this scenario is explained here, see scenario 3:

https://www.lydtechconsulting.com/blog-kafka-message-batch-consumer-retry.html
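The blocking-retry behaviour described there can be sketched as follows (a plain Python simulation, not the real client - `max_attempts` and the flaky handler are illustrative assumptions): the consumer keeps retrying the same offset until it succeeds, so later offsets in the partition wait behind it.

```python
import itertools

def blocking_retry_consume(messages, process, max_attempts=5):
    """Retry each message in order until it succeeds; later messages
    in the partition are never processed before earlier ones."""
    processed = []
    for offset, payload in messages:
        for attempt in itertools.count(1):
            try:
                process(payload)
                processed.append(offset)
                break
            except Exception:
                if attempt >= max_attempts:
                    raise  # give up: dead-letter topic, alerting, etc.
    return processed

attempts = {"b": 0}

def flaky_process(payload):
    # Payload "b" fails twice (a transient error), then succeeds.
    if payload == "b":
        attempts["b"] += 1
        if attempts["b"] <= 2:
            raise RuntimeError("transient DB error")

# Offsets 41-43 wait until offset 40 finally succeeds.
assert blocking_retry_consume(
    [(40, "b"), (41, "x"), (42, "y"), (43, "z")], flaky_process
) == [40, 41, 42, 43]
```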

I am coming from a NATS background.

Can you suggest anything for using Kafka with microservices?

What I meant by offset was this: I didn't commit the message with offset 2, but I committed messages 1, 3, and 4. Now this group id is not processing message 2 but is processing new messages. To process message 2 I would need another consumer group with a different id.

You have manually committed 4? Then yes, 2 will not be re-consumed. Can you hold off processing/committing later messages until the earlier ones have completed?

I commit messages manually: first read, then commit if I can write the record to MySQL; if I can't, I don't commit the message. I can't hold them off. In NATS Streaming, when I don't commit a message it is redelivered by the NATS Streaming server, as it didn't get feedback from me that I committed the message. I was looking for something like this in Kafka.

Order Service produces OrderCreatedEvent -> Payment Service holds a copy of the data by inserting it into MySQL. I had 4 incoming records, and only when processing the message at offset 2 was there a momentary MySQL error, so it couldn't process that message, but it processed 1, 3, and 4. As it was a momentary error, I was expecting Kafka or my consumers to re-consume the message at offset 2.
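The key point, sketched below (a simulation, not the real broker - names like `payment-service` are just illustrative): Kafka stores one committed position per consumer group and partition, not a per-message flag, so committing after message 4 implicitly marks message 2 as consumed even though its processing failed.

```python
# Kafka keeps one committed position per (group, partition),
# not an ack per individual message.

committed = {}  # (group, partition) -> next offset to read

def commit(group, partition, last_processed_offset):
    committed[(group, partition)] = last_processed_offset + 1

def next_to_read(group, partition):
    return committed.get((group, partition), 0)

commit("payment-service", 0, 4)          # committed after message 4
assert next_to_read("payment-service", 0) == 5  # resumes at offset 5
# Offset 2 is behind the committed position: it is never redelivered.
```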

Because I didn’t commit the message as it was an error.

If you consume a batch of 4 messages (1, 2, 3, 4) with auto-commit enabled and message 2 fails, then you get the behaviour as per scenario 3 in the above article: a new batch will be consumed with messages 2, 3, 4, 5, etc. (as message 1 will have completed successfully).

But if I auto-commit messages, how can I apply business logic? For example, if I am sending emails and SMS to new customers, I have 2 consumer groups. I am receiving a thousand new users per minute, but if I can't send an SMS to a user, it shouldn't be committed; it should be redelivered by Kafka and reprocessed again.

Because I didn’t send the email but I have to send.

In this case, the user service sends a message to the user-created topic, and two microservices consume this topic. They need to do their job - otherwise why would I commit the message?
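Worth noting for this setup: each consumer group tracks its own committed position on the topic, so the email service and the SMS service progress (and retry) independently. A minimal sketch, assuming hypothetical group names `email-service` and `sms-service`:

```python
# Each consumer group commits its own position on the same topic,
# so the email and SMS services progress independently.
offsets = {"email-service": 0, "sms-service": 0}

def run_group(group, messages, process):
    pos = offsets[group]
    for offset, payload in messages:
        if offset < pos:
            continue  # already consumed by this group
        try:
            process(group, payload)
        except Exception:
            break  # this group stops here; the other group is unaffected
        pos = offset + 1
    offsets[group] = pos

messages = [(0, "u1"), (1, "u2"), (2, "u3")]

def process(group, user):
    if group == "sms-service" and user == "u2":
        raise RuntimeError("sms provider error")

run_group("email-service", messages, process)
run_group("sms-service", messages, process)
assert offsets["email-service"] == 3  # all emails sent
assert offsets["sms-service"] == 1    # stuck before u2; retries from there
```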

Note that the auto-committing happens at the end of the consumer batch poll, not the beginning, so offsets are only committed after the business logic has run.

If a message fails processing, only the offsets of the successfully processed messages are committed.
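Putting the last two points together, a sketch (simulation only, with illustrative payloads): the batch is processed in order, the commit covers only the offsets up to the first failure, and the next poll resumes at the failed message.

```python
def poll_and_process(messages, start_offset, process):
    """Process a polled batch in order; on failure, commit only the
    offsets processed so far and let the rest be redelivered."""
    committed = start_offset
    for offset, payload in messages:
        try:
            process(payload)
        except Exception:
            break  # do not commit this or any later offset
        committed = offset + 1  # commit happens after the business logic
    return committed  # the next poll starts here

def process(payload):
    if payload == "fail":
        raise RuntimeError("sms gateway down")

batch = [(10, "ok"), (11, "fail"), (12, "ok")]
# Commit stops before offset 11; messages 11 and 12 are redelivered.
assert poll_and_process(batch, 10, process) == 11
```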