Sorry for being a newbie in the field, but I'd be happy if someone could point me to good reading on how streaming differs from micro-batching from a technical point of view. Isn't micro-batching just making the batches very small and frequent, effectively creating streaming by itself?
To me it's the same thing, since Kafka is already typically micro-batching.
Kafka consumers are polling under the hood, so they are not really streaming in the strict sense (i.e. events being pushed to you with no way to apply back pressure). Reading this Javadoc covers a lot of the concepts: https://kafka.apache.org/10/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html
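To make the poll-vs-push distinction concrete, here is a toy Python sketch (not the actual Kafka API) of a poll-based consumer: the consumer drains a buffer in small batches at its own pace, which is exactly why polling gives you back pressure for free — nothing is delivered until you ask for it.

```python
from collections import deque

def poll(buffer, max_records):
    """Return up to max_records items from the buffer,
    analogous to one micro-batch returned by a poll() call."""
    batch = []
    while buffer and len(batch) < max_records:
        batch.append(buffer.popleft())
    return batch

# Pretend these are records sitting on a partition.
buffer = deque(range(10))

# The consumer controls when and how much to consume: if it is
# slow, records simply wait in the buffer instead of overwhelming it.
batches = []
while buffer:
    batches.append(poll(buffer, max_records=3))

print(batches)  # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
```

In the real client, `max.poll.records` plays a similar role in capping how many records a single `poll()` returns.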
Indeed, and a typical (high-performing) producer will also bundle multiple messages into one TCP packet when the load is high enough.
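That producer-side bundling is tunable: the Kafka producer's `linger.ms` and `batch.size` settings control how long it waits and how many bytes it accumulates before sending a request. A sketch with illustrative values:

```properties
# Wait up to 5 ms for more records before sending a request,
# trading a little latency for larger batches.
linger.ms=5
# Accumulate up to 32 KB of records per partition before sending.
batch.size=32768
```

With `linger.ms=0` (the default) the producer still batches whatever has arrived while the previous request was in flight, which is why batching shows up under high load even without tuning.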
If you're familiar with Spark Streaming, it operates the same way as Kafka consumers.