kafka "stops working" after a large message is enqueued -


I'm running kafka_2.11-0.9.0.0 with a Java-based producer and consumer. Messages of ~70 KB work fine. However, after the producer enqueues a larger, 70 MB message, Kafka appears to stop delivering messages to the consumer: not only is the large message not delivered, but subsequent smaller messages aren't delivered either. I know the producer succeeds because I used the Kafka callback for confirmation and I can see the messages in the Kafka message log.

Custom changes to the Kafka broker config:

message.max.bytes=200000000
replica.fetch.max.bytes=200000000

Consumer config:

props.put("fetch.message.max.bytes",   "200000000"); props.put("max.partition.fetch.bytes", "200000000"); 

You need to increase the size of the messages the consumer can consume so it doesn't get stuck trying to read a message that is too big.

max.partition.fetch.bytes (default value: 1048576 bytes)

The maximum amount of data per partition the server will return. The maximum total memory used for a request will be #partitions * max.partition.fetch.bytes. This size must be at least as large as the maximum message size the server allows, or else it is possible for the producer to send messages larger than the consumer can fetch. If that happens, the consumer can get stuck trying to fetch a large message on that partition.
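For reference, a minimal sketch of what the fixed consumer might look like with the 0.9 (new) consumer API, where max.partition.fetch.bytes is the relevant setting (fetch.message.max.bytes is the old consumer's equivalent). The topic name "big-messages", group id, and localhost broker address are assumptions for illustration:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class LargeMessageConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("group.id", "large-message-group");        // assumed group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        // Must be at least as large as the broker's message.max.bytes (200000000 here),
        // otherwise the consumer can get stuck on the oversized message.
        props.put("max.partition.fetch.bytes", "200000000");

        KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("big-messages")); // assumed topic name
        while (true) {
            ConsumerRecords<String, byte[]> records = consumer.poll(1000);
            for (ConsumerRecord<String, byte[]> record : records) {
                System.out.printf("offset=%d, size=%d bytes%n",
                        record.offset(), record.value().length);
            }
        }
    }
}

With this raised limit, the fetch for the partition holding the 70 MB message can succeed, and the smaller messages queued behind it are delivered again.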

