python - Amazon SQS duplicated messages in queue



I am not experienced with the Amazon SQS service. I have to read messages from a queue I do not own and process them, building a small database of information.

Up until now I have had code that reads messages from the queue and processes them. The script runs periodically.

However, I have observed that the number of messages in the queue has become large. When I took a sample of 10000 messages, I found around 6000 duplicates.

I am puzzled by this sudden change in behavior (up until now I did not observe duplicate messages). The queue never seems to run out.

This is the code I use to read messages from the queue:

import boto.sqs
import boto.sqs.queue

conn = boto.sqs.connect_to_region(
    'myregions',
    aws_access_key_id='myacceskey',
    aws_secret_access_key='secretacceskey')
q = boto.sqs.queue.Queue(connection=conn, url='outputqueue')

rs = q.get_messages(10)
all_messages = []
while len(rs) > 0:
    all_messages.extend(rs)
    print(len(all_messages))
    rs = q.get_messages(10)

Can anyone explain why I am suddenly getting duplicated messages? I do not have permission to see how large the queue is; how can I count the messages in it? Am I doing this right?

After processing a message from the queue, you need to tell SQS that the message has been processed and should be deleted. Failing to do so means the message sits in the queue and is re-fetched until it reaches its receive limit and is sent to a dead-letter queue, or expires.

SQS does not guarantee uniqueness and can deliver duplicates, but you can set a visibility timeout to prevent a message from being read again for a period of time after it has been retrieved, e.g. a minute or so, to give you time to process the message and delete it from the queue. That should avoid most duplicates.
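A minimal sketch of requesting a per-read visibility timeout with boto 2 (the library used in the question). The `queue` argument is assumed to be a `boto.sqs.queue.Queue`, and the helper name `fetch_batch` is my own, not part of boto:

```python
def fetch_batch(queue, batch_size=10, visibility_timeout=60):
    """Fetch up to batch_size messages from an SQS queue (boto 2 API).

    Each returned message stays hidden from other consumers for
    visibility_timeout seconds; process and delete it before the
    timeout expires, or it becomes visible (and fetchable) again.
    """
    return queue.get_messages(batch_size, visibility_timeout=visibility_timeout)
```

The visibility timeout can also be set once on the whole queue (in the console or via queue attributes) instead of per read; the per-read form just overrides the queue default for that batch.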

As for deleting messages: iterate over the messages, process each one, and then run either...

conn.delete_message(q, message)

or

q.delete_message(message)
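Putting the two answers together, a hedged sketch of the full read-process-delete loop (boto 2 interface; `process` is a placeholder for your own handling step, and the function name is mine):

```python
def process_and_delete(queue, process, batch_size=10):
    """Drain an SQS queue: fetch in batches, process, and delete.

    Deleting each message after processing is what stops it from
    reappearing in the queue once its visibility timeout expires.
    """
    handled = 0
    batch = queue.get_messages(batch_size)
    while batch:
        for message in batch:
            process(message)               # your own processing step
            queue.delete_message(message)  # acknowledge: remove from queue
            handled += 1
        batch = queue.get_messages(batch_size)
    return handled
```

This replaces the question's accumulate-only loop: instead of collecting messages into `all_messages` and never deleting them, each message is removed as soon as it has been processed.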

