How to achieve atomicity when publishing Kafka messages
I have a scenario where I fetch a set of records from a database and then iterate over them, pushing each record to a Kafka topic. Now let us assume I have retrieved 10 records, I have pushed the first 5 during the iteration, and there is an exception on the 6th record; I want to revert the messages that were already pushed to the topic. This is similar to database transactionality. Can we attain atomicity in Kafka?
Thanks.
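For context, a plain non-transactional producer loop along the lines described above might look like the following sketch; the broker address, topic name and the fetchRecordsFromDatabase helper are hypothetical stand-ins for the setup in the question:

```java
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class NaivePublisher {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Stand-in for the 10 records fetched from the database in the question.
            List<String> records = fetchRecordsFromDatabase();
            for (String record : records) {
                // If this throws on the 6th record, the first 5 records are already
                // in the topic and there is no built-in way to take them back.
                producer.send(new ProducerRecord<>("my-topic", record));
            }
        }
    }

    // Placeholder for the database query described in the question.
    private static List<String> fetchRecordsFromDatabase() {
        return List.of("r1", "r2", "r3", "r4", "r5", "r6", "r7", "r8", "r9", "r10");
    }
}
```

Here, once send() has been called for the first five records, nothing can un-publish them if the sixth throws, which is exactly the gap the question is about.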
2 Answers
Yes, you can use transactions; the record(s) will remain in the log, and Kafka puts a marker in the log to indicate whether the transaction was committed or rolled back.
Consumers must use isolation.level=read_committed to avoid reading rolled-back records.
https://docs.spring.io/spring-kafka/docs/2.8.4-SNAPSHOT/reference/html/#transactions
https://kafka.apache.org/documentation/#consumerconfigs_isolation.level
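As a rough sketch of what that looks like with the plain Java producer API (the broker address, topic name, transactional.id and the fetchRecordsFromDatabase helper are placeholders for illustration, not anything from the question or the linked docs), the whole batch is committed or aborted as one unit:

```java
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalPublisher {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // A stable transactional.id enables idempotence and transactions for this producer.
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "record-publisher-tx");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            List<String> records = fetchRecordsFromDatabase();
            try {
                producer.beginTransaction();
                for (String record : records) {
                    producer.send(new ProducerRecord<>("my-topic", record));
                }
                // The sends only become visible to read_committed consumers here.
                producer.commitTransaction();
            } catch (Exception e) {
                // A failure (e.g. on the 6th record) aborts the whole batch; the
                // already-written records stay in the log but are marked aborted.
                // (Fatal errors such as ProducerFencedException would instead
                // require closing the producer.)
                producer.abortTransaction();
                throw new RuntimeException("Batch publish failed, transaction aborted", e);
            }
        }
    }

    // Placeholder for the database query described in the question.
    private static List<String> fetchRecordsFromDatabase() {
        return List.of("r1", "r2", "r3", "r4", "r5", "r6", "r7", "r8", "r9", "r10");
    }
}
```

On the consuming side, setting isolation.level=read_committed in the consumer configuration hides records from aborted transactions, as the answer notes; with the default read_uncommitted, consumers would still see them.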
Once data is in the topic it cannot be modified; you'd need to delete the entire topic and start again from the very beginning.
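If you did want to go that route, a minimal sketch using the Kafka AdminClient could look like this; the broker address and topic name are assumptions for illustration:

```java
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class DeleteTopic {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // "my-topic" is a placeholder for the topic described in the question;
            // the brokers must allow topic deletion (delete.topic.enable=true).
            admin.deleteTopics(List.of("my-topic")).all().get();
        }
    }
}
```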