How to create a program that acts as a "buffer" holding MySQL data locally, then passes it on to a cloud DB when available

Posted 2025-02-05 20:28:41


I want to create a Python3 program that takes in MySQL data and holds it temporarily, and can then pass this data onto a cloud MySQL database.

The idea would be that it acts as a buffer for entries in the event that my local network goes down, the buffer would then be able to pass those entries on at a later date, theoretically providing fault-tolerance.

I have done some research into Replication and GTIDs and I'm currently in the process of learning these concepts. However I would like to write my own solution, or at least have it be a smaller program rather than a full implementation of replication server-side.

I already have a program that generates some MySQL data to fill my DB, the key part I need help with would be the buffer aspect/implementation (The code itself I have isn't important as I can rework it later on).

I would greatly appreciate any good resources or help, thank you!


Comments (1)

苍白女子 2025-02-12 20:28:41


I would implement what you describe using a message queue.

Example: https://hevodata.com/learn/python-message-queue/

The idea is to run a message queue service on your local computer. Your Python application pushes items into the MQ instead of committing directly to the database.
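As a minimal sketch of the "push into a local buffer instead of the remote database" step, a durable local store can stand in for the MQ service; here SQLite plays that role (the `buffer` table, `open_buffer`/`enqueue` names, and the JSON row format are illustrative assumptions, not part of the answer):

```python
import json
import sqlite3

def open_buffer(path="buffer.db"):
    """Open (or create) a SQLite-backed local buffer standing in for the MQ."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS buffer ("
        " id INTEGER PRIMARY KEY AUTOINCREMENT,"
        " payload TEXT NOT NULL)"
    )
    conn.commit()
    return conn

def enqueue(conn, row):
    """The app 'writes' here instead of INSERTing into the remote database."""
    conn.execute("INSERT INTO buffer (payload) VALUES (?)", (json.dumps(row),))
    conn.commit()

# Usage: push a row the way the app would normally INSERT it.
conn = open_buffer(":memory:")
enqueue(conn, {"sensor_id": 7, "reading": 21.5})
```

A real MQ service (RabbitMQ, Redis, etc.) gives you the same enqueue semantics with more tooling; the point is only that the app's write lands somewhere durable on the local machine.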

Then you need another background task, called a worker, which you may also write in Python or another language, which consumes items from the MQ and writes them to the cloud database when it's available. If the cloud database is not available, then the background worker pauses.
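The worker loop described above could look roughly like this, again using SQLite as the stand-in buffer; `write_to_cloud` is a hypothetical callable that performs the real INSERT and raises when the cloud DB is unreachable:

```python
import json
import sqlite3
import time

def drain(conn, write_to_cloud, pause_seconds=5.0):
    """Consume buffered rows oldest-first; pause when the cloud DB is down."""
    while True:
        item = conn.execute(
            "SELECT id, payload FROM buffer ORDER BY id LIMIT 1"
        ).fetchone()
        if item is None:
            return  # buffer drained
        row_id, payload = item
        try:
            write_to_cloud(json.loads(payload))
        except ConnectionError:
            time.sleep(pause_seconds)  # cloud DB unavailable: pause, then retry
            continue
        # Delete only after a successful write, so a crash never loses data.
        conn.execute("DELETE FROM buffer WHERE id = ?", (row_id,))
        conn.commit()

# Demo: one buffered row, and a cloud write that fails once, then recovers.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE buffer (id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT NOT NULL)"
)
conn.execute("INSERT INTO buffer (payload) VALUES (?)", (json.dumps({"reading": 21.5}),))
conn.commit()

delivered = []
calls = {"n": 0}

def write_to_cloud(row):  # illustrative stand-in for the real cloud INSERT
    calls["n"] += 1
    if calls["n"] == 1:
        raise ConnectionError("cloud DB unreachable")
    delivered.append(row)

drain(conn, write_to_cloud, pause_seconds=0)
```

Deleting a row only after the cloud write succeeds is what makes the pause safe: an item stays in the buffer until it has definitely been delivered.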

The data in the MQ can grow while the background worker is paused. If this goes on too long, you may run out of space. But hopefully the rate of growth is slow enough and the cloud database is available regularly, so the risk of this happening is low.
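Guarding against that space risk can be as simple as watching the backlog depth; a sketch (the `10_000`-row threshold is an arbitrary assumption):

```python
import json
import sqlite3

def buffer_depth(conn):
    """Number of buffered rows still waiting to be delivered."""
    return conn.execute("SELECT COUNT(*) FROM buffer").fetchone()[0]

def backlog_alarm(conn, max_backlog=10_000):
    """True when the backlog suggests the worker has been paused too long."""
    return buffer_depth(conn) > max_backlog

# Demo: an in-memory buffer holding three undelivered rows.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE buffer (id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT NOT NULL)"
)
for reading in (1.0, 2.0, 3.0):
    conn.execute("INSERT INTO buffer (payload) VALUES (?)", (json.dumps({"reading": reading}),))
conn.commit()
```

What to do when the alarm fires (alert, drop oldest, stop the producer) depends on how much data loss you can tolerate.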


Re your comment about performance.

This is a different application architecture, so there are pros and cons.

On the one hand, if your application is "writing" to a local MQ instead of the remote database, it's likely to appear to the app as if writes have lower latency.

On the other hand, posting to the MQ does not write to the database immediately. There still needs to be a step of the worker pulling an item and initiating its own write to the database. So from the application's point of view, there is a brief delay before the data appears in the database, even when the database seems available.

So the app can't depend on the data being ready to be queried immediately after the app pushes it to the MQ. That is, it might be pretty prompt, under 1 second, but that's not the same as writing to the database directly, which ensures that the data is ready to be queried immediately after the write.

The performance of the worker writing the item to the database should be identical to that of the app writing that same item to the same database. From the database perspective, nothing has changed.
