Avoiding "MySQL server has gone away" with SQLAlchemy on an infrequently used Python / Flask server

How can Flask / SQLAlchemy be configured to create a new database connection if one is not present?

I have an infrequently visited Python / Flask server which uses SQLAlchemy. It gets visited every couple of days, and on the first visit it often throws a "MySQL server has gone away" error. Subsequent page views are fine, but it looks unprofessional to have this initial error.

I'd like to know the correct way to handle this - advice like "make a really long time out", which would be about 4 days long in this case, doesn't seem correct. How can I test for the lack of a database connection and create one if needed?

7 Answers

执手闯天涯 2024-11-23 23:03:39

I've had trouble with this before, and found that the way to handle it is by not keeping sessions around. The trouble is you are trying to keep a connection open for way too long. Instead, use a thread local scoped session like so either in __init__.py or in a utility package that you import everywhere:

from sqlalchemy.orm import scoped_session, sessionmaker
Session = scoped_session( sessionmaker() )

Then set up your engines and metadata once. This allows you to skip configuration mechanics every time you connect/disconnect. After that, you can do your db work like this:

session = Session()
someObject = session.query( someMappedClass ).get( someId )
# use session like normal ...
session.close()

If you want to hold on to old objects and you don't want to leave your session open, then you can use the above pattern and reuse old objects like this:

session = Session()
someObject = session.merge( someObject )
# more db stuff
session.close()

The point is, you want to open your session, do your work, then close your session. This avoids timeouts very well. There are lots of options for .merge and .add that allow you to either include changes you've made to detached objects or to load new data from the db. The docs are very verbose, but once you know what you are looking for it might be a little easier to find.

To actually get all the way there and prevent MySQL from "going away", you need to address the connection pool keeping connections open too long and handing you a stale one at checkout.

To get a fresh connection, set the pool_recycle option in your create_engine call. pool_recycle is the number of seconds after which a pooled connection is considered stale: at the next checkout it is discarded and replaced with a new connection instead of being returned as-is.
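
For reference, a minimal sketch of what that engine setup might look like, assuming a pymysql driver, a placeholder connection URL, and an example recycle interval of one hour:

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

# Placeholder URL and driver; substitute your own credentials.
engine = create_engine(
    "mysql+pymysql://user:pw@host/db",
    pool_recycle=3600,  # discard pooled connections older than an hour at checkout
)

# Bind the thread-local session factory to the engine once, at startup.
Session = scoped_session(sessionmaker(bind=engine))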

谈场末日恋爱 2024-11-23 23:03:39

I had a similar issue, but for me I'd get the 'MySQL has gone away' error somewhere between 5 minutes and 2 hours of each session.

I'm using Flask-SQLAlchemy so it's supposed to close idle connections, but didn't seem to be doing that unless the connection had been idle for over a couple of hours.

Eventually I narrowed it down to the following Flask-SQLAlchemy settings:

app.config['SQLALCHEMY_POOL_SIZE'] = 100
app.config['SQLALCHEMY_POOL_RECYCLE'] = 280

The default settings for these are 10 and 7200 (2 hours) respectively.

It's a matter of playing around with these settings to fit your environment.

For example, I'd read in many places that SQLALCHEMY_POOL_RECYCLE should be set to 3600, but that didn't work for me. I'm hosting with PythonAnywhere and they kill idle MySQL connections after 5 minutes (300 seconds). So setting my value to less than 300 solved the problem.

I hope this helps others, because I wasted WAY too much time on this issue.

http://flask-sqlalchemy.pocoo.org/2.1/config/#configuration-keys

UPDATE: 2019-OCT-08

The configuration keys 'SQLALCHEMY_POOL_SIZE' and 'SQLALCHEMY_POOL_RECYCLE' are deprecated as of Flask-SQLAlchemy v2.4 and will be removed in v3.0. Use 'SQLALCHEMY_ENGINE_OPTIONS' to set the corresponding values.

app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {'pool_size' : 100, 'pool_recycle' : 280}
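
For context, a minimal sketch of where these settings might sit in a Flask-SQLAlchemy app, assuming Flask-SQLAlchemy 2.4+ and a placeholder database URI:

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://user:pw@host/db'  # placeholder URI
# Engine options are passed through to create_engine(); 280s keeps recycling
# below a 300-second idle-connection limit like PythonAnywhere's.
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {'pool_size': 100, 'pool_recycle': 280}
db = SQLAlchemy(app)
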
轮廓§ 2024-11-23 23:03:39

2018 answer: In SQLAlchemy v1.2.0+, you have the connection pool pre-ping feature available to address this issue of "MySQL server has gone away".

Connection pool pre-ping - The connection pool now includes an optional "pre ping" feature that will test the "liveness" of a pooled connection for every connection checkout, transparently recycling the DBAPI connection if the database is disconnected. This feature eliminates the need for the "pool recycle" flag as well as the issue of errors raised when a pooled connection is used after a database restart.

Pessimistic testing of connections upon checkout is possible with the new argument:

engine = create_engine("mysql+pymysql://user:pw@host/db", pool_pre_ping=True)
西瑶 2024-11-23 23:03:39

IF you are using Flask-SQLAlchemy:

Seems like a fix is available: https://github.com/mitsuhiko/flask-sqlalchemy/issues/2

Sadly, the default installation (pip install flask-sqlalchemy) doesn't apply the patch properly yet, especially on this issue: https://github.com/e-dard/flask-sqlalchemy/commit/cf659f346e005d34257d256fa4c42889741fc31f

Getting the latest version from github should fix it.

苏大泽ㄣ 2024-11-23 23:03:39

The pessimistic approach as described by @wim

pool_pre_ping=True

can now be done for Flask-SQLAlchemy using a config var -->

SQLALCHEMY_POOL_PRE_PING = True
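
If that dedicated key is not recognized by your installed Flask-SQLAlchemy version, a hedged alternative (assuming Flask-SQLAlchemy 2.4+ and an already-created Flask app object) is to forward the flag through the engine options, which are passed straight to create_engine():

# 'app' is the Flask application object configured elsewhere;
# pool_pre_ping is forwarded to create_engine(), so stale connections are
# tested and replaced transparently at checkout.
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {'pool_pre_ping': True}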

我还不会笑 2024-11-23 23:03:39

When I encountered this error I was storing a LONGBLOB / LargeBinary image ~1MB in size. I had to adjust the max_allowed_packet config setting in MySQL.

I used mysqld --max-allowed-packet=16M
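
If you are not sure the flag took effect, one hedged way to confirm the value the server actually picked up is to query it through SQLAlchemy (the connection URL below is a placeholder):

from sqlalchemy import create_engine, text

engine = create_engine("mysql+pymysql://user:pw@host/db")  # placeholder URL

with engine.connect() as conn:
    # max_allowed_packet is reported in bytes; 16M is 16777216.
    row = conn.execute(text("SHOW VARIABLES LIKE 'max_allowed_packet'")).fetchone()
    print(row)  # e.g. ('max_allowed_packet', '16777216')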

想你的星星会说话 2024-11-23 23:03:39

If you use a Pool directly, you should set recycle to less than the DB's wait_timeout. My wait_timeout is 60, so I set recycle to 50:

from sqlalchemy.pool import QueuePool

# self.get_connection is a callable that returns a new DBAPI connection
pool = QueuePool(self.get_connection, max_overflow=0, pool_size=40, recycle=50)
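
To pick a sensible recycle value, it helps to know what wait_timeout the server is actually enforcing. A small hedged sketch for checking it (the connection URL is a placeholder):

from sqlalchemy import create_engine, text

engine = create_engine("mysql+pymysql://user:pw@host/db")  # placeholder URL

with engine.connect() as conn:
    # wait_timeout is reported in seconds; keep recycle comfortably below it.
    row = conn.execute(text("SHOW VARIABLES LIKE 'wait_timeout'")).fetchone()
    print(row)  # e.g. ('wait_timeout', '60')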