Gearman + SQLAlchemy - keep losing MySQL thread

Posted on 2024-10-16 01:54:25


I have a python script that sets up several gearman workers. They call into some methods on SQLAlchemy models I have that are also used by a Pylons app.

Everything works fine for an hour or two, then the MySQL thread gets lost and all queries fail. I cannot figure out why the thread is getting lost (I get the same results on 3 different servers) when I am defining such a low value for pool_recycle. Also, why wouldn't a new connection be created?

Any ideas of things to investigate?

import gearman
import json
import ConfigParser
import sys
from sqlalchemy import create_engine

class JSONDataEncoder(gearman.DataEncoder):
    @classmethod
    def encode(cls, encodable_object):
        return json.dumps(encodable_object)
    @classmethod
    def decode(cls, decodable_string):
        return json.loads(decodable_string)

# get the ini path and load the gearman server ips:ports
try:
    ini_file = sys.argv[1]
    lib_path = sys.argv[2]
except Exception:
    raise Exception("ini file path or anypy lib path not set")

# get the config
config = ConfigParser.ConfigParser()
config.read(ini_file)
sqlachemy_url =  config.get('app:main', 'sqlalchemy.url')
gearman_servers =  config.get('app:main', 'gearman.mysql_servers').split(",")

# add anypy include path
sys.path.append(lib_path)
from mypylonsapp.model.user import User, init_model
from mypylonsapp.model.gearman import task_rates

# sqlalchemy setup, recycle connection every hour
engine = create_engine(sqlachemy_url, pool_recycle=3600)
init_model(engine)

# Gearman Worker Setup
gm_worker = gearman.GearmanWorker(gearman_servers)
gm_worker.data_encoder = JSONDataEncoder()

# register the workers
gm_worker.register_task('login', User.login_gearman_worker)
gm_worker.register_task('rates', task_rates)

# work
gm_worker.work()


Comments (1)

静若繁花 2024-10-23 01:54:25


I've seen this across the board in Ruby, PHP, and Python, regardless of the DB library used. I couldn't find how to fix this the "right" way, which is to use mysql_ping, but there is a SQLAlchemy solution, explained better here: http://groups.google.com/group/sqlalchemy/browse_thread/thread/9412808e695168ea/c31f5c967c135be0
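
For illustration, here is a minimal sketch of the ping-on-checkout idea from that thread. It assumes a SQLAlchemy release new enough to have the event API (the 0.5 series used pool listeners instead) and reuses the sqlachemy_url variable from the question; treat it as a sketch, not the thread's exact recipe.

from sqlalchemy import create_engine, event, exc

engine = create_engine(sqlachemy_url, pool_recycle=3600)

@event.listens_for(engine, "checkout")
def ping_connection(dbapi_connection, connection_record, connection_proxy):
    # Run a trivial statement each time the pool hands out a connection;
    # if the server has silently dropped it, tell the pool to discard it.
    cursor = dbapi_connection.cursor()
    try:
        cursor.execute("SELECT 1")
    except Exception:
        # The connection is gone; make the pool drop it and retry.
        raise exc.DisconnectionError()
    cursor.close()

Raising DisconnectionError during checkout makes the pool throw the stale connection away and transparently retry with a fresh one, so callers never see the dead connection. Recent SQLAlchemy releases ship this behaviour built in as create_engine(..., pool_pre_ping=True).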

As someone in that thread points out, setting the recycle option to True is equivalent to setting it to 1. A better solution might be to find your MySQL connection timeout value and set the recycle threshold to 80% of it.

You can get that value from a live server by looking up this variable: http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_connect_timeout
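
As a rough sketch of that suggestion (reusing sqlachemy_url and init_model from the question, and assuming a pre-2.0 SQLAlchemy where Engine.execute() accepts a raw SQL string): read the timeout from the running server and derive pool_recycle from it. The answer links to connect_timeout, but the variable that controls how long MySQL keeps an idle connection open is wait_timeout, so that is what the sketch queries; use whichever applies to your setup.

from sqlalchemy import create_engine

# Ask the running server how long it keeps idle connections, then recycle
# pooled connections a bit earlier than that (80% of it, per the suggestion).
probe = create_engine(sqlachemy_url)
row = probe.execute("SHOW VARIABLES LIKE 'wait_timeout'").fetchone()
probe.dispose()

recycle_seconds = int(int(row[1]) * 0.8)
engine = create_engine(sqlachemy_url, pool_recycle=recycle_seconds)
init_model(engine)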

Edit:
It took me a bit to find the authoritative documentation on using pool_recycle:
http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html?highlight=pool_recycle
