Perl connection pooling
Right now we have a large Perl application that uses raw DBI to connect to MySQL and execute SQL statements. It creates a connection each time and then terminates it. We're starting to approach MySQL's connection limit (200 at once).

It looks like DBIx::Connection supports application-layer connection pooling. Has anybody had any experience with DBIx::Connection? Are there any other considerations for connection pooling?

I also see mod_dbd, an Apache module that looks like it handles connection pooling: http://httpd.apache.org/docs/2.1/mod/mod_dbd.html
2 Answers
I don't have any experience with DBIx::Connection, but I use DBIx::Connector (essentially what DBIx::Class uses internally, but inlined) and it's wonderful...
I pool these connections with a Moose object wrapper that hands back existing object instances if the connection parameters are identical (this would work the same for any underlying DB object):
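A minimal sketch of that pattern, assuming a hypothetical My::DB package (the package name, attribute names, and caching key below are illustrative, not the answerer's actual code): a Moose class that wraps a DBIx::Connector and whose factory method hands back the cached instance when the connection parameters match.

    package My::DB;
    use 5.010;
    use Moose;
    use DBIx::Connector;

    has dsn      => ( is => 'ro', isa => 'Str', required => 1 );
    has user     => ( is => 'ro', isa => 'Str', default  => '' );
    has password => ( is => 'ro', isa => 'Str', default  => '' );

    # One DBIx::Connector per wrapper; delegate the usual methods to it.
    has conn => (
        is      => 'ro',
        isa     => 'DBIx::Connector',
        lazy    => 1,
        builder => '_build_conn',
        handles => [qw(dbh run txn svp)],
    );

    sub _build_conn {
        my $self = shift;
        return DBIx::Connector->new(
            $self->dsn, $self->user, $self->password,
            { RaiseError => 1, AutoCommit => 1 },
        );
    }

    # Class-level cache: identical dsn/user pairs get the same wrapper object,
    # so the whole application shares one connection per set of parameters.
    my %INSTANCES;

    sub connect {
        my ( $class, %args ) = @_;
        my $key = join '|', map { $args{$_} // '' } qw(dsn user);
        return $INSTANCES{$key} //= $class->new(%args);
    }

    __PACKAGE__->meta->make_immutable;
    1;

Callers then go through the factory method rather than new():

    # Repeated calls with the same parameters return the same wrapper,
    # and therefore the same underlying database connection.
    my $db = My::DB->connect(
        dsn      => 'dbi:mysql:database=mydb;host=localhost',
        user     => 'app_user',
        password => 'app_pass',
    );
    my $dbh = $db->dbh;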
Just making sure: you know about DBI->connect_cached(), right? It's a drop-in replacement for connect() that reuses dbh's, where possible, over the life of your Perl script. Maybe your problem is solvable by adding 7 characters :)

And, MySQL's connections are relatively cheap. Running with your DB at max_connections=1000 or more won't by itself cause problems. (If your clients are demanding more work than your DB can handle, that's a more serious problem, one which a lower max_connections might put off but of course not solve.)
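For illustration, a small sketch of that one-method swap (the get_dbh() helper, DSN, and credentials are placeholders): connect_cached() keys its cache on its arguments, so identical calls within one process return the same handle instead of opening a new MySQL connection each time.

    use strict;
    use warnings;
    use DBI;

    # Hypothetical helper: every caller goes through this instead of DBI->connect().
    sub get_dbh {
        return DBI->connect_cached(
            'dbi:mysql:database=mydb;host=localhost',
            'app_user', 'app_pass',
            { RaiseError => 1, AutoCommit => 1 },
        );
    }

    my $dbh  = get_dbh();    # first call opens the connection
    my $same = get_dbh();    # later calls return the cached handle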