Network latency with Hibernate/c3p0/MySQL
I'm connecting to a MySQL (InnoDB) database over a connection with fairly high latency (~80ms) but relatively high bandwidth.
I've noticed that query times vary dramatically depending on how the query is issued. In the following examples I'm performing a query for a single, small row by primary key. The query times are:
- Command line client (mysql): ~160ms
- Raw JDBC: ~240ms
- Hibernate: ~400ms (~0ms begin, ~160ms get, ~240ms commit)
- Hibernate, L2: ~240ms (~0ms begin, ~0ms get, ~240ms commit)
- Hibernate, c3p0: ~880ms (~160ms begin, ~240ms get, ~480ms commit)
- Hibernate, L2+c3p0: ~640ms (~160ms begin, ~0ms get, ~480ms commit)
("L2" means Hibernate second-level caching was enabled, "c3p0" means c3p0 was enabled, "begin", "get" and "commit" are timings for the various sub-methods invoked during the query)
These are, roughly, "steady state" results, so the L2 cache is hot, and Hibernate startup time is ignored. I'm assuming that "get" is typically 0ms when the L2 cache is enabled because no get is actually issued.
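For concreteness, the kind of timing harness behind those begin/get/commit numbers might look like the sketch below (the SessionFactory setup and the entity class are placeholders, not my actual code):

```java
import java.io.Serializable;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class QueryTiming {
    // Times a single primary-key lookup, split the same way as the figures above:
    // begin / get / commit. The SessionFactory configuration and the entity class
    // are assumed to exist elsewhere.
    public static void timeSingleGet(SessionFactory sessionFactory,
                                     Class<?> entityClass, Serializable id) {
        Session session = sessionFactory.openSession();
        try {
            long t0 = System.nanoTime();
            Transaction tx = session.beginTransaction();   // "begin"
            long t1 = System.nanoTime();
            Object row = session.get(entityClass, id);     // "get" (may be answered by the L2 cache)
            long t2 = System.nanoTime();
            tx.commit();                                    // "commit"
            long t3 = System.nanoTime();
            System.out.printf("begin=%dms get=%dms commit=%dms (row=%s)%n",
                    (t1 - t0) / 1000000, (t2 - t1) / 1000000, (t3 - t2) / 1000000, row);
        } finally {
            session.close();
        }
    }
}
```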
My questions are:
- Why are all of the queries such large multiples of the network latency? Even the mysql command-line client seems to require 2 round-trips for a simple query.
- Why are all of the JDBC/Hibernate queries so much slower than the command-line client? Even the raw JDBC client seems to require 3 round-trips.
- Why does c3p0 seem to make everything worse? I have, as far as I can tell, disabled connection testing, which could otherwise explain things.
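By "disabled connection testing" I mean c3p0 settings along the lines of the following c3p0.properties sketch (the property names are c3p0's standard ones; the values are illustrative, not a copy of my config):

```properties
# Disable connection testing so the pool adds no extra round-trips.
c3p0.testConnectionOnCheckout=false
c3p0.testConnectionOnCheckin=false
# 0 disables periodic idle-connection tests.
c3p0.idleConnectionTestPeriod=0
```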
I don't have advice specific to your problem, but there are a few debugging techniques that will help you figure out what's going on. If you can add a connection parameter to the connection url, the JDBC driver will log basically everything going over the wire with timings:
jdbc:mysql://server/database?profileSQL=true
http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-configuration-properties.html
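For example, a plain single-row lookup with profiling enabled might look something like this sketch (server, credentials, and table/column names are placeholders):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ProfiledLookup {
    public static void main(String[] args) throws Exception {
        // Older Connector/J versions may need the driver loaded explicitly.
        Class.forName("com.mysql.jdbc.Driver");

        // profileSQL=true makes Connector/J log each statement with timings.
        String url = "jdbc:mysql://server/database?profileSQL=true";
        Connection conn = DriverManager.getConnection(url, "user", "password");
        try {
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT * FROM my_table WHERE id = ?");
            ps.setLong(1, 1L);
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getObject(1));
            }
            rs.close();
            ps.close();
        } finally {
            conn.close();
        }
    }
}
```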
The other way you can look at this is to watch your network traffic through tcpdump. mk-query-digest from the Maatkit tools can read the dump output and help you figure out exactly what's going on:
http://www.mysqlperformanceblog.com/2009/07/01/gathering-queries-from-a-server-with-maatkit-and-tcpdump/
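Roughly following that post, the capture and analysis might look like this (the interface name and output file are placeholders):

```sh
# Capture MySQL traffic (port 3306) with full packets for later analysis.
tcpdump -i eth0 port 3306 -s 65535 -x -n -q -tttt > mysql-traffic.txt

# Let mk-query-digest parse the dump and report per-query timings.
mk-query-digest --type tcpdump mysql-traffic.txt
```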
Hope this helps.