Oracle JDBC driver statement cache vs. BoneCP statement cache?
I'm using the Oracle JDBC driver and evaluating BoneCP.
Both implement a statement cache.
I am asking myself whether I should use one or the other for statement caching.
What do you think? What are the advantages and disadvantages of each approach?
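
For concreteness, here is roughly how I would switch on each cache. The connection details and cache sizes are placeholders and the setter names are written from memory, so treat this as a sketch rather than a verified configuration:

    import java.sql.Connection;
    import java.sql.SQLException;

    import oracle.jdbc.OracleConnection;
    import oracle.jdbc.pool.OracleDataSource;

    import com.jolbox.bonecp.BoneCP;
    import com.jolbox.bonecp.BoneCPConfig;

    public class StatementCacheSetup {

        // Driver-side cache: Oracle's implicit statement caching,
        // switched on per data source / connection.
        static Connection oracleCachedConnection() throws SQLException {
            OracleDataSource ds = new OracleDataSource();
            ds.setURL("jdbc:oracle:thin:@//db-host:1521/ORCL"); // placeholder URL
            ds.setUser("scott");                                // placeholder credentials
            ds.setPassword("tiger");
            ds.setImplicitCachingEnabled(true);                 // turn the driver cache on

            Connection conn = ds.getConnection();
            ((OracleConnection) conn).setStatementCacheSize(50); // cache up to 50 statements
            return conn;
        }

        // Pool-side cache: BoneCP's per-connection statement cache.
        static BoneCP boneCpPoolWithCache() throws SQLException {
            BoneCPConfig config = new BoneCPConfig();
            config.setJdbcUrl("jdbc:oracle:thin:@//db-host:1521/ORCL"); // placeholder URL
            config.setUsername("scott");
            config.setPassword("tiger");
            config.setStatementsCacheSize(50); // statements cached per connection
            return new BoneCP(config);
        }
    }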
As the author of BoneCP, I can fill in my part:
Using the pool's cache gives you the possibility of getting a stack trace if you forget to close your statements properly. If you're using Hibernate, Spring's JdbcTemplate, or some other managed connection, this is irrelevant, since the statement will always be closed for you.
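
For completeness, a statement that is closed in every code path, e.g. with try-with-resources, never needs that safety net. A minimal sketch, with a made-up query and table name:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    import javax.sql.DataSource;

    public class ProperClose {

        // Closing the statement (and result set) in all code paths means the
        // pool's leak-detection stack trace never has to fire.
        static int countUsers(DataSource ds) throws SQLException {
            try (Connection conn = ds.getConnection();
                 PreparedStatement ps = conn.prepareStatement("SELECT COUNT(*) FROM users");
                 ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1);
            } // connection, statement and result set are closed here, even on error
        }
    }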
The statement cache is tied to each connection, so if you have 100 connections and you keep executing the same statement each time, it will take a while until every connection fills up its cache. If the DB supports it, the driver might have some special tweaks to prepare the statement only once, but this is not in the JDBC spec, so a connection pool will not have any such facility to optimize for this, if it is possible at all.
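
To make the per-connection point concrete, here is a sketch of that warm-up cost. The SQL and the assumption that the pool is capped at poolSize connections are made up for illustration:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    import javax.sql.DataSource;

    public class PerConnectionCache {

        // Each physical connection keeps its own statement cache, so the same
        // SQL has to be prepared once on every connection before the whole
        // pool is "warm". Checking out all connections at once forces that.
        static void warmUp(DataSource pool, int poolSize) throws SQLException {
            String sql = "SELECT id FROM orders WHERE customer_id = ?"; // example SQL
            Connection[] all = new Connection[poolSize];
            try {
                for (int i = 0; i < poolSize; i++) {
                    all[i] = pool.getConnection();
                    try (PreparedStatement ps = all[i].prepareStatement(sql)) {
                        ps.setInt(1, 42);
                        ps.executeQuery().close(); // first use per connection is a cache miss
                    }
                }
            } finally {
                for (Connection c : all) {
                    if (c != null) {
                        c.close(); // hand the connection back to the pool
                    }
                }
            }
        }
    }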
On the other hand, you can tell the pool to give you connections in LIFO mode, which will greatly improve the odds of hitting a hot cache.
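
Roughly, that configuration looks like this (connection details are placeholders, and the exact property name may differ across versions):

    import java.sql.SQLException;

    import com.jolbox.bonecp.BoneCP;
    import com.jolbox.bonecp.BoneCPConfig;

    public class LifoPool {

        static BoneCP lifoPool() throws SQLException {
            BoneCPConfig config = new BoneCPConfig();
            config.setJdbcUrl("jdbc:oracle:thin:@//db-host:1521/ORCL"); // placeholder
            config.setUsername("scott");
            config.setPassword("tiger");
            config.setStatementsCacheSize(50); // per-connection statement cache
            // Hand out the most recently returned connection first, so a
            // connection with a warm statement cache gets reused sooner.
            config.setServiceOrder("LIFO");
            return new BoneCP(config);
        }
    }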
Performance-wise there shouldn't be too much of a difference, since in the end they both try to reuse a statement. However, several drivers adopt the blind approach of synchronizing at the method level, while in BoneCP I always try to use as fine-grained a lock as possible, so in theory this should provide greater scalability.
Summary: both should perform roughly the same -- if not, it's probably a design bug somewhere.