NHibernate performance considerations - need help comparing eager loading vs. batch-size decisions
Based on the answers to this question, it seems like batch size is the middle ground between issuing Select N+1 queries and fetching too much data with eager fetch calls.

What is the thought process for determining my ideal batch size?

Should the goal be to get everything in one query? Does increasing the batch size start to slow things down at some point?

I guess my question is: why wouldn't I just always use a very large batch size to "catch" all Select N+1 situations?

Also, in this article about NHibernate performance, the section (19.1.5. Using batch fetching) talks about using batch fetching to optimize these queries (Cats --> Owners, Owners --> Children). But wouldn't an eager fetch be optimal in these cases if you know you will need to access that property on every item in your collection?
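For context, this is roughly what the batch-fetching setting from section 19.1.5 looks like in an NHibernate `hbm.xml` mapping. The entity and column names here are hypothetical, chosen to mirror the Cats example; only the `batch-size` attribute itself is the point:

```xml
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <!-- Hypothetical mapping sketch. batch-size on the class lets NHibernate
       load up to 10 uninitialized Cat proxies in one SELECT ... IN (...) -->
  <class name="Cat" table="Cats" batch-size="10">
    <id name="Id">
      <generator class="native" />
    </id>
    <!-- batch-size on the collection does the same for lazy Kittens
         collections: up to 10 collections initialized per query -->
    <bag name="Kittens" lazy="true" batch-size="10">
      <key column="MotherId" />
      <one-to-many class="Cat" />
    </bag>
  </class>
</hibernate-mapping>
```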
1 Answer
Maybe I am missing something here, but grabbing lots of data in one trip is something you should try to avoid at all costs. Surely your paging strategy will first reduce the number of rows you are returning. After you have worked out paging sizes, the batching can be fine-tuned accordingly. Personally, if I had to set a `batch-size` to, say, 100, then I would first address why I am returning all that data. This is of course my opinion and, without knowing your use case, may not be correct.

Official docs: Improving performance by using batch fetching
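A rough way to see the trade-off being described: count round trips. This is a simplified model of my own (real NHibernate batching buckets the remainder queries, so actual counts can differ slightly), but it shows why `batch-size=1` is exactly the N+1 problem and why very large batch sizes converge on eager-fetch behavior:

```python
import math

def round_trips(n_parents: int, batch_size: int) -> int:
    """Round trips to load one lazy association for n_parents entities:
    1 query for the parents, plus ceil(n/batch_size) batched
    SELECT ... WHERE id IN (...) queries for the association."""
    return 1 + math.ceil(n_parents / batch_size)

print(round_trips(100, 1))    # 101 queries: the classic Select N+1
print(round_trips(100, 10))   # 11 queries
print(round_trips(100, 100))  # 2 queries: close to an eager fetch,
                              # but without one wide joined result set
```

The diminishing returns are visible here: going from batch size 1 to 10 removes 90 queries, while going from 10 to 100 removes only 9 more, which is why fixing the paging/row count first usually matters more than cranking `batch-size` up.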