Mongoid: returning all embedded documents

Published 2024-10-26 15:17:13

What is the most efficient way to return all embedded documents?

Say a User has many Addresses embedded. In ActiveRecord I could get a count of them with Address.count. What is the embedded-document / Mongo version of doing this?

How about when it's two or more levels deep? Product > Pressing > Variation... how could I get a count of all the variations across all pressings, across all products? How would that compare to doing it all in, say, Ruby?

# Product embeds_many :pressings
# Pressing embeds_many :variations

class Product
  def self.pressings
    all.flat_map(&:pressings)
  end

  def self.variations
    pressings.flat_map(&:variations)
  end
end
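For the simple User/Addresses case, here is a minimal sketch of the client-side approach, using plain Ruby objects to stand in for Mongoid documents (since addresses are embedded in each user document, there is no separate Address collection to call .count on; all names here are illustrative):

```ruby
require 'ostruct'

# Stand-ins for Mongoid documents: each user embeds an array of addresses.
users = [
  OpenStruct.new(addresses: %w[home work]),
  OpenStruct.new(addresses: %w[home])
]

# Client-side equivalent of ActiveRecord's Address.count:
total_addresses = users.sum { |u| u.addresses.size }
puts total_addresses  # => 3
```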



Comments (3)

稳稳的幸福 2024-11-02 15:17:13

Usually it's done by aggregation functions (including map/reduce for more specific situations), but they are relatively slow and not appropriate for real-time use in heavy applications. So, if performance is an issue, I suggest adding extra number fields, updated by atomic operations when changes occur and amended by aggregation functions from time to time.
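A sketch of the aggregation approach for the two-level case, assuming Product embeds_many :pressings and Pressing embeds_many :variations (the pipeline stages $unwind and $count are standard MongoDB; how you run it is shown only as a comment because it needs a live database):

```ruby
# Two $unwind stages flatten the nested arrays; $count tallies the rows.
pipeline = [
  { '$unwind' => '$pressings' },
  { '$unwind' => '$pressings.variations' },
  { '$count'  => 'total_variations' }
]

# With Mongoid this would run through the underlying driver, e.g.:
#   result = Product.collection.aggregate(pipeline).first
#   result['total_variations']
#
# The counter-field alternative mentioned above: keep a denormalized
# count and bump it atomically when a variation is added, e.g.:
#   product.inc(variations_count: 1)
```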

拥醉 2024-11-02 15:17:13

As @maga says, map/reduce is too slow for real-time aggregation, and storing a count field is the best way.

Another option is to return the entire document (or specific fields) to your application and parse it there. This may be best when you don't know how many nested levels will exist.

Despite what some people may think, there is absolutely no harm in doing this. Your database server will gladly return the document very quickly and let your application handle the post-processing.

In terms of scalability, this approach means you'll be scaling your application servers (which are usually much more affordable) instead of your database servers (which are typically more expensive).
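A pure-Ruby sketch of this fetch-and-post-process approach, using plain hashes to stand in for the documents Mongo would return (field names follow the question's Product > Pressing > Variation example and are illustrative):

```ruby
# Documents as returned by the driver: nested arrays of embedded hashes.
products = [
  { 'pressings' => [
      { 'variations' => [{ 'name' => 'v1' }, { 'name' => 'v2' }] },
      { 'variations' => [{ 'name' => 'v3' }] }
    ] },
  { 'pressings' => [
      { 'variations' => [{ 'name' => 'v4' }] }
    ] }
]

# Flatten both embedded levels in the application, then count.
all_variations = products
  .flat_map { |p| p['pressings'] }
  .flat_map { |pr| pr['variations'] }

puts all_variations.size  # => 4
```

This works the same no matter how many levels deep the embedding goes: one flat_map per level.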

最初的梦 2024-11-02 15:17:13

Map/reduce in MongoDB is useful for batch processing of data and aggregation operations.
