Traversing a Berkeley DB database in storage order

Posted 2024-11-06 15:13:26

When using cursors in Berkeley DB JE I found that traversing a dataset generates a lot of random read IO. This happens because BDB traverses the dataset in primary-key ascending order.

In my application I have no requirement to process the dataset in order (mathematically speaking, my operation is commutative), and I am interested in maximizing throughput.

Is there any way to process the dataset with a cursor in storage order rather than in primary-key order?

2 Answers

坚持沉默 2024-11-13 15:13:26

I would guess not; BDBJE is a log-structured database, i.e., all writes are appended to the end of a log. This means that records are always appended to the last log and may supersede records in previous logs. Because BDBJE by design cannot write to old logs, it cannot mark old records as superseded, so you cannot walk forward through the storage processing records as you go: without having processed the records that come later in the log, you cannot know whether a given record is still current.

BDBJE will clean old logs as their "live" record count diminishes by copying the live records forward into new logs and deleting the old files, which shuffles the ordering yet more.

I found the Java binding of Kyoto Cabinet to be faster than BDB for raw insert performance, and you get a choice of storage formats, which may let you optimize cursor-ordered record traversal performance. The licenses are similar (Kyoto Cabinet is GPL3, BDB is the Oracle BDB License, also copyleft) unless you pay for a commercial license in either case.

Update: as of version 5.0.34, BDBJE includes the DiskOrderedCursor class, which addresses this use case: it traverses records in log sequence, which for an unfragmented log file should be the same as disk order.
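
For reference, here is a minimal sketch of what a disk-order scan might look like with that class, assuming BDB JE 5.0.34 or later and an already-open Database handle; the handleRecord method is a hypothetical stand-in for the application's own commutative per-record operation.

```java
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.DiskOrderedCursor;
import com.sleepycat.je.DiskOrderedCursorConfig;
import com.sleepycat.je.LockMode;
import com.sleepycat.je.OperationStatus;

public class DiskOrderScan {

    /** Scans every record of an open JE Database in log/disk order rather than key order. */
    static void scanInDiskOrder(Database db) {
        DiskOrderedCursorConfig config = new DiskOrderedCursorConfig();
        DiskOrderedCursor cursor = db.openCursor(config);
        try {
            DatabaseEntry key = new DatabaseEntry();
            DatabaseEntry data = new DatabaseEntry();
            // Records come back roughly in the order they sit in the log files,
            // so the scan is mostly sequential IO instead of key-ordered random reads.
            while (cursor.getNext(key, data, LockMode.READ_UNCOMMITTED)
                    == OperationStatus.SUCCESS) {
                handleRecord(key.getData(), data.getData());
            }
        } finally {
            cursor.close();
        }
    }

    /** Hypothetical placeholder for the order-independent per-record operation. */
    static void handleRecord(byte[] key, byte[] data) {
        // application-specific, commutative work goes here
    }
}
```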

独留℉清风醉 2024-11-13 15:13:26

There are new "bulk-access" interfaces available that allow one to read multiple, presumably contiguous, records into a buffer using either the Db#get() or Dbc#get() method in concert with the DB_MULTIPLE flag.

That documentation is for version 4.2.52, and I had some trouble finding documentation for the com.sleepycat.db package on Oracle's site. I did find the documentation for version 4.8.30, but the classes Db and Dbc are not mentioned there.

Ah, classes MultipleEntry and MultipleDataEntry look to be promising equivalents to the use of DB_MULTIPLE above. The idea is that when you fetch data using, say, MultipleDataEntry with a suitably-sized buffer, you'll get back a whole bunch of records together that can then be picked apart using MultipleDataEntry#next().

I get the impression that this part of the interface has been in flux. As I don't have a fresh enough version of the library available on my project, I can't claim to have used these bulk-fetching interfaces yet. Please report back if you're able to investigate their use.
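
Since neither of us has run these interfaces, the following is only an untested sketch pieced together from the com.sleepycat.db javadoc (the core Berkeley DB Java binding, not JE), using the newer Database/Cursor classes from the 4.8.30 docs rather than the older Db/Dbc ones. It uses MultipleKeyDataEntry, the key/data sibling of MultipleDataEntry, so that a whole-dataset scan returns keys along with data; the buffer size and handleRecord helper are hypothetical.

```java
import com.sleepycat.db.Cursor;
import com.sleepycat.db.Database;
import com.sleepycat.db.DatabaseEntry;
import com.sleepycat.db.DatabaseException;
import com.sleepycat.db.MultipleKeyDataEntry;
import com.sleepycat.db.OperationStatus;

public class BulkScan {

    // Reads the whole database through a cursor, fetching records in large
    // batches (the Java-binding equivalent of the C API's DB_MULTIPLE_KEY flag).
    static void bulkScan(Database db) throws DatabaseException {
        final int bufferSize = 1024 * 1024;            // hypothetical 1 MB bulk buffer
        Cursor cursor = db.openCursor(null, null);
        try {
            DatabaseEntry key = new DatabaseEntry();
            MultipleKeyDataEntry bulk = new MultipleKeyDataEntry(new byte[bufferSize]);
            bulk.setUserBuffer(bufferSize, true);      // bulk buffers must be caller-allocated

            // Each cursor call fills the buffer with as many key/data pairs as fit.
            while (cursor.getNext(key, bulk, null) == OperationStatus.SUCCESS) {
                DatabaseEntry k = new DatabaseEntry();
                DatabaseEntry d = new DatabaseEntry();
                while (bulk.next(k, d)) {              // pick the batch apart pair by pair
                    handleRecord(k.getData(), d.getData());
                }
            }
        } finally {
            cursor.close();
        }
    }

    // Hypothetical placeholder for the application's per-record processing.
    static void handleRecord(byte[] key, byte[] data) {
    }
}
```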
