What are the best practices for benchmarking different Java collections?
I have a couple of interesting Java collection libraries on hand, such as:
- http://code.google.com/p/guava-libraries/
- Java 7
- Java 7 concurrent collections
- Scala collections
- Homegrown collections that we have at some company
I wonder what the best practices would be for testing these APIs from a performance and scalability perspective, i.e. which one is the fastest, most scalable, most performant, etc. Should I fill them with millions of random elements and use a timer, or something else? I just want to satisfy my curiosity and see which one would win.
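For reference, a minimal sketch of the naive approach described above might look like the code below (the class name and the chosen collections are illustrative only). As the answers that follow point out, a raw timer like this ignores JIT warm-up, garbage collection pauses and dead-code elimination, which is exactly why a dedicated harness is preferable.

```java
// A naive hand-rolled timing loop, for illustration only: it fills a few
// standard collections with random Integers and times the inserts with
// System.nanoTime(). It does NOT account for JIT warm-up, GC or dead-code
// elimination, so its numbers should not be trusted for real comparisons.
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.LinkedList;
import java.util.Random;

public class NaiveCollectionTimer {

    private static final int N = 1_000_000; // "millions of random elements"

    public static void main(String[] args) {
        Random rnd = new Random(42);
        Integer[] data = new Integer[N];
        for (int i = 0; i < N; i++) {
            data[i] = rnd.nextInt();
        }

        timeAdd("ArrayList", new ArrayList<Integer>(), data);
        timeAdd("LinkedList", new LinkedList<Integer>(), data);
        timeAdd("HashSet", new HashSet<Integer>(), data);
    }

    private static void timeAdd(String name, Collection<Integer> target, Integer[] data) {
        long start = System.nanoTime();
        for (Integer v : data) {
            target.add(v);
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.printf("%-10s add of %d elements took %d ms%n", name, data.length, elapsedMs);
    }
}
```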
2 Answers
Edit: JMH is better nowadays
Check out Caliper. It will have its 1.0 release this fall, but many people are already using it with good results (by building it from source; sorry).
Glance over some of the ScareText at https://github.com/google/caliper/wiki/JavaMicrobenchmarks, though.
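For example, a minimal JMH benchmark along these lines could compare lookup cost in java.util.HashSet against Guava's ImmutableSet. The class name, collection choices, sizes and the contains() workload below are illustrative assumptions, not something taken from the original answer.

```java
// A minimal JMH sketch (illustrative assumptions: the collections, sizes and
// the contains() workload are chosen for the example, not by the answer).
// JMH handles warm-up, forking and result aggregation for you.
import com.google.common.collect.ImmutableSet;
import org.openjdk.jmh.annotations.*;

import java.util.HashSet;
import java.util.Random;
import java.util.Set;
import java.util.concurrent.TimeUnit;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class CollectionLookupBenchmark {

    @Param({"1000", "1000000"})  // vary the size to see how each collection scales
    public int size;

    private Set<Integer> hashSet;
    private Set<Integer> immutableSet;
    private int[] probes;        // random keys looked up during measurement

    @Setup
    public void setup() {
        Random rnd = new Random(42);
        hashSet = new HashSet<Integer>();
        ImmutableSet.Builder<Integer> builder = ImmutableSet.builder();
        for (int i = 0; i < size; i++) {
            int v = rnd.nextInt();
            hashSet.add(v);
            builder.add(v);
        }
        immutableSet = builder.build();
        probes = new int[1024];
        for (int i = 0; i < probes.length; i++) {
            probes[i] = rnd.nextInt();
        }
    }

    @Benchmark
    public boolean hashSetContains() {
        boolean hit = false;
        for (int p : probes) {
            hit ^= hashSet.contains(p);
        }
        return hit;              // returning the value prevents dead-code elimination
    }

    @Benchmark
    public boolean immutableSetContains() {
        boolean hit = false;
        for (int p : probes) {
            hit ^= immutableSet.contains(p);
        }
        return hit;
    }
}
```

You would normally run this through the JMH runner, for instance from a project generated with the official JMH Maven archetype.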
There's a white paper where somebody benchmarked Java collections. I didn't see any source code, though.