C++ benchmarking tool

Posted on 2024-10-24 07:59:06


I have an application that makes database requests. I guess it doesn't actually matter what kind of database I am using, but let's say it's a simple SQLite-driven database.

Now, this application runs as a service and performs some number of requests per minute (this number might actually be huge).

I want to benchmark the queries to retrieve their count and their maximal / minimal / average running time over some period, and I wish to design my own tool for this (obviously, some exist, but I need my own for appropriate reasons :).

So - could you advise an approach for this task?


I guess there are several possible cases:

1) I have access to the application source code. Here, obviously, I want to make some sort of cross-application integration, probably using pipes. Could you advise how this should be done, and suggest any other possible solutions?

2) I don't have the sources. Is it even possible to perform some neat injection from my application to benchmark the other one? I hope there is a way, maybe a hacky one.

Thanks a lot.


Comments (7)

羅雙樹 2024-10-31 07:59:06


See C++ Code Profiler for a range of profilers.

Or C++ Logging and performance tuning library for rolling your own simple version.

破晓 2024-10-31 07:59:06


My answer is valid just for case 1).

In my experience, profiling is a fun but difficult task. Using professional tools can be effective, but it can take a lot of time to find the right one and learn how to use it properly. I usually start in a very simple way. I have prepared two very simple classes. The first one, ProfileHelper, populates the start time in the constructor and the end time in the destructor. The second class, ProfileHelperStatistic, is a container with extra statistical capability (a std::multimap + a few methods to return the average, standard deviation, and other funny stuff).

ProfileHelper has a reference to the container, and before exiting, the destructor pushes the data into the container. You can declare the ProfileHelperStatistic in main, and if you create a ProfileHelper on the stack at the beginning of a specific function, the job is done. The constructor of the ProfileHelper will store the starting time, and the destructor will push the result into the ProfileHelperStatistic.

It is fairly easy to implement, and with minor modifications it can be made cross-platform. The time to create and destroy the object is not recorded, so you will not pollute the results. Calculating the final statistics can be expensive, so I suggest you run that once at the end.

You can also customize the information that you are going to store in ProfileHelperStatistic, adding extra information (like a timestamp or memory usage, for example).

The implementation is fairly easy: two classes, each no bigger than 50 lines. Just two hints:

1) catch everything in the destructor!

2) consider using a collection with constant-time insertion if you are going to store a lot of data.

This is a simple tool and it can help you profile your application in a very effective way. My suggestion is to start with a few macro functions (5-7 logical blocks) and then increase the granularity. Remember the 80-20 rule: 20% of the source code uses 80% of the time.

A last note about databases: the database tunes performance dynamically, so if you run a query several times, it will be quicker at the end than at the beginning (Oracle does this, and I guess other databases do as well). In other words, if you test the application heavily and artificially, focusing on just a few specific queries, you can get overly optimistic results.
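The two classes described above can be sketched as follows. The class names come from the answer, but the exact layout, the method names (add, count, average), and the choice of milliseconds are my own assumptions:

```cpp
#include <chrono>
#include <cstddef>
#include <map>
#include <string>
#include <utility>

// Container with extra statistical capability: a std::multimap keyed by
// the profiled block's name, plus a few query methods.
class ProfileHelperStatistic {
public:
    void add(const std::string& name, double ms) { samples_.emplace(name, ms); }

    std::size_t count(const std::string& name) const { return samples_.count(name); }

    // Run this once at the end: scanning the multimap is the expensive part.
    double average(const std::string& name) const {
        double sum = 0.0;
        std::size_t n = 0;
        auto range = samples_.equal_range(name);
        for (auto it = range.first; it != range.second; ++it, ++n) sum += it->second;
        return n ? sum / n : 0.0;
    }

private:
    std::multimap<std::string, double> samples_;
};

// RAII timer: stores the start time in the constructor and pushes the
// elapsed milliseconds into the statistics container in the destructor,
// so object construction/destruction time stays out of the measurement.
class ProfileHelper {
public:
    ProfileHelper(ProfileHelperStatistic& stats, std::string name)
        : stats_(stats), name_(std::move(name)),
          start_(std::chrono::steady_clock::now()) {}

    ~ProfileHelper() {
        try {  // hint 1: catch everything in the destructor
            auto end = std::chrono::steady_clock::now();
            double ms = std::chrono::duration<double, std::milli>(end - start_).count();
            stats_.add(name_, ms);
        } catch (...) {}
    }

private:
    ProfileHelperStatistic& stats_;
    std::string name_;
    std::chrono::steady_clock::time_point start_;
};
```

Declaring `ProfileHelper p(stats, "run_query");` at the top of a function is then enough: the function's wall-clock time is recorded automatically when it returns.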

薄荷港 2024-10-31 07:59:06


I guess it doesn't actually matter, what kind of the database I am using, but let's say it's a simple SQLite-driven database.

It's very important which kind of database you use, because the database manager might have integrated monitoring.

I can speak only about IBM DB/2, but I believe that IBM DB/2 is not the only DBMS with integrated monitoring tools.

Here, for example, is a short overview of what you can monitor in IBM DB/2:

  • statements (all executed statements, execution count, prepare time, CPU time, count of reads/writes: table rows, bufferpool, logical, physical)
  • tables (count of reads / writes)
  • bufferpools (logical and physical reads/writes for data and index, read/write times)
  • active connections (running statements, count of reads/writes, times)
  • locks (all locks and types)
  • and many more

The monitor data can be accessed via SQL or an API from your own software, as for example DB2 Monitor does.

旧城空念 2024-10-31 07:59:06


Under Unix, you might want to use gprof and its graphical front-end, kprof. Compile your app with the -pg flag (I assume you're using g++) and run it through gprof and observe the results.

Note, however, that this type of profiling will measure the overall performance of an application, not just the SQL queries. If it's the performance of queries you want to measure, you should use special tools designed for your DBMS - for example, MySQL has a built-in query profiler (for SQLite, see this question: Is there a tool to profile sqlite queries?)

¢蛋碎的人ぎ生 2024-10-31 07:59:06


There is a (Linux) solution you might find interesting, since it can be used in both cases.

It's the LD_PRELOAD trick. It's an environment variable that lets you specify a shared library to be loaded right before your program is executed. The symbols loaded from this library will override any others available on the system.

The basic idea is to use this custom library as a wrapper around the original functions.

There are a bunch of resources available that explain how to use this trick: 1, 2, 3
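As a sketch of the idea, a preloaded library could interpose on sqlite3_exec and time every call. The prototype below is sqlite3_exec's real signature with sqlite3* reduced to void* so no SQLite headers are needed; the timing helper, log format, and file names are my own assumptions:

```cpp
#include <cstdio>
#include <ctime>
#include <dlfcn.h>

// Difference between two CLOCK_MONOTONIC readings, in milliseconds.
static double diff_ms(const timespec& a, const timespec& b) {
    return (b.tv_sec - a.tv_sec) * 1000.0 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

// Interposed sqlite3_exec: look up the real symbol once, time the call,
// log it, then forward the result untouched.
//
// Build: g++ -shared -fPIC wrapper.cpp -o wrapper.so -ldl
// Run:   LD_PRELOAD=./wrapper.so ./the_service
extern "C" int sqlite3_exec(void* db, const char* sql,
                            int (*callback)(void*, int, char**, char**),
                            void* arg, char** errmsg) {
    using exec_fn = int (*)(void*, const char*,
                            int (*)(void*, int, char**, char**),
                            void*, char**);
    // RTLD_NEXT resolves the symbol in the *next* library in search
    // order, i.e. the real SQLite implementation, not this wrapper.
    static exec_fn real =
        reinterpret_cast<exec_fn>(dlsym(RTLD_NEXT, "sqlite3_exec"));

    timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    int rc = real(db, sql, callback, arg, errmsg);
    clock_gettime(CLOCK_MONOTONIC, &end);

    std::fprintf(stderr, "query took %.3f ms: %s\n", diff_ms(start, end), sql);
    return rc;
}
```

This covers case 2) as well, since no source access is needed - only the ability to set an environment variable for the service. It works only when the program links SQLite dynamically; a statically linked SQLite cannot be interposed this way.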

独闯女儿国 2024-10-31 07:59:06


Here, obviously, I want to make some sort of cross-application integration, probably using pipes.

I don't think that's obvious at all.

If you have access to the application, I'd suggest dumping all the necessary information to a log file and processing that log file later on.
If you want to be able to activate and deactivate this behavior on the fly, without restarting the service, you could use a logging library that supports enabling/disabling log channels at runtime.
Then you'd only need to send a message to the service by whatever means (socket connection, ...) to enable/disable logging.
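A minimal version of that on/off switch, using a signal instead of a socket connection (a POSIX-only sketch; the names and the choice of SIGUSR1 are my own):

```cpp
#include <atomic>
#include <csignal>
#include <cstdio>

// Global switch flipped by SIGUSR1, so query logging can be turned on
// and off without restarting the service.
static std::atomic<bool> g_log_enabled{false};

extern "C" void toggle_logging(int /*signum*/) {
    // A lock-free atomic is safe to touch from a signal handler.
    g_log_enabled.store(!g_log_enabled.load());
}

void install_toggle_handler() {
    std::signal(SIGUSR1, toggle_logging);
}

// Called around each query by the instrumented application; writes to
// the log only while logging is enabled.
void log_query(const char* sql, double ms) {
    if (g_log_enabled.load())
        std::fprintf(stderr, "%s took %.3f ms\n", sql, ms);
}
```

From the outside, `kill -USR1 <pid>` then toggles logging while the service keeps running.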

If you don't have access to the application, then I think the best way would be what MacGucky suggested: let the profiling/monitoring tools of the DBMS do it. E.g. MS-SQL has a nice profiler that can capture requests to the server, including all kinds of useful data (CPU time for each request, IO time, wait time etc.).

And if it's really SQLite (plus you don't have access to the source) then your chances are rather low. If the program in question uses SQLite as a DLL, then you could substitute your own version of SQLite, modified to write the necessary log files.

木有鱼丸 2024-10-31 07:59:06

Use Apache JMeter to test the performance of your SQL queries under high load.
