Oracle 11g - AWR report analysis help
I am a newbie to Oracle and I created an AWR report from Enterprise Manager. Here is a small snapshot of the report.
Logons: 0.01 per second - 0.02 per transaction
In-memory Sort % 100.00
Executes 19.0 per second - 27.7 per transaction
% SQL with execution >1 Begin: 97.45 - End: 94.41
Buffer Cache: Begin: 240M - End: 240M
Rollbacks 0.2 per second - 0.2 per transaction
Buffer Hit % 99.98
Shared Pool Size: Begin: 1,024M - End: 1,024M
Can you guys let me know what each of the terms like 'Logons' and 'Executes' stands for, and how they can be used to assess database performance?
Sorry if this question looks stupid, but I couldn't find an explanation anywhere on the net.
Logons are pretty simple - how many times a new user / session has been logged onto the system. Generally you don't want lots of logons per second as it is a pretty slow operation. If you do, then you probably should look at connection pooling.
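To illustrate the pooling idea (this is a minimal sketch, not any particular driver's API; `fake_connect` is a hypothetical stand-in for a real database connect call), the slow logon happens once per pooled connection, and every later request reuses one of those sessions:

```python
import queue

class ConnectionPool:
    """Minimal sketch of connection pooling: pay the slow logon cost
    once per pooled connection, then reuse sessions across requests."""
    def __init__(self, connect, size):
        self._pool = queue.Queue()
        for _ in range(size):          # logons happen up front, once each
            self._pool.put(connect())

    def acquire(self):
        return self._pool.get()        # reuse an existing session

    def release(self, conn):
        self._pool.put(conn)

# Hypothetical connect function standing in for a real driver call.
logons = 0
def fake_connect():
    global logons
    logons += 1
    return object()

pool = ConnectionPool(fake_connect, size=3)
for _ in range(100):                   # 100 requests, still only 3 logons
    conn = pool.acquire()
    pool.release(conn)
print(logons)  # → 3
```

With a pool, the "Logons per second" figure in the AWR load profile stays low no matter how many requests the application serves.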
Sorts - often you want data sorted (date, alphabetically). Small data sets can be sorted in memory. Larger ones can spill to disk, which is slower. If you are doing all your sorts in memory, that doesn't suggest a problem.
Executes - SQL generally goes through PARSE, BIND, EXECUTE, FETCH. You may have multiple fetches per execution (fetch the first 10 rows, the next 10, etc.). Equally, some SQL statements don't have a fetch (e.g. an insert). A transaction consists of a number of SQL statements. If you have 20-30 SQLs per transaction, then you have some reasonable complexity. Not every statement is an isolated transaction in its own right. Executes per second is the more fundamental figure. See my closing comment.
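As a side note, the two "Executes" figures in your own load profile imply the transaction rate, since the per-second and per-transaction values are just the same count divided by different denominators:

```python
# From the AWR load profile in the question:
executes_per_second = 19.0
executes_per_transaction = 27.7

# Dividing the two recovers the transaction rate.
transactions_per_second = executes_per_second / executes_per_transaction
print(round(transactions_per_second, 2))  # → 0.69
```

So this system was running well under one transaction per second during the snapshot interval.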
% SQL with execution > 1 - You can have multiple binds and executes per parse (a good thing, as parsing can be expensive). Most of your SQL statements are getting multiple executions, which is good.
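A toy simulation (not Oracle's actual shared pool, just a cache keyed on statement text, which is the gist of cursor sharing) shows why bind variables drive this figure up: literal SQL produces a new statement text, and hence a new parse, for every value, while bind SQL keeps one text that is parsed once and executed many times:

```python
# Sketch of a shared-SQL-area-style cache keyed on statement text.
parse_cache = {}

def execute(sql_text):
    if sql_text not in parse_cache:     # cache miss = hard parse (expensive)
        parse_cache[sql_text] = 0
    parse_cache[sql_text] += 1          # execution count for this cursor

for emp_id in range(100):
    execute(f"SELECT * FROM emp WHERE id = {emp_id}")  # literals: 100 parses
for emp_id in range(100):
    execute("SELECT * FROM emp WHERE id = :id")        # bind: 1 parse

literal_cursors = sum(1 for text in parse_cache if ":id" not in text)
print(literal_cursors)                                  # → 100
print(parse_cache["SELECT * FROM emp WHERE id = :id"])  # → 100
```

The 100 literal statements each execute exactly once; the single bind statement executes 100 times, which is the pattern a high "% SQL with execution > 1" reflects.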
Buffer Cache - Amount of memory for copies of data blocks. There's no 'good' or 'bad', as it depends on the memory on the server.
Rollbacks - 0.2 per transaction is... odd. It suggests that 20% of your transactions are rolled back rather than committed. It might be nothing to worry about; it might just be the way a back or cancel button works. Unless there are lots of errors being thrown that force rollbacks, it isn't a database problem, just the way the application works.
Buffer Hit % - Percentage of data block reads coming direct from memory without needing to go to disk. Reading from memory is faster than disk, so high is 'good' (especially for OLTP apps - data warehouses generally deal with more data than can fit in memory). But don't get excited about ratios. If you can reduce physical IOs (reads from disk), good, but there's no benefit in generating additional reads of blocks in memory just to push up the ratio.
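In its simplest form the ratio is just one minus the fraction of block gets that needed a physical read (AWR derives it from the underlying logical-read and physical-read statistics; the numbers below are purely illustrative, chosen to match your 99.98):

```python
def buffer_hit_pct(logical_reads, physical_reads):
    """Simplified buffer cache hit ratio: the percentage of block
    gets satisfied from memory without a physical (disk) read."""
    return 100.0 * (1 - physical_reads / logical_reads)

# Illustrative numbers only: 1,000,000 block gets, 200 disk reads.
print(round(buffer_hit_pct(1_000_000, 200), 2))  # → 99.98
```

Note how insensitive the figure is: you could double the physical reads here and the ratio would still look excellent, which is exactly why chasing the ratio itself is pointless.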
Shared Pool Size - Again, it is a measurement of memory.
========================================================================
Ultimately, none of these are great for measuring database performance. What matters is if the application users (or developers or managers) are saying that their programs are or are not meeting their performance specifications.
Executes per second is an important measure, but only against your required workload. If users want to do 50 executions per second but are only managing 20, and are working overtime as a result, then you have an issue. If they only need to do 10 a second and they spend half the day on YouTube, you are doing fine.
Equally, if that measurement is for eight hours of a working day, it could be a very different story if it covers 24 hours where everyone is home for 16 of them.