Flex profiling (Flex Builder): comparing two results

Posted 2024-08-16 03:38:53


I am trying to use the Flex Profiler to improve application performance (loading time, etc.). I have seen the profiler results for the current design. I want to compare these results with those of a new design on the same set of data. Is there some direct way to do this? I don't know of any way to save the current profiling results in a history and compare them later with the results of the new design. Otherwise I have to do it manually: write the two sets of results down in a notepad and then compare them.

Thanks in advance.


Comments (1)

櫻之舞 2024-08-23 03:38:53


Your stated goal is to improve aspects of the application's performance (loading time, etc.). I have had similar issues in other languages (C#, C++, C, etc.). I suggest that you focus not so much on the timing measurements the Flex profiler gives you, but rather use it to extract a small number of samples of the call stack while the application is being slow. Don't deal in summaries; examine those stack samples closely. This may bend your mind a little, because it will not give you particularly precise time measurements. What it will tell you is which lines of code you need to focus on to get your speedup, and it will give you a very rough idea of how much speedup you can expect. To get the exact amount of speedup, you can time it afterward. (I just use a stopwatch. If I'm getting the load time down from 2 minutes to 10 seconds, timing it is not a high-tech problem.)
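
If you would rather time it in code than with a physical stopwatch, a minimal sketch using flash.utils.getTimer() will do; loadData() below is just a hypothetical stand-in for whatever slow startup work you are measuring:

    import flash.utils.getTimer;

    function loadData():void {
        // placeholder for the startup work being optimized
        for (var i:int = 0; i < 1000000; i++) { Math.sqrt(i); }
    }

    var start:int = getTimer();            // milliseconds since the player started
    loadData();
    var elapsed:int = getTimer() - start;
    trace("load took " + elapsed + " ms"); // shows in the debug player's output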

(If you are wondering how and why this works: the program is slower than it will be after your fix because it is requesting work to be done, mostly via method calls, that you are going to avoid executing so much. For the amount of time being spent in those method calls, they sit exposed on the stack, where you can easily see them. For example, if there is a line of code that is costing you 60% of the time and you take 5 stack samples, it will appear on roughly 3 of them, plus or minus 1, regardless of whether it is executed once or a million times. So any such line that shows up on multiple stack samples is a possible target for optimization, and the targets for optimization will appear on multiple stack samples if you take enough of them.)
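
The "3 samples, plus or minus 1" figure is just binomial arithmetic; here is a tiny sketch of it (plain ActionScript, nothing Flex-specific):

    var n:int = 5;          // stack samples taken
    var p:Number = 0.6;     // fraction of time the costly line is on the stack
    var mean:Number = n * p;                       // 3.0 expected appearances
    var sd:Number   = Math.sqrt(n * p * (1 - p));  // ~1.1, the "plus or minus 1"
    trace("expect about " + mean + " of " + n + " samples, +/- " + sd.toFixed(1));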

The hard part about this is learning not to be distracted by all the profiling results that are irrelevant. Milliseconds, average or total, for methods are irrelevant. Invocation counts are irrelevant. "Self time" is irrelevant. The call graph is irrelevant. Some packages worry about recursion; it's irrelevant. CPU-bound versus I/O-bound: irrelevant. What is relevant is the fraction of stack samples on which individual lines of code appear.
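
In practice you would just eyeball four or five pasted traces, but here is a hand-rolled sketch of tallying the only number that matters, the fraction of samples each frame appears on (the frame names are made up for illustration):

    // Each string is one stack sample pasted from the debugger (hypothetical frames).
    var samples:Array = [
        "at DataLoader/parseXML()\nat DataLoader/load()\nat Main/init()",
        "at DataLoader/parseXML()\nat DataLoader/load()\nat Main/init()",
        "at Renderer/draw()\nat Main/onFrame()"
    ];

    var counts:Object = {};
    for each (var sample:String in samples) {
        var seen:Object = {};                      // count each frame once per sample
        for each (var frame:String in sample.split("\n")) {
            if (!seen[frame]) {
                seen[frame] = true;
                counts[frame] = (counts[frame] || 0) + 1;
            }
        }
    }
    for (var f:String in counts) {
        trace(f + " appears on " + counts[f] + " of " + samples.length + " samples");
    }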

ADDED: If you do this, you'll notice a "magnification effect". Suppose you have two independent performance problems, A and B, where A costs 50% and B costs 25%. If you fix A, total time drops by 50%, so now B takes 50% of the remaining time and is easier to find. On the other hand, if you happen to fix B first, time drops by 25%, so A is magnified to 67%. Any problem you fix makes the others appear bigger, so you can keep going until you just can't squeeze it any more.
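
The arithmetic behind that magnification, as a one-off sketch:

    var a:Number = 0.50;  // problem A's share of total time
    var b:Number = 0.25;  // problem B's share of total time
    trace("fix A first: B grows to " + b / (1 - a) + " of what remains");  // 0.5
    trace("fix B first: A grows to " + a / (1 - b) + " of what remains");  // ~0.67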
