Diagnosing a Legacy .NET Application



Assume you are taking over a legacy .NET application written in C#.

What are the top 5 diagnostic measures, profiling or otherwise, that you would employ to assess the health of the application?

I am not just looking at the "WHAT" part of diagnosis but also at the "HOW". For example, it is indeed necessary to assess fast/optimum response times of the app... but is there a way to establish/measure this by technical diagnosis of the code base, instead of just gathering user-experience feedback?

(image source: gmu.edu)

And yes, there are bound to be some awesome tools that you use for the purpose... it would be great if you listed them too.


筱武穆 2024-09-05 19:37:12


1. User Perception

The very first thing I'd do is simply survey the users. Remember, they are the ones we are doing this for. However horrible an application may look inside, if the users love it (or at least don't actively dislike it) then you don't want to immediately start ripping it apart.

I'd want to ask questions such as:

  • Does it run smoothly?
  • Is it easy to use?
  • When you use it, do you feel confident that it's doing what you expect?
  • Is it a BMW, a Civic, or a Pinto?

The answers will be subjective. That's okay. At this point we're just looking for broad trends. If an overwhelming number of users say that it crashes all the time, or that they're afraid to perform basic tasks, then you're in trouble.

If the app breeds superstition, and you hear things like "it seems to flake out on Thursday mornings" or "I don't know what this button does, but it doesn't work unless I click it first", run for the hills.

2. Documentation

A lack of documentation, or documentation that is hideously out of date, is a sure sign of a sick application. No documentation means that development staff cut corners, or are so overworked with the constant death march that they just can't find the time for this kind of "unnecessary" work.

I'm not talking user manuals - a well-designed app shouldn't need them - I mean technical documentation, how the architecture looks, what the components do, environmental dependencies, configuration settings, requirements/user stories, test cases/test plans, file formats, you get the idea. A defect tracking system is also an essential part of documentation.

Developers end up making (incorrect) assumptions in the absence of proper documentation. I've spoken to several people in the industry who think that this is optional, but every system I have ever seen or worked on that had little or no documentation ended up being riddled with bugs and design flaws.

3. Tests

No better way to judge the health of an application than by its own tests, if they're available. Unit tests, code coverage, integration tests, even manual tests, anything works here. The more complete the suite of tests, the better the chance of the system being healthy.

Successful tests don't guarantee much at all, other than that the specific features being tested work the way that the people who wrote the tests expect them to. But a lot of failing tests, or tests that haven't been updated in years, or no tests at all - those are red flags.

I can't point to specific tools here because every team uses different tools for testing. Work with whatever is already in production.

4. Static Analysis

Some of you probably immediately thought "FxCop." Not yet. The first thing I'd do is break out NDepend.

Just a quick look at the dependency tree of an application will give you enormous amounts of information about how well the application is designed. Most of the worst design anti-patterns - the Big Ball of Mud, Circular Dependencies, Spaghetti Code, God Objects - will be visible almost immediately from just a bird's-eye view of the dependencies.
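NDepend also lets you interrogate the codebase directly with CQLinq queries, which is handy for putting hard numbers on what the dependency view suggests. A sketch of the kind of rule I'd run first (the syntax is close to real CQLinq, but verify it against your NDepend version; the threshold of 20 is my own arbitrary pick):

    // CQLinq-style query (NDepend) - flag methods that are likely trouble spots.
    warnif count > 0
    from m in JustMyCode.Methods
    where m.CyclomaticComplexity > 20   // arbitrary starting threshold
    orderby m.CyclomaticComplexity descending
    select new { m, m.CyclomaticComplexity }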

Next, I would run a full build, turning on the "treat warnings as errors" setting. Ignoring specific warnings through compiler directives or flags is alright most of the time, but literally ignoring the warnings spells trouble. Again, this won't guarantee you that everything is OK or that anything is broken, but it's a very useful heuristic in determining the level of care that went into the actual coding phase.
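For reference, a minimal sketch of where that setting lives (TreatWarningsAsErrors is a standard MSBuild property; the project file is whatever your app already has):

    <!-- In each *.csproj, inside a PropertyGroup: fail the build on any warning -->
    <PropertyGroup>
      <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
    </PropertyGroup>

You can also force it for a one-off build from the command line with msbuild LegacyApp.sln /p:TreatWarningsAsErrors=true (the solution name here is hypothetical).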

After I am satisfied that the overall design/architecture is not complete garbage, then I would look at FxCop. I don't take its output as gospel, but I am specifically interested in Design Warnings and Usage Warnings (security warnings are also a red flag but very rare).

5. Runtime Analysis

At this point I am already satisfied that the application, at a high level, is not an enormous mound of suck. This phase would vary quite a bit with respect to the specific application under the microscope, but some good things to do are:

  • Log all first-chance exceptions under a normal run. This will help to gauge the robustness of the application, to see if too many exceptions are being swallowed or if exceptions are being used as flow control. If you see a lot of top-level Exception instances or SystemException derivatives appearing, be afraid. (A minimal logging sketch follows this list.)

  • Run it through a profiler such as EQATEC. That should help you fairly easily identify any serious performance problems. If the application uses a SQL back-end, use a SQL profiling tool to watch queries. (Really there is a separate set of steps for testing the health of a database, which is a critical part of testing an application that's based on one, but I don't want to get too far off-topic.)

  • Watch a few users - look especially for "rituals", things they do for apparently no reason. These are usually the sign of lingering bugs and ticking time bombs. Also look to see if it generates a lot of error messages, locks up the UI for long periods while "thinking", and so on. Basically, anything you'd personally hate to see as a user.

  • Stress tests. Again, the specific tools depend on the application, but this is especially applicable to server-based apps. See if the application can still function under heavy load. If it starts timing out near the breaking point, that's OK; if it starts generating bizarre error messages or, worse, seems to corrupt data or state, that's a very bad sign.
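For the first-chance exception bullet, a minimal sketch of what the hook looks like (the AppDomain.FirstChanceException event is real, available from .NET 4.0 onwards; the class name and console output are mine - route it to your logging framework in practice, and keep the handler trivial, since throwing inside it re-enters the event):

    using System;
    using System.Runtime.ExceptionServices;

    static class FirstChanceLogger
    {
        // Call once at startup. Fires for every exception at the moment it is
        // thrown, even if a catch block later swallows it silently.
        public static void Install()
        {
            AppDomain.CurrentDomain.FirstChanceException +=
                (object sender, FirstChanceExceptionEventArgs e) =>
                    Console.Error.WriteLine("[first-chance] {0}: {1}",
                        e.Exception.GetType().FullName, e.Exception.Message);
        }
    }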


And that's about all I can think of for now. I'll update if any more come to mind.

陌路终见情 2024-09-05 19:37:12


These aren't coding tips or profiling advice, but a general way of assessing the health of a program in any language. In order of importance:

  1. Is the end user happy with it?
  2. Is it stable?
  3. Is it robust?
  4. Is it fast?
  5. Is the memory footprint stable over long periods and what I would expect?

If the answer to all 5 of those questions is yes, then you have a healthy application. I would argue that 1-3 are really the most important. It may not be pretty on the inside - possibly downright ugly - but if it meets those specifications it is healthy, and it should forever remain in legacy mode (i.e. small bugfixes only).

茶花眉 2024-09-05 19:37:12


I would suggest writing tests around certain areas. I'm not a massive fan of unit tests - although I end up writing quite a few of them. I prefer system tests that test parts of the system - from the domain down, the service down, the presenter down, etc. - not necessarily the whole system, but parts of it. If you're looking for efficiency, these tests can run a Stopwatch around the code and fail if it takes too long.
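A minimal sketch of one such stopwatch-wrapped test (NUnit is assumed here, and OrderService/GetOutstandingOrders are hypothetical stand-ins for whatever slice of the legacy system you're exercising; the 500 ms budget is arbitrary):

    using System.Diagnostics;
    using NUnit.Framework;

    [TestFixture]
    public class OrderQueryTimingTests
    {
        [Test]
        public void GetOutstandingOrders_StaysWithinTimeBudget()
        {
            var service = new OrderService();   // hypothetical system under test
            var stopwatch = Stopwatch.StartNew();

            service.GetOutstandingOrders();     // the slice being exercised

            stopwatch.Stop();
            Assert.That(stopwatch.ElapsedMilliseconds, Is.LessThan(500),
                "Exceeded the 500 ms budget - investigate before optimising blindly.");
        }
    }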

Another nice thing to do is run standard tasks through ANTS Profiler from Red Gate or dotTrace from JetBrains. They'll tell you what's taking the time and how many times it's been run, meaning you can see what can be optimised or cached.

If you're using NHibernate then NHProf is great (or I think Ayende has now released UberProf, which covers more DB access strategies). It will warn you of any stupid DB access going on. Failing that, just using the SQL Server Profiler might show you the same data being requested again and again, but it will take more effort to filter out the rubbish. If you do end up using it, you can save the trace to a DB table, which you can then query in a more intelligent way.

If you're looking for robustness, a good thing to use is a logging strategy - catch all exceptions and log them. This is easy enough to set up using log4net. Also log when you hit certain points that you're slightly suspicious of. Then feed this into a server (I use Kiwi Syslog Server, which is easy to set up and quite powerful) that can write to a DB, and run analysis on the results. I would recommend against the ADO.NET appender for log4net, as it is not async and so will slow your app down.
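A minimal sketch of that setup (ILog, LogManager, and XmlConfigurator are log4net's real API; Program and RunApplication are stand-ins for the app's actual entry point):

    using System;
    using log4net;
    using log4net.Config;

    static class Program
    {
        private static readonly ILog Log = LogManager.GetLogger(typeof(Program));

        static void Main()
        {
            XmlConfigurator.Configure();   // reads appender config from app.config

            // Last-resort net for anything no catch block handled.
            AppDomain.CurrentDomain.UnhandledException += (s, e) =>
                Log.Fatal("Unhandled exception", e.ExceptionObject as Exception);

            try { RunApplication(); }      // hypothetical real entry point
            catch (Exception ex)
            {
                Log.Error("Top-level failure", ex);
                throw;                     // log, then let it fail loudly
            }
        }

        static void RunApplication() { /* the legacy app's existing startup */ }
    }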

Finally, depending on what the app is, if you're really keen on spending some time testing its health, you can use WatiN or the WinForms equivalent to test the front end. This could even be a prolonged test, watching the memory/processor usage of the application while it's being used. If you're not that worried, the Windows Performance Analyzer will let you look at various aspects of the application while you use it. Always useful, but you have to really poke around to get useful metrics.

Hope this helps.

别想她 2024-09-05 19:37:12


The first two big items I would look into would be:

  1. Adding global exception handling with logging, as well as searching for any exception handling that might be "swallowing" exceptions and hiding problems with your application (I think there is also a Windows performance counter that exposes the number of exceptions thrown per second by your application). This can help to uncover any potential data consistency issues in your application.
  2. Add some performance monitoring and logging to any data persistence and/or external network service dependencies the application might use, such as logging database queries or web service calls that take longer than X amount of time to complete (a minimal sketch follows this list).
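A sketch of point 2 under stated assumptions: TimedDb, the 250 ms threshold, and the Console.Error output are all my own placeholders - in practice, route the message through the same logging pipeline as everything else.

    using System;
    using System.Data.SqlClient;
    using System.Diagnostics;

    static class TimedDb
    {
        private static readonly TimeSpan Threshold = TimeSpan.FromMilliseconds(250);

        // Wrap existing call sites so anything slower than the threshold gets logged.
        public static int ExecuteNonQuery(SqlCommand command)
        {
            var stopwatch = Stopwatch.StartNew();
            try { return command.ExecuteNonQuery(); }
            finally
            {
                stopwatch.Stop();
                if (stopwatch.Elapsed > Threshold)
                    Console.Error.WriteLine("[slow query] {0} ms: {1}",
                        stopwatch.ElapsedMilliseconds, command.CommandText);
            }
        }
    }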
狼性发作 2024-09-05 19:37:12


If this interacts with a database, you should get a feel for Disk I/O and the degree of fragmentation of the disk array / hard drive. For MS SQL, analyze any stored procedures and review the indexes and primary keys on the tables.

You really do not need tools for this, just the grunt work of reviewing counters and talking with the DBA.
