Embedded software defect rate

What defect rate can I expect in a C++ codebase that is written for an embedded processor (DSP), given that there have been no unit tests, no code reviews, no static code analysis, and that compiling the project generates about 1500 warnings? Is 5 defects/100 lines of code a reasonable estimate?

Comments (7)

月依秋水 2024-10-06 12:21:12

Your question is "Is 5 defects/100 lines of code a reasonable estimate?" That question is extremely difficult to answer, and it's highly dependent on the codebase & code complexity.

You also mentioned in a comment "to show the management that there are probably lots of bugs in the codebase" -- that's great, kudos, right on.

In order to open management's figurative eyes, I'd suggest at least a 3-pronged approach:

  • take specific compiler warnings, and show how some of them can cause undefined / disastrous behavior. Not all warnings will be as weighty. For example, if you have someone using an uninitialized pointer, that's pure gold. If you have someone stuffing an unsigned 16-bit value into an unsigned 8-bit value, and it can be shown that the 16-bit value will always be <= 255, that one isn't gonna help make your case as strongly (see the sketch after this list).
  • run a static analysis tool. PC-Lint (or Flexelint) is cheap & provides good "bang for the buck". It will almost certainly catch stuff the compiler won't, and it can also run across translation units (lint everything together, even with 2 or more passes) and find more subtle bugs. Again, use some of these as indications.
  • run a tool that will give other metrics on code complexity, another source of bugs. I'd recommend M Squared's Resource Standard Metrics (RSM) which will give you more information and metrics (including code complexity) than you could hope for. When you tell management that a complexity score over 50 is "basically untestable" and you have a score of 200 in one routine, that should open some eyes.
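
To make the first bullet concrete, here is a minimal, hypothetical sketch of the two kinds of warnings. The function and variable names are invented for illustration, and the exact warning wording varies by compiler:

    #include <cstdint>

    // Hypothetical warning examples, invented for illustration only.

    static int read_sample(const int *p) { return *p; }

    int exhibit_a()
    {
        int *sample_ptr;                 // warning: 'sample_ptr' may be used uninitialized
        return read_sample(sample_ptr);  // undefined behavior -- the "pure gold" case
    }

    int exhibit_b(std::uint16_t gain)
    {
        std::uint8_t g = gain;  // warning: conversion may lose data (16-bit -> 8-bit)
        return g;               // benign if gain provably stays <= 255 -- a much weaker exhibit
    }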

One other point: I require clean compiles in my groups, and clean Lint output too. Usually this can be accomplished solely by writing good code, but occasionally the compiler / lint warnings need to be tweaked to quiet the tool for things that aren't problems (use judiciously).
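
As a sketch of what "tweaked judiciously" can look like: GCC/Clang pragma syntax is shown here; most DSP toolchains and PC-Lint offer their own equivalents (option files, inline suppression comments). The function and its API are hypothetical:

    #include <cstdio>

    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wunused-parameter"
    // The callback signature is dictated by a (hypothetical) driver API;
    // 'context' really is unused here, so this particular warning carries
    // no information -- silence it for this one function only.
    void on_dma_complete(void *context, int channel)
    {
        std::printf("DMA complete on channel %d\n", channel);
    }
    #pragma GCC diagnostic pop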

But the important point I want to make is this: be very careful when going in & fixing compiler & lint warnings. It's an admirable goal, but you can also inadvertently break working code, and/or uncover undefined behavior that accidentally worked in the "broken" code. Yes, this really does happen. So tread carefully.
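
A contrived sketch of the kind of thing that can bite you (hypothetical names; whether it "works" is entirely compiler/ABI dependent):

    // Undefined behavior that happens to work: the function falls off the end
    // without a return statement, but on some compilers/ABIs the last computed
    // value is still sitting in the return register, so callers get the right
    // answer anyway.
    int scale_q15(int sample, int gain)
    {
        int result = (sample * gain) >> 15;
        // warning: control reaches end of non-void function
        // (someone forgot 'return result;')
    }
    // A hurried "warning fix" that adds, say, 'return 0;' now compiles cleanly
    // -- and silently changes the behavior the rest of the system depended on.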

Lastly, if you have a solid set of tests already in place, that will help you determine if you accidentally break something while refactoring.

Good luck!

过潦 2024-10-06 12:21:12

Despite my scepticism of the validity of any estimate in this case, I have found some statistics that may be relevant.

In this article, the author cites figures from "a large body of empirical studies", published in Software Assessments, Benchmarks, and Best Practices (Jones, 2000). At SEI CMM Level 1, which sounds like the level of this code, one can expect a defect rate of 0.75 per function point. I'll leave it to you to determine how function points and LOC may relate in your code - you'll probably need a metrics tool to perform that analysis.
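
Purely to illustrate the unit conversion (the figures are rough rules of thumb, not measurements of your code): the "backfiring" tables often quoted alongside Jones's work put C++ at somewhere around 50-55 lines of code per function point, so 0.75 defects per function point works out to roughly 0.75 / 53 ≈ 0.014 defects per line, i.e. on the order of 1.4 defects per 100 LOC.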

Steve McConnell in Code Complete cites a study of 11 projects developed by the same team, 5 without code reviews, 6 with code reviews. The defect rate for the non-reviewed code was 4.5 per 100 LOC, and for the reviewed it was 0.82. So on that basis, your estimate seems fair in the absence of any other information. However I have to assume a level of professionalism amongst this team (just from the fact that they felt the need to perform the study), and that they would have at least attended to the warnings; your defect rate could be much higher.

The point about warnings is that some are benign, and some are errors (i.e. will result in undesired behaviour of the software); if you ignore them on the assumption that they are all benign, you will introduce errors. Moreover, some will become errors under maintenance when other conditions change, but if you have already chosen to accept a warning, you have no defence against the introduction of such errors.
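
A hypothetical sketch of how an "accepted" warning can turn into a defect when conditions change (names and values are invented):

    #include <cstdint>

    // The truncation warning below was reviewed once and accepted, back when
    // MAX_BLOCK_LEN was 128 and always fit in 8 bits...
    constexpr std::uint16_t MAX_BLOCK_LEN = 512;  // ...then maintenance raised the limit.

    std::uint8_t clamp_block_len(std::uint16_t requested)
    {
        std::uint8_t len = (requested > MAX_BLOCK_LEN) ? MAX_BLOCK_LEN : requested;
        // ^ warning: conversion may lose data. With the new limit, 512 silently
        //   truncates to 0, and because the warning was already "accepted",
        //   nothing flags the change.
        return len;
    }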

善良天后 2024-10-06 12:21:12

Take a look at the code quality. It will quickly give you an indication of the number of problems hiding in the source. If the source is ugly and takes a long time to understand, there will be a lot of bugs in the code.

Well-structured code with a consistent style that is easy to understand is going to contain fewer problems. Code shows how much effort and thought went into it.

My guess is that if the source contains that many warnings, there are going to be a lot of bugs hiding in the code.

寄人书 2024-10-06 12:21:12

That also depends on who wrote the code (level of experience), and how big the code base is.

I would treat all warnings as errors.
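
(With GCC or Clang that is typically the -Werror switch; most embedded toolchains have an equivalent "warnings as errors" option.)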

How many errors do you get when you run a static analysis tool on the code?

EDIT

Run cccc, and check McCabe's cyclomatic complexity. It should tell you how complex the code is.

http://sourceforge.net/projects/cccc/
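
For context on what the number means (tools differ slightly in exactly what they count), McCabe's metric is roughly 1 plus one for each decision point in a routine. A hypothetical example:

    // Cyclomatic complexity = 1 (for the function) + one per decision point.
    int classify(int x, bool strict)
    {
        if (x < 0) {                    // +1
            return -1;
        }
        for (int i = 0; i < x; ++i) {   // +1
            if (strict && i == x / 2)   // +1 for the 'if', +1 for '&&'
                return i;
        }
        return 0;
    }                                   // total: 5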

Run other static analysis tools.

琉璃繁缕 2024-10-06 12:21:12

If you want to get an estimate of the number of defects, the usual way of statistical estimation is to subsample the data. I would pick three medium-sized subroutines at random, and check them carefully for bugs (eliminate compiler warnings, run static analysis tools, etc.). If you find three bugs in 100 total lines of code selected at random, it seems reasonable that a similar density of bugs is in the rest of the code.
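
The extrapolation itself is just scaling. A minimal sketch with made-up numbers (100 sampled lines, 3 bugs found, a hypothetical 40,000-line codebase):

    #include <cstdio>

    int main()
    {
        const double sampled_lines  = 100.0;    // lines actually inspected
        const double bugs_in_sample = 3.0;      // defects found in the sample
        const double total_lines    = 40000.0;  // hypothetical codebase size

        const double density  = bugs_in_sample / sampled_lines;  // defects per line
        const double estimate = density * total_lines;           // ~1200 defects

        std::printf("estimated density: %.1f defects per 100 LOC\n", density * 100.0);
        std::printf("estimated total  : %.0f defects (very wide error bars "
                    "with a sample this small)\n", estimate);
        return 0;
    }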

The problem mentioned here of introducing new bugs is an important issue, but you don't need to check the modified code back into the production branch to run this test. I would suggest a thorough set of unit tests before modifying any subroutines, and cleaning up all the code followed by very thorough system testing before releasing new code to production.

踏月而来 2024-10-06 12:21:12

If you want to demonstrate the benefits of unit tests, code reviews, static analysis tools, I suggest doing a pilot study.

Do some unit tests, code reviews, and run static analysis tools on a portion of the code. Show management how many bugs you find using those methods. Hopefully, the results speak for themselves.

忆悲凉 2024-10-06 12:21:12

The following article has some numbers based on real-life projects to which static analysis has been applied: http://www.stsc.hill.af.mil/crosstalk/2003/11/0311German.html

Of course the criteria by which an anomaly is counted can affect the results dramatically, leading to the large variation in the figures shown in Table 1. In this table, the number of anomalies per thousand lines of code for C ranges from 500 (!) to about 10 (auto generated).
