Which of these two pieces of code is faster in Java?

a) for(int i = 100000; i > 0; i--) {}

b) for(int i = 1; i < 100001; i++) {}

The answer is there on this website (question 3); I just can't figure out why. From the website:

3. a

Comments (16)

故人的歌 2024-08-16 03:07:00

I've been running tests for about 15 minutes now, with nothing running other than Eclipse just in case, and I saw a real difference; you can try it out.

First, just to have an idea, I timed how long Java takes to do "nothing": it took around 500 nanoseconds.

Then I tested how long it takes to run a for statement where it increases:

for(i=0;i<100;i++){}

Then five minutes later I tried the "backwards" one:

for(i=100;i>0;i--)

And I got a huge difference (at a tiny absolute scale) of 16% between the first and the second for statements, the latter being 16% faster.

Average time for running the "increasing" for statement during 2000 tests: 1838 n/s

Average time for running the "decreasing" for statement during 2000 tests: 1555 n/s

Code used for such tests:

public static void main(String[] args) {
    long time = 0;
    for (int j = 0; j < 100; j++) {
        long startTime = System.nanoTime();
        int i;
        // Swap which of the two loops is commented out to time the other direction.
        /*for (i = 0; i < 100; i++) {
        }*/
        for (i = 100; i > 0; i--) {
        }
        long endTime = System.nanoTime();
        time += (endTime - startTime);
    }
    time = time / 100;
    System.out.print("Time: " + time);
}

Conclusion:
The difference is basically nothing. Doing "nothing" already takes a significant amount of time relative to the for-statement tests, which makes the difference between the two loops negligible; just importing a library such as java.util.Scanner takes far longer to load than running either for statement. It will not improve your application's performance significantly, but it's still really cool to know.
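
Not the answerer's code: a minimal sketch, under assumptions (the class name, warm-up counts, and repetition counts are arbitrary), of how the same measurement could be made a little less noisy by warming up first and timing both directions in one run. As later answers point out, the JIT may eliminate empty loops entirely, so any numbers it prints should be read with suspicion.

public class EmptyLoopTiming {

    static long averageNanosUp(int reps) {
        long start = System.nanoTime();
        for (int j = 0; j < reps; j++) {
            for (int i = 0; i < 100; i++) {
            }
        }
        return (System.nanoTime() - start) / reps;
    }

    static long averageNanosDown(int reps) {
        long start = System.nanoTime();
        for (int j = 0; j < reps; j++) {
            for (int i = 100; i > 0; i--) {
            }
        }
        return (System.nanoTime() - start) / reps;
    }

    public static void main(String[] args) {
        // Warm-up pass so the JIT has a chance to compile both methods first.
        averageNanosUp(10_000);
        averageNanosDown(10_000);
        System.out.println("up:   " + averageNanosUp(2_000) + " ns/loop");
        System.out.println("down: " + averageNanosDown(2_000) + " ns/loop");
    }
}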

轻许诺言 2024-08-16 03:06:59

When you get down to the lowest level (machine code but I'll use assembly since it maps one-to-one mostly), the difference between an empty loop decrementing to 0 and one incrementing to 50 (for example) is often along the lines of:

      ld  a,50                ld  a,0
loop: dec a             loop: inc a
      jnz loop                cmp a,50
                              jnz loop

That's because the zero flag in most sane CPUs is set by the decrement instruction when you reach zero. The same can't usually be said for the increment instruction when it reaches 50 (since there's nothing special about that value, unlike zero). So you need to compare the register with 50 to set the zero flag.


However, asking which of the two loops:

for(int i = 100000; i > 0; i--) {}
for(int i = 1; i < 100001; i++) {}

is faster (in pretty much any environment, Java or otherwise) is useless, since neither of them does anything useful. The fastest version of both those loops is no loop at all. I challenge anyone to come up with a faster version than that :-)

They'll only become useful when you start doing some useful work inside the braces and, at that point, the work will dictate which order you should use.

For example if you need to count from 1 to 100,000, you should use the second loop. That's because the advantage of counting down (if any) is likely to be swamped by the fact that you have to evaluate 100000-i inside the loop every time you need to use it. In assembly terms, that would be the difference between:

     ld  b,100000             dsw a
     sub b,a
     dsw b

(dsw is, of course, the infamous do something with assembler mnemonic).

Since you'll only be taking the hit for an incrementing loop once per iteration, and you'll be taking the hit for the subtraction at least once per iteration (assuming you'll be using i, otherwise there's little need for the loop at all), you should just go with the more natural version.

If you need to count up, count up. If you need to count down, count down.
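
A Java rendering of that point (an illustration, not code from the answer; the constant 100_001 is only there to recover the ascending value):

public class CountDirectionExample {
    public static void main(String[] args) {
        // Counting up when the ascending value is what you actually need:
        long sumUp = 0;
        for (int i = 1; i <= 100_000; i++) {
            sumUp += i;                 // uses i directly
        }

        // Forcing a count-down loop for the same job means recomputing the
        // ascending value on every iteration:
        long sumDown = 0;
        for (int i = 100_000; i > 0; i--) {
            sumDown += (100_001 - i);   // the extra subtraction the answer warns about
        }

        System.out.println(sumUp + " == " + sumDown);   // both print 5000050000
    }
}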

假情假意假温柔 2024-08-16 03:06:59

On many compilers, the machine instructions emitted for a loop going backwards, are more efficient, because testing for zero (and therefore zero'ing a register) is faster than a load immediate of a constant value.

On the other hand, a good optimising compiler should be able to inspect the loop body and determine that going backwards won't cause any side effects...

BTW, that is a terrible interview question in my opinion. Unless you are talking about a loop which runs 10 million times, AND you have ascertained that the slight gain is not outweighed by the many instances of recreating the forward loop value (n - i), any performance gain will be minimal.

As always, don't micro-optimise without performance benchmarks, and don't do it at the expense of harder-to-understand code.

音栖息无 2024-08-16 03:06:59

These kinds of questions are largely an irrelevant distraction that some people get obsessed with. Call it the Cult of Micro-optimization or whatever you like, but is it faster to loop up or down? Seriously? You use whichever is appropriate for what you're doing. You don't write your code around saving two clock cycles or whatever it is.

Let the compiler do what it's for and make your intent clear (both to the compiler and the reader). Another common Java pessimization is:

public final static String BLAH = new StringBuilder().append("This is ").append(3).append(" test").toString();

because excessive concatenation does result in memory fragmentation but for a constant the compiler can (and will) optimize this:

public final static String BLAH = "This is a " + 3 + " test";

whereas it won't optimize the first, and the second is easier to read.

And how about (a>b)?a:b vs Math.max(a,b)? I know I'd rather read the second so I don't really care that the first doesn't incur a function call overhead.

There are a couple of useful things in this list, like knowing that a finally block isn't called on System.exit(). Knowing that dividing a float by 0.0 doesn't throw an exception is useful too.

But don't bother second-guessing the compiler unless it really matters (and I bet you that 99.99% of the time it doesn't).
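
To see that folding in action, here is a small hypothetical demo (the class and field names are made up): because the initializer is a compile-time constant expression, javac folds it into a single interned string literal.

public final class ConstantFoldingDemo {
    // A compile-time constant expression (JLS 15.28): javac folds this into
    // the single literal "This is a 3 test" in the class file's constant pool.
    public static final String BLAH = "This is a " + 3 + " test";

    public static void main(String[] args) {
        // Prints true: both sides refer to the same interned constant.
        System.out.println(BLAH == "This is a 3 test");
    }
}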

虐人心 2024-08-16 03:06:59

A better question is;

Which is easier to understand/work with?

This is far more important than a notional difference in performance. Personally, I would point out that performance shouldn't be the criterion for deciding between them here. If they didn't like me challenging their assumption on this, I wouldn't be unhappy about not getting the job. ;)

童话 2024-08-16 03:06:59

On a modern Java implementation this is not true.
Summing up the numbers up to one billion as a benchmark:

Java(TM) SE Runtime Environment 1.6.0_05-b13
Java HotSpot(TM) Server VM 10.0-b19
up 1000000000: 1817ms 1.817ns/iteration (sum 499999999500000000)
up 1000000000: 1786ms 1.786ns/iteration (sum 499999999500000000)
up 1000000000: 1778ms 1.778ns/iteration (sum 499999999500000000)
up 1000000000: 1769ms 1.769ns/iteration (sum 499999999500000000)
up 1000000000: 1769ms 1.769ns/iteration (sum 499999999500000000)
up 1000000000: 1766ms 1.766ns/iteration (sum 499999999500000000)
up 1000000000: 1776ms 1.776ns/iteration (sum 499999999500000000)
up 1000000000: 1768ms 1.768ns/iteration (sum 499999999500000000)
up 1000000000: 1771ms 1.771ns/iteration (sum 499999999500000000)
up 1000000000: 1768ms 1.768ns/iteration (sum 499999999500000000)
down 1000000000: 1847ms 1.847ns/iteration (sum 499999999500000000)
down 1000000000: 1842ms 1.842ns/iteration (sum 499999999500000000)
down 1000000000: 1838ms 1.838ns/iteration (sum 499999999500000000)
down 1000000000: 1832ms 1.832ns/iteration (sum 499999999500000000)
down 1000000000: 1842ms 1.842ns/iteration (sum 499999999500000000)
down 1000000000: 1838ms 1.838ns/iteration (sum 499999999500000000)
down 1000000000: 1838ms 1.838ns/iteration (sum 499999999500000000)
down 1000000000: 1847ms 1.847ns/iteration (sum 499999999500000000)
down 1000000000: 1839ms 1.839ns/iteration (sum 499999999500000000)
down 1000000000: 1838ms 1.838ns/iteration (sum 499999999500000000)

Note that the time differences are brittle, small changes somewhere near the loops can turn them around.

Edit:
The benchmark loops are

        long sum = 0;
        for (int i = 0; i < limit; i++)
        {
            sum += i;
        }

and

        long sum = 0;
        for (int i = limit - 1; i >= 0; i--)
        {
            sum += i;
        }

Using a sum of type int is about three times faster, but then sum overflows.
With BigInteger it is more than 50 times slower:

BigInteger up 1000000000: 105943ms 105.943ns/iteration (sum 499999999500000000)
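
For reference, a sketch of a harness that could produce output in the shape shown above; the class name, timing calls, and repetition counts are assumptions rather than the answerer's original code.

public final class UpDownSumBenchmark {

    static long sumUp(int limit) {
        long sum = 0;
        for (int i = 0; i < limit; i++) {
            sum += i;
        }
        return sum;
    }

    static long sumDown(int limit) {
        long sum = 0;
        for (int i = limit - 1; i >= 0; i--) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        final int limit = 1_000_000_000;
        for (int run = 0; run < 10; run++) {
            long start = System.currentTimeMillis();
            long sum = sumUp(limit);
            long ms = System.currentTimeMillis() - start;
            System.out.printf("up %d: %dms %.3fns/iteration (sum %d)%n",
                    limit, ms, ms * 1e6 / limit, sum);
        }
        for (int run = 0; run < 10; run++) {
            long start = System.currentTimeMillis();
            long sum = sumDown(limit);
            long ms = System.currentTimeMillis() - start;
            System.out.printf("down %d: %dms %.3fns/iteration (sum %d)%n",
                    limit, ms, ms * 1e6 / limit, sum);
        }
    }
}
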
热风软妹 2024-08-16 03:06:59

Typically real code will run faster counting upwards. There are a few reasons for this:

  • Processors are optimised for reading memory forwards (see the sketch after this answer).
  • HotSpot (and presumably other bytecode->native compilers) heavily optimise forward loops, but don't bother with backward loops because they happen so infrequently.
  • Upwards is usually more obvious, and cleaner code is often faster.

So happily doing the right thing will usually be faster. Unnecessary micro-optimisation is evil. I haven't purposefully written backward loops since programming 6502 assembler.
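
Illustrating the first bullet with a loop that actually touches memory (a hypothetical example, not from the answer): both methods compute the same sum, but the forward walk matches the sequential access pattern hardware prefetchers are tuned for.

public class TraversalDirection {

    static long sumForward(int[] data) {
        long sum = 0;
        for (int i = 0; i < data.length; i++) {
            sum += data[i];     // sequential, prefetch-friendly access
        }
        return sum;
    }

    static long sumBackward(int[] data) {
        long sum = 0;
        for (int i = data.length - 1; i >= 0; i--) {
            sum += data[i];     // same result, reversed access pattern
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] data = new int[10_000_000];
        java.util.Arrays.fill(data, 1);
        System.out.println(sumForward(data) + " " + sumBackward(data));
    }
}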

愿与i 2024-08-16 03:06:59

There are really only two ways to answer this question.

  1. To tell you that it really, really doesn't matter, and you're wasting your time even wondering.

  2. To tell you that the only way to know is to run a trustworthy benchmark on your actual production hardware, OS and JRE installation that you care about.

So, I made you a runnable benchmark you could use to try that out here:

http://code.google.com/p/caliper/source/browse/trunk/test/examples/LoopingBackwardsBenchmark.java

This Caliper framework is not really ready for prime time yet, so it may not be totally obvious what to do with this, but if you really care enough you can figure it out. Here are the results it gave on my linux box:

     max benchmark        ns
       2  Forwards         4
       2 Backwards         3
      20  Forwards         9
      20 Backwards        20
    2000  Forwards      1007
    2000 Backwards      1011
20000000  Forwards   9757363
20000000 Backwards  10303707

Does looping backwards look like a win to anyone?
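
(These days the usual harness for this kind of JVM microbenchmark is JMH rather than Caliper. A rough JMH version of the same experiment might look like the sketch below; the class is illustrative, not the benchmark linked above, and it assumes JMH is on the classpath.)

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class LoopDirectionBenchmark {

    @Param({"2", "20", "2000", "20000000"})
    int max;

    @Benchmark
    public int forwards() {
        int dummy = 0;
        for (int i = 0; i < max; i++) {
            dummy |= i;     // result is returned so the JIT can't drop the loop
        }
        return dummy;
    }

    @Benchmark
    public int backwards() {
        int dummy = 0;
        for (int i = max; i > 0; i--) {
            dummy |= i;
        }
        return dummy;
    }
}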

唐婉 2024-08-16 03:06:59

Are you sure that the interviewer who asks such a question expects a straight answer ("number one is faster" or "number two is faster"), or is the question asked to provoke a discussion, like the one happening in the answers people are giving here?

In general, it's impossible to say which one is faster, because it depends heavily on the Java compiler, the JRE, the CPU and other factors. Using one or the other in your program just because you think one of the two is faster, without understanding the details at the lowest level, is superstitious programming. And even if one version is faster than the other in your particular environment, the difference is most likely so small that it's irrelevant.

Write clear code instead of trying to be clever.

千紇 2024-08-16 03:06:59

Such questions are based on old best-practice recommendations.
It's all about the comparison: comparing to 0 is known to be faster. Years ago this might have been seen as quite important. Nowadays, especially with Java, I'd rather let the compiler and the VM do their job, and I'd focus on writing code that is easier to maintain and understand.

Unless there are reasons for doing it otherwise. Remember that Java apps don't always run on HotSpot and/or fast hardware.

二手情话 2024-08-16 03:06:59

With regard to testing for zero in the JVM: it can apparently be done with ifeq, whereas testing for anything else requires if_icmpeq, which also involves putting an extra value on the stack.

Testing for > 0, as in the question, might be done with ifgt, whereas testing for < 100001 would need if_icmplt.
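
A rough sketch of that mapping in source form (the exact instruction selection depends on the javac version and loop layout; running javap -c on the compiled class shows the real output):

// Rough mapping of each loop condition to the branch instructions javac
// typically emits; verify with `javap -c` on your own build.
class LoopConditionBytecode {

    static void countDown() {
        // The i > 0 test becomes a single-operand branch from the if<cond>
        // family (ifgt or ifle, depending on how the loop is laid out);
        // no constant needs to be pushed.
        for (int i = 100000; i > 0; i--) { }
    }

    static void countUp() {
        // The i < 100001 test pushes the constant (ldc 100001) and then uses a
        // two-operand branch from the if_icmp<cond> family (if_icmplt or if_icmpge).
        for (int i = 1; i < 100001; i++) { }
    }
}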

九局 2024-08-16 03:06:59

This is about the dumbest question I have ever seen. The loop body is empty. If the compiler is any good it will just emit no code at all. It doesn't do anything, can't throw an exception and doesn't modify anything outside of its scope.

Assuming your compiler isn't that smart, or that you actually didn't have an empty loop body:
The "backwards loop counter" argument makes sense for some assembly languages (it may make sense to the java byte code too, I don't know it specifically). However, the compiler will very often have the ability to transform your loop to use decrementing counters. Unless you have loop body in which the value of i is explicitly used, the compiler can do this transformation. So again you often see no difference.

赠我空喜 2024-08-16 03:06:59

I decided to bite and necro back the thread.

Both of the loops are ignored by the JVM as no-ops, so essentially even if one of the loops went to 10 and the other to 10,000,000, there would have been no difference.

Looping down to zero is another thing (it suits the jne instruction, but again, it's not compiled like that); the linked site is plain weird (and wrong).

This type of question doesn't fit any JVM (nor any other compiler that can optimize).

反差帅 2024-08-16 03:06:59

The loops are identical, except for one critical part:

i > 0;
and
i < 100001;

The greater-than-zero check is done by checking the computer's NZP bits (commonly known as the condition codes, or the Negative/Zero/Positive bits).

The NZP bits are set whenever an operation such as a load, AND, or addition is performed.

The greater-than check cannot directly use these bits (and therefore takes a bit longer...). The general solution is to negate one of the values (by doing a bitwise NOT and then adding 1) and then add it to the value being compared. If the result is zero, they're equal; if positive, the second value (the one that wasn't negated) is greater; if negative, the first value (the negated one) is greater. This check takes slightly longer than the direct NZP check.

I'm not 100% certain that this is the reason behind it, but it seems like a possible one...
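
A small Java illustration of that negate-and-add comparison (a hypothetical helper, not part of the answer; note that the naive subtraction can overflow for extreme int values, which is why the JDK's Integer.compare is not written this way):

class NegateAndAddCompare {

    // The sign of a - b (where -b is ~b + 1 in two's complement) gives the
    // ordering, mirroring the condition-code check described above.
    static int naiveCompare(int a, int b) {
        int diff = a + (~b + 1);     // i.e. a - b
        if (diff == 0) {
            return 0;                // equal
        }
        return diff > 0 ? 1 : -1;    // positive: a is greater; negative: b is greater
    }

    public static void main(String[] args) {
        System.out.println(naiveCompare(5, 3));   // 1
        System.out.println(naiveCompare(3, 5));   // -1
        System.out.println(naiveCompare(4, 4));   // 0
    }
}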

挽容 2024-08-16 03:06:59

The answer is a (as you probably found out on the website)

I think the reason is that the i > 0 condition for terminating the loop is faster to test.

月亮邮递员 2024-08-16 03:06:59

The bottom line is that for any non-performance critical application, the difference is probably irrelevant. As others have pointed out there are times when using ++i instead of i++ could be faster, however, especially in for loops any modern compiler should optimize that distinction away.

That said, the difference probably has to do with the underlying instructions that get generated for the comparison. Testing whether a value is equal to 0 is essentially a NOR gate across its bits, whereas testing whether a value is equal to an arbitrary constant requires loading that constant into a register and then comparing the two registers. (This would probably cost an extra gate delay or two.) That said, with pipelining and modern ALUs, I'd be surprised if the distinction were significant to begin with.
