Are bitwise operations still practical?
Wikipedia, the one true source of knowledge, states:

"On most older microprocessors, bitwise operations are slightly faster than addition and subtraction operations and usually significantly faster than multiplication and division operations. On modern architectures, this is not the case: bitwise operations are generally the same speed as addition (though still faster than multiplication)."
Is there a practical reason to learn bitwise operation hacks, or is it now just something you learn for theory and curiosity?
Comments (9)
Bitwise operations are worth studying because they have many applications. It is not their main use to substitute arithmetic operations. Cryptography, computer graphics, hash functions, compression algorithms, and network protocols are just some examples where bitwise operations are extremely useful.
The lines you quoted from the Wikipedia article just tried to give some clues about the speed of bitwise operations. Unfortunately, the article fails to provide good examples of applications.
Bitwise operations are still useful. For instance, they can be used to create "flags" using a single variable, and save on the number of variables you would use to indicate various conditions. Concerning performance on arithmetic operations, it is better to let the compiler do the optimization (unless you are some sort of guru).
They're useful for getting to understand how binary "works"; otherwise, no. In fact, I'd say that even if the bitwise hacks are faster on a given architecture, it's the compiler's job to make use of that fact — not yours. Write what you mean.
The only case where it makes sense to use them is if you're actually using your numbers as bitvectors. For instance, if you're modeling some sort of hardware and the variables represent registers.
If you want to perform arithmetic, use the arithmetic operators.
Depends what your problem is. If you are controlling hardware you need ways to set single bits within an integer.
Buy an OGD1 PCI board (open graphics card) and talk to it using libpci. http://en.wikipedia.org/wiki/Open_Graphics_Project
It is true that in most cases, when you multiply an integer by a constant that happens to be a power of two, the compiler optimises it to use a bit-shift. However, when the shift amount is also a variable, the compiler cannot deduce it, unless you explicitly use the shift operation.
Funny nobody saw fit to mention the ctype[] array in C/C++ - also implemented in Java. This concept is extremely useful in language processing, especially when using different alphabets, or when parsing a sentence.
ctype[] is an array of 256 short integers, and in each integer, there are bits representing different character types. For example, ctype['A'] - ctype['Z'] have bits set to show they are upper-case letters of the alphabet; ctype['0'] - ctype['9'] have bits set to show they are numeric. To see if a character x is alphanumeric, you can write something like 'if (ctype[x] & (UC | LC | NUM))', which is somewhat faster and much more elegant than writing 'if ('A' <= x && x <= 'Z' || ...'.
Once you start thinking bitwise, you find lots of places to use it. For instance, I had two text buffers. I wrote one to the other, replacing all occurrences of FINDstring with REPLACEstring as I went. Then for the next find-replace pair, I simply switched the buffer indices, so I was always writing from buffer[in] to buffer[out]. 'in' started as 0, 'out' as 1. After completing a copy I simply wrote 'in ^= 1; out ^= 1;'. And after handling all the replacements I just wrote buffer[out] to disk, not needing to know what 'out' was at that time.
If you think this is low-level, consider that certain mental errors such as deja-vu and its twin jamais-vu are caused by cerebral bit errors!
Of course (to me) the answer is yes: there can be practical reasons to learn them. The fact that nowadays, e.g., an add instruction on typical processors is as fast as an or/xor or an and just means that an add is as fast as, say, an or on those processors. The improvement in the speed of instructions like add, divide, and so on, just means that on those processors you can now use them and worry less about the performance impact; but it is true now, as in the past, that you usually won't change every add into bitwise operations to implement an add. That said, it may depend on which hacks: some hacks must likely now be considered educational and no longer practical; others could still have practical applications.
Working with IPv4 addresses frequently requires bit-operations to discover if a peer's address is within a routable network or must be forwarded onto a gateway, or if the peer is part of a network allowed or denied by firewall rules. Bit operations are required to discover the broadcast address of a network.
Working with IPv6 addresses requires the same fundamental bit-level operations, but because they are so long, I'm not sure how they are implemented. I'd wager money that they are still implemented using the bit operators on pieces of the data, sized appropriately for the architecture.