How do I read/write C# BigIntegers to/from a file?
In one of my classes, I have a routine that reads and writes an array of type Decimal, using BinaryReader / BinaryWriter's ReadDecimal() and Write() methods, to wit:
BinaryReader inputReader = new BinaryReader(File.OpenRead(BaseFilePath));
for (int x = 0; x < 6; x++) {
    for (int y = 0; y < m_Codes[x].GetLength(0); y++) {
        for (int z = 0; z < m_Codes[x].GetLength(1); z++) {
            m_Codes[x][y, z] = inputReader.ReadDecimal();
        }
    }
}
and
for (int x = 0; x < 6; x++) {
    for (int y = 0; y < m_Codes[x].GetLength(0); y++) {
        for (int z = 0; z < m_Codes[x].GetLength(1); z++) {
            outputWriter.Write(m_Codes[x][y, z]);
        }
    }
}
.. as you can see, only the first dimension is known at design time; the others vary at runtime.
In a perfect world, I would replace ReadDecimal()
with ReadBigInteger()
and something similar for the writing methods, but that does not seem to be supported in the Stream classes; I'm guessing this is because BigInteger can be of any length.
About the best thing I can think of is to "hand code" the BigInteger by converting it to a byte[] array, then writing the length of that array, then writing each byte in the array itself (and doing the reverse to read it in).
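That length-prefix idea can be sketched as follows. This is only a minimal illustration; the helper names WriteBigInteger / ReadBigInteger are mine, not part of any standard API:

```csharp
using System;
using System.IO;
using System.Numerics;

class BigIntegerIO
{
    // Write a 4-byte length prefix followed by the raw bytes of the BigInteger.
    public static void WriteBigInteger(BinaryWriter writer, BigInteger value)
    {
        byte[] bytes = value.ToByteArray();   // little-endian two's complement
        writer.Write(bytes.Length);           // length prefix
        writer.Write(bytes);                  // the payload itself
    }

    // Reverse the process: read the length, then exactly that many bytes.
    public static BigInteger ReadBigInteger(BinaryReader reader)
    {
        int length = reader.ReadInt32();
        byte[] bytes = reader.ReadBytes(length);
        return new BigInteger(bytes);
    }
}
```

The explicit prefix is needed because BinaryWriter.Write(byte[]) writes the bytes with no length information, so the reader would otherwise have no way to know where one value ends and the next begins.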
Two questions:
1) Is this a better way?
2) I'm primarily motivated by a desire to increase performance; does BigInteger even perform that much better than Decimal, if at all?
2 Answers
There's one fairly simple approach: call BigInteger.ToByteArray() to serialize, and the BigInteger(byte[]) constructor when deserializing. Admittedly that ends up copying the data, but I'd still expect it to be reasonably fast. What's of more concern to you: serialization performance or arithmetic performance?

As for any speed differences between BigInteger and decimal - you should test them for the operations you actually want to perform, being aware that they will behave differently (e.g. dividing 3 by 2 will obviously give a different answer for each type).

You could convert to a string (BigInteger.ToString()) and then write that string (as strings are directly supported by BinaryReader and BinaryWriter, this avoids needing to do any encoding/decoding yourself). Then convert it back with BigInteger.Parse().

To address the performance question: I think you'll need to measure the cases you are interested in.

For relatively small values (say abs(value) < 2^128) I would expect BigInteger's performance to be within a couple of orders of magnitude of long's performance (i.e. no more than ~500 times slower). But as BigInteger instances get larger, operations will take longer (more bits have to be manipulated). On the other hand, decimal should have reasonably consistent performance at all scales, but it could be very much slower than long for numbers in the intersection of their ranges (decimal is a much more complex representation, with scale factors and actual significant digits retained through calculations; I have no intuition for the effect of this complexity).

And remember: BigInteger is exact - it never rounds; decimal is approximate - data can fall off the end and be thrown away. It seems unlikely that any one business problem would be well served by both.
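The string round trip described above can be sketched like this. It is a minimal self-contained example; the MemoryStream setup is mine, purely to demonstrate the round trip:

```csharp
using System;
using System.IO;
using System.Numerics;

class StringRoundTrip
{
    static void Main()
    {
        BigInteger original = BigInteger.Parse("123456789012345678901234567890");

        using (var stream = new MemoryStream())
        {
            // BinaryWriter length-prefixes strings itself, so no manual framing is needed.
            using (var writer = new BinaryWriter(stream, System.Text.Encoding.UTF8, leaveOpen: true))
            {
                writer.Write(original.ToString());
            }

            stream.Position = 0;
            using (var reader = new BinaryReader(stream))
            {
                BigInteger roundTripped = BigInteger.Parse(reader.ReadString());
                Console.WriteLine(roundTripped == original);  // True
            }
        }
    }
}
```

The string form is larger on disk than the raw byte[] form (roughly one byte per decimal digit versus one byte per 8 bits), so if file size matters the ToByteArray() approach from the first answer is the more compact choice.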