How do I convert a double to a C# decimal in C++?
Given the representation of decimal I have --you can find it here for instance--, I tried to convert a double this way:
explicit Decimal(double n)
{
    DoubleAsQWord doubleAsQWord;
    doubleAsQWord.doubleValue = n;
    uint64 val = doubleAsQWord.qWord;
    // 0x1 << 31 overflows a 32-bit int; build the mask in 64 bits instead
    const uint64 topBitMask = (uint64)0x1 << 63;
    // grab the 63rd bit (the sign)
    bool isNegative = (val & topBitMask) != 0;
    // bias is 1023 = 2^(k-1) - 1, where k is 11 for double
    int32 exponent = (int32)((val >> 52) & 0x7FF) - 1023;
    // exclude both sign and exponent (<<12, >>12) and restore the implicit
    // leading bit of the normalized 53-bit mantissa
    uint64 mantissa = ((uint64)0x1 << 52) | ((val << 12) >> 12);
    // the scale must be a signed type before clamping:
    // a uint8 can never be < 0
    int32 scale = exponent + 11;
    if (scale > 11)
        scale = 11;
    else if (scale < 0)
        scale = 0;
    lo_ = ((isNegative ? -1 : 1) * n) * std::pow(10., scale);
    signScale_ = (isNegative ? 0x1 : 0x0) | (scale << 1);
    // will always be 0 since we cannot reach
    // 128-bit precision with a 64-bit double
    hi_ = 0;
}
The DoubleAsQWord type is used to "cast" from double to its uint64 representation:
union DoubleAsQWord
{
double doubleValue;
uint64 qWord;
};
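As an aside, reading the inactive member of a union is technically undefined behavior in C++; a `memcpy`-based bit cast is a well-defined alternative. A minimal sketch (the helper name is illustrative, not part of the original code):

```cpp
#include <cstdint>
#include <cstring>

// Well-defined alternative to union type-punning: copy the bytes.
// Compilers optimize this memcpy down to a single register move.
static std::uint64_t doubleBits(double d)
{
    std::uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);
    return bits;
}
```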
My Decimal type has these fields:
uint64 lo_;
uint32 hi_;
int32 signScale_;
All this stuff is encapsulated in my Decimal class. You can notice that I extract the mantissa even though I'm not using it. I'm still looking for a way to guess the scale accurately.
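For the scale question, one common empirical approach is to multiply by 10 until the fractional part disappears. A sketch under the assumption that hi_ stays zero, so the coefficient must fit in 64 bits (guessScale is a hypothetical helper, not part of the class above):

```cpp
#include <cmath>

// Sketch: pick the smallest scale such that n * 10^scale is an integer,
// capped at C# decimal's maximum scale of 28 and at what fits in 64 bits.
static int guessScale(double n)
{
    double v = std::fabs(n);
    int scale = 0;
    // 1.8e18 * 10 stays below 2^64 - 1 (~1.84e19)
    while (scale < 28 && v != std::floor(v) && v < 1.8e18)
    {
        v *= 10.0;
        ++scale;
    }
    return scale;
}
```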
This is purely empirical, and it seems to work under a stress test:
BOOST_AUTO_TEST_CASE( convertion_random_stress )
{
    const double EPSILON = 1e-6;
    srand(static_cast<unsigned>(time(0)));
    for (int i = 0; i < 10000; ++i)
    {
        double d1 = ((rand() % 10) % 2 == 0 ? -1 : 1)
            * (double)(rand() % 1000 + 1000.) / (double)(rand() % 42 + 2.);
        Decimal d(d1);
        double d2 = d.toDouble();
        double absError = fabs(d1 - d2);
        BOOST_CHECK_MESSAGE(
            absError <= EPSILON,
            "absError=" << absError << " with " << d1 << " - " << d2
        );
    }
}
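The stress test above relies on toDouble(); for reference, a minimal sketch consistent with the fields shown, written as a free function (assuming bit 0 of signScale_ holds the sign and the bits above it hold the scale, as in the constructor):

```cpp
#include <cmath>
#include <cstdint>

// Sketch of toDouble() over the three fields:
// value = (hi * 2^64 + lo) / 10^scale, negated when the sign bit is set.
static double decimalFieldsToDouble(std::uint64_t lo, std::uint32_t hi,
                                    std::int32_t signScale)
{
    int  scale    = (signScale >> 1) & 0xFF;
    bool negative = (signScale & 0x1) != 0;
    double magnitude = (double)hi * 18446744073709551616.0 /* 2^64 */
                     + (double)lo;
    double result = magnitude / std::pow(10.0, scale);
    return negative ? -result : result;
}
```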
Anyway, how would you convert from double to this decimal representation?
Comments (5)
I think you guys will be interested in an implementation of a C++ wrapper to the Intel Decimal Floating-Point Math Library:
C++ Decimal Wrapper Class
Intel DFP
What about using the VarR8FromDec function?
EDIT: This function is declared on Windows systems only. However, an equivalent C implementation is available with WINE, here: http://source.winehq.org/source/dlls/oleaut32/vartype.c
Perhaps you are looking for System::Convert::ToDecimal():
http://msdn.microsoft.com/en-us/library/a69w9ca0%28v=vs.80%29.aspx
Alternatively, you could try recasting the Double as a Decimal.
An example from the MSDN:
http://msdn.microsoft.com/en-us/library/aa326763%28v=vs.71%29.aspx
If you do not have access to the .Net routines then this is tricky. I have done this myself for my hex editor (so that users can display and edit C# Decimal values using the Properties dialog) - see http://www.hexedit.com for more information. Also the source for HexEdit is freely available - see my article at http://www.codeproject.com/KB/cpp/HexEdit.aspx.
Actually my routines convert between Decimal and strings but you can of course use sprintf to convert the double to a string first. (Also when you talk about double I think you explicitly mean IEEE 64-bit floating point format, though this is what most compilers/systems use nowadays.)
Note that there are a few gotchas if you want to handle precisely all valid Decimal values and return an error for any value that cannot be converted, since the format is not well documented. (The Decimal format is awful, really; e.g. the same number can have many representations.)
Here is my code that converts a string to a Decimal. Note that it uses the GNU Multiple Precision Arithmetic Library (functions that start with mpz_). The String2Decimal function obviously returns false if it fails for some reason, such as the value being too big. The parameter 'presult' must point to a buffer of at least 16 bytes, to store the result.
How about this:
1) sprintf the number into s
2) find the decimal point (strchr), store its position in idx
3) atoi: obtain the integer part easily; use a union to separate high/lo
4) use strlen - idx to obtain the number of digits after the point
sprintf may be slow, but you'll get the solution within 2 minutes of typing...
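A sketch of those four steps in plain C++ (function and variable names are illustrative; it uses strtoull rather than atoi so the whole 64-bit coefficient fits, and %.15g switches to exponent notation for very large or small values, which this sketch does not handle):

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <cstring>

// Sketch: print the double, locate the decimal point, and derive the
// 64-bit coefficient and the scale from the string.
static void doubleToCoefficientAndScale(double d, std::uint64_t* coeff,
                                        int* scale, bool* negative)
{
    char s[64];
    std::snprintf(s, sizeof s, "%.15g", std::fabs(d)); // 1) print the number
    *negative = d < 0;
    char* dot = std::strchr(s, '.');                   // 2) find the point
    if (dot != NULL)
    {
        *scale = (int)std::strlen(dot + 1);            // 4) digits after point
        std::memmove(dot, dot + 1, std::strlen(dot));  // drop the '.'
    }
    else
    {
        *scale = 0;
    }
    *coeff = std::strtoull(s, NULL, 10);               // 3) integer coefficient
}
```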