Hexadecimal and binary operations

Published 2024-11-19 00:00:15


I have a problem to solve and no idea how to approach it. My program receives a string containing a hex value (like DFF7DF) from the serial port. I need to convert it to binary form, discard the first four bits, take the fifth bit as a sign bit, and the next 12 bits as the value.

I need to get the value as a normal int.

I was able to write such a program in MATLAB, but I need C++ to be able to run it on my Linux ARM board.

Thanks in advance for your help!
Marcin
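Under one reading of the layout above (24 bits parsed big-endian, top nibble discarded, bit 19 as the sign, bits 18..7 as the magnitude — the question doesn't pin down the exact convention), a minimal sketch might be:

```cpp
#include <cstdlib>

// Hypothetical sketch: parse a 6-character hex string into 24 bits,
// discard the top nibble, read bit 19 as the sign bit and
// bits 18..7 as a 12-bit magnitude.
int decode(const char* hex)
{
    unsigned long raw = strtoul(hex, nullptr, 16); // e.g. 0xDFF7DF
    bool negative = (raw & 0x80000UL) != 0;        // bit 19, the 5th bit from the top
    int magnitude = (int)((raw >> 7) & 0xFFF);     // the next 12 bits
    return negative ? -magnitude : magnitude;
}

// decode("DFF7DF") -> -4079 under these assumptions
```

If the board actually counts bits from the least-significant end, the masks and shifts change accordingly.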


5 comments

攒眉千度 2024-11-26 00:00:15


You could do something like:

unsigned long value = strtoul("DFF7DF", NULL, 16);
value >>= 4; // discard the low four bits
printf("Minus sign: %s\n", value & 1 ? "yes" : "no");
printf("Value: %lu\n", (value & 0x1FFF) >> 1);

long newvalue = (value & 1 ? -1 : 1) * (long)((value & 0x1FFF) >> 1);
阳光下慵懒的猫 2024-11-26 00:00:15


The correct answer depends on a few conventions - is the hex string big-endian or little-endian? Do you start counting bits from the most significant or the least significant bit? Will there always be exactly 6 hex characters (24 bits)?

Anyway, here's one solution for a big-endian, always-24-bits string, counting from the most significant bit. I'm sure you'll be able to adapt it if some of my assumptions are wrong.

int HexToInt(char *hex)
{
    int result = 0;
    for(;*hex;hex++)
    {
        result <<= 4;
        if ( *hex >= '0' && *hex <= '9' )
            result |= *hex-'0';
        else
            result |= *hex-'A'+10;  /* 'A'..'F' map to 10..15 */
    }
    return result;
}

char *data = GetDataFromSerialPortStream();
int rawValue = HexToInt(data);
int sign = rawValue & 0x80000;  /* bit 19: the 5th bit from the top of 24 */
int value = (sign?-1:1) * ((rawValue >> 7) & 0xFFF);  /* bits 18..7 */
灯角 2024-11-26 00:00:15


The question is tagged C++ but everyone is using C strings. Here's how to do it with a C++ STL string:

#include <string>
#include <sstream>
#include <iomanip>

std::string s("DFF7DF");

int val;
std::istringstream iss(s);
iss >> std::setbase(16) >> val;

int result = val & 0xFFF;  // take bottom 12 bits

if (val & 0x1000)    // assume sign + magnitude encoding
  result = - result;

(The second "bit-fiddling" part isn't clear from your question. I'll update the answer if you clarify it.)
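If the device actually emits the 13-bit field in two's complement rather than sign + magnitude (both conventions are common for sensor output), the field can be sign-extended instead. A hypothetical alternative:

```cpp
// Hypothetical alternative to sign + magnitude: interpret the low
// 13 bits as a two's-complement field and sign-extend to a full int.
int signExtend13(int raw)
{
    int field = raw & 0x1FFF;   // keep the low 13 bits
    if (field & 0x1000)         // top bit of the 13-bit field set?
        field -= 0x2000;        // sign-extend by subtracting 2^13
    return field;
}
```

Which interpretation is right depends on the device's data sheet.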

他夏了夏天 2024-11-26 00:00:15


You have to check your machine type for endian-ness but this is basically the idea.

const char * string = "DFF7DF";
const unsigned char second_nibble = hex_to_int (string[1]);
const unsigned char third_nibble  = hex_to_int (string[2]);
const unsigned char fourth_nibble = hex_to_int (string[3]);
const unsigned char fifth_nibble  = hex_to_int (string[4]);

int sign = second_nibble & (1<<3) ? -1 : 1;

unsigned value = unsigned (second_nibble & ~(1<<3)) << (12-3); // Next three bits are in the second nibble
value |= (unsigned(third_nibble)<<5) | (unsigned(fourth_nibble)<<1) | ((fifth_nibble>>3)&1); // The remaining nine bits span the next three nibbles.

Make sure you perform your bit-shift operators on unsigned types.
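The snippet above calls a `hex_to_int` helper that isn't shown; a minimal version (assuming upper- or lowercase hex digits, no input validation) could be:

```cpp
#include <cctype>

// Minimal sketch of the hex_to_int helper assumed above:
// maps one hex digit character to its numeric value 0..15.
unsigned char hex_to_int(char c)
{
    if (c >= '0' && c <= '9')
        return (unsigned char)(c - '0');
    return (unsigned char)(std::toupper((unsigned char)c) - 'A' + 10);
}
```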

2024-11-26 00:00:15


Here's a pattern to follow:

#include <string>
#include <sstream>
#include <iostream>
using namespace std;

const char* s = "11";
istringstream in(s);   // the char* converts to std::string implicitly
unsigned i=0;
in >> hex >> i;
cout << "i=" << dec << i << endl;

The rest is just bit-shifting.
