UTF-16 to ASCII conversion in Java


Having ignored it all this time, I am currently forcing myself to learn more about unicode in Java. There is an exercise I need to do about converting a UTF-16 string to 8-bit ASCII. Can someone please enlighten me how to do this in Java? I understand that you can't represent all possible unicode values in ASCII, so in this case I want a code which exceeds 0xFF to be merely added anyway (bad data should also just be added silently).

Thanks!


5 Answers

猫卆 2024-08-13 06:03:23

You can use java.nio for an easy solution:

// first encode the utf-16 string as a ByteBuffer
ByteBuffer bb = Charset.forName("utf-16").encode(CharBuffer.wrap(utf16str));
// then decode those bytes as US-ASCII
CharBuffer ascii = Charset.forName("US-ASCII").decode(bb);
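
For reference, here is a slightly different java.nio sketch (not the two-step encode/decode above): it uses a CharsetEncoder to go straight from the in-memory string to US-ASCII bytes, substituting '?' for anything unmappable. The method name and the utf16str parameter are just placeholders for illustration.

import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

static byte[] toAsciiBytes(String utf16str) throws CharacterCodingException {
    // encode the (internally UTF-16) string straight to US-ASCII bytes,
    // replacing anything US-ASCII cannot represent with '?' instead of throwing
    CharsetEncoder enc = StandardCharsets.US_ASCII.newEncoder()
            .onMalformedInput(CodingErrorAction.REPLACE)
            .onUnmappableCharacter(CodingErrorAction.REPLACE);
    ByteBuffer bb = enc.encode(CharBuffer.wrap(utf16str));
    byte[] bytes = new byte[bb.remaining()];
    bb.get(bytes);
    return bytes;
}
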
暗喜 2024-08-13 06:03:23

How about this:

String input = ... // my UTF-16 string
StringBuilder sb = new StringBuilder(input.length());
for (int i = 0; i < input.length(); i++) {
    char ch = input.charAt(i);
    if (ch <= 0xFF) {
        sb.append(ch);
    }
}

byte[] ascii = sb.toString().getBytes("ISO-8859-1"); // aka LATIN-1

This is probably not the most efficient way to do this conversion for large strings since we copy the characters twice. However, it has the advantage of being straightforward.

BTW, strictly speaking there is no such character set as 8-bit ASCII. ASCII is a 7-bit character set. LATIN-1 is the nearest thing there is to an "8-bit ASCII" character set (and block 0 of Unicode is equivalent to LATIN-1) so I'll assume that's what you mean.
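
If you want to convince yourself of the "block 0 of Unicode is equivalent to LATIN-1" claim, a quick check along these lines (a sketch; it assumes java.nio.charset.StandardCharsets is imported) never reports a mismatch:

// every byte value 0x00..0xFF decodes through ISO-8859-1 to the same Unicode code point
for (int i = 0; i <= 0xFF; i++) {
    String s = new String(new byte[] { (byte) i }, StandardCharsets.ISO_8859_1);
    if (s.charAt(0) != (char) i) {
        System.out.println("mismatch at " + i);   // never reached
    }
}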

EDIT: in the light of the update to the question, the solution is even simpler:

String input = ... // my UTF-16 string
byte[] ascii = new byte[input.length()];
for (int i = 0; i < input.length(); i++) {
    ascii[i] = (byte) input.charAt(i);
}

This solution is more efficient. Since we now know how many bytes to expect, we can preallocate the byte array and copy in the (truncated) characters without using a StringBuilder as an intermediate buffer.

However, I'm not convinced that dealing with bad data in this way is sensible.

EDIT 2: there is one more obscure "gotcha" with this. Unicode actually defines code points (characters) to be "roughly 21 bit" values ... 0x000000 to 0x10FFFF ... and uses surrogates to represent codes > 0x00FFFF. In other words, a Unicode codepoint > 0x00FFFF is actually represented in UTF-16 as two "characters". Neither my answer nor any of the others takes account of this (admittedly esoteric) point. In fact, dealing with codepoints > 0x00FFFF in Java is rather tricky in general. This stems from the fact that 'char' is a 16-bit type and String is defined in terms of 'char'.
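
For what it's worth, here is a small sketch (not part of the original answer; it assumes java.nio.charset.StandardCharsets is imported and uses a made-up sample string) that iterates code points rather than chars, so a surrogate pair counts as one character and is replaced as a unit:

String input = "A\u00e9\uD83D\uDE00";    // hypothetical sample: 'A', 'é', one emoji
StringBuilder sb = new StringBuilder();
input.codePoints()                        // merges surrogate pairs into single code points
     .forEach(cp -> sb.append(cp <= 0x7F ? (char) cp : '?'));
byte[] ascii = sb.toString().getBytes(StandardCharsets.US_ASCII);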

EDIT 3: maybe a more sensible solution for dealing with unexpected characters that don't convert to ASCII is to replace them with the standard replacement character:

String input = ... // my UTF-16 string
byte[] ascii = new byte[input.length()];
for (int i = 0; i < input.length(); i++) {
    char ch = input.charAt(i);
    ascii[i] = (ch <= 0xFF) ? (byte) ch : (byte) '?';
}
腻橙味 2024-08-13 06:03:23

Java internally represents strings in UTF-16. If a String object is what you are starting with, you can encode using String.getBytes(Charset c), where you might specify US-ASCII (which can map code points 0x00-0x7f) or ISO-8859-1 (which can map code points 0x00-0xff, and may be what you mean by "8-bit ASCII").

As for adding "bad data"... ASCII or ISO-8859-1 strings simply can't represent values outside of a certain range. Note that getBytes(Charset) does not silently drop characters it cannot represent in the destination character set; it replaces each of them with the charset's default replacement byte, which is '?' for both of these charsets.
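
A small illustration of the difference (the sample string is made up; it assumes java.nio.charset.StandardCharsets is imported):

String s = "caf\u00e9";                                   // 'é' is U+00E9
byte[] ascii  = s.getBytes(StandardCharsets.US_ASCII);    // 'é' becomes the replacement byte '?'
byte[] latin1 = s.getBytes(StandardCharsets.ISO_8859_1);  // 'é' maps cleanly to 0xE9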

许你一世情深 2024-08-13 06:03:23

Since this is an exercise, it sounds like you need to implement this manually. You can think of an encoding (e.g. UTF-16 or ASCII) as a lookup table that matches a sequence of bytes to a logical character (a codepoint).

Java uses UTF-16 strings, which means that any given codepoint can be represented in one or two char variables. Whether you want to handle the two-char surrogate pairs depends on how likely you think your application is to encounter them (see the Character class for detecting them). ASCII only uses the first 7 bits of an octet (byte), so the valid range of values is 0 to 127. UTF-16 uses identical values for this range (they're just wider). This can be confirmed with this code:

Charset ascii = Charset.forName("US-ASCII");
byte[] buffer = new byte[1];
char[] cbuf = new char[1];
for (int i = 0; i <= 127; i++) {
  buffer[0] = (byte) i;
  cbuf[0] = (char) i;
  String decoded = new String(buffer, ascii);
  String utf16String = new String(cbuf);
  if (!utf16String.equals(decoded)) {
    throw new IllegalStateException();
  }
  System.out.print(utf16String);
}
System.out.println("\nOK");

Therefore, you can convert UTF-16 to ASCII by casting a char to a byte.
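
As a rough sketch of that cast-based conversion (the sample input is hypothetical; it assumes java.nio.charset.StandardCharsets is imported, uses Character.isSurrogate to sidestep the two-char code points mentioned above, and replaces anything above 0x7F with '?'):

String input = "A\u00e9\uD83D\uDE00";      // hypothetical sample input
StringBuilder sb = new StringBuilder(input.length());
for (int i = 0; i < input.length(); i++) {
  char ch = input.charAt(i);
  if (Character.isSurrogate(ch)) {
    continue;                              // half of a two-char code point; handle as needed
  }
  sb.append(ch <= 0x7F ? ch : '?');        // keep 7-bit ASCII, replace the rest
}
byte[] ascii = sb.toString().getBytes(StandardCharsets.US_ASCII);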

You can read more about Java character encoding here.

假扮的天使 2024-08-13 06:03:23

Just to optimize the accepted answer so that you pay no penalty when the string already contains only ASCII characters, here is an optimized version. Thanks @stephen-c

public static String toAscii(String input) {
  final int length = input.length();
  int ignoredChars = 0;
  byte[] ascii = null;                     //-- allocated lazily, only if something must be dropped
  for (int i = 0; i < length; i++) {
    char ch = input.charAt(i);
    if (ch > 0xFF) {
      //-- ignore this non-ascii character
      ignoredChars++;
      if (ascii == null) {
        //-- first non-ascii character: copy everything seen so far into a new byte array
        ascii = new byte[length - 1];      //-- we know the result will be shorter by at least 1
        for (int j = 0; j < i; j++) {
          ascii[j] = (byte) input.charAt(j);
        }
      }
    } else if (ascii != null) {
      ascii[i - ignoredChars] = (byte) ch;
    }
  }
  //-- (ignoredChars == 0) is the same as (ascii == null) i.e. no non-ascii characters found
  //-- decode with ISO-8859-1 so that bytes 0x80-0xFF survive regardless of the platform default charset
  return ignoredChars == 0
      ? input
      : new String(Arrays.copyOf(ascii, length - ignoredChars), StandardCharsets.ISO_8859_1);
}
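
A possible usage sketch (the sample string is hypothetical):

String mixed = "caf\u00e9 \u4e16\u754c";   // "café 世界"
System.out.println(toAscii(mixed));        // prints "café "; the two CJK characters are silently dropped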