Converting from 5.0.27 to 5.1.41, getting a duplicate key error (1062)
I have a database that's currently running on a 5.0.27 server. I want to move to a new 5.1.41 server.
I mysqldump'd all the files. When restoring, I get an error
ERROR 1062 (23000) at line 21: Duplicate entry 'weiß' for key 'title'
I've narrowed the failure down to this script, which I can run and it fails:
--
-- Table structure for table `word`
--
set names utf8;
DROP TABLE IF EXISTS `word`;
CREATE TABLE `word`
(
`wordid` int(10) unsigned NOT NULL auto_increment,
`title` char(50) NOT NULL default '',
PRIMARY KEY (`wordid`),
UNIQUE KEY `title` (`title`)
) ENGINE=MyISAM AUTO_INCREMENT=280707 DEFAULT CHARSET=utf8 COLLATE=utf8_general_ci;
--
-- Dumping data for table `word`
--
LOCK TABLES `word` WRITE;
INSERT INTO `word` VALUES
(198036,'weis'),
(241473, unhex('776569C39F'));
UNLOCK TABLES;
EDIT - changed to UNHEX.
I've checked and rechecked all the charset and collation variables between the two servers, and they look identical. Even if they weren't, I'm specifying the collation myself.
Any clues as to what I'm doing wrong here?
EDIT: here's the command I'm using to dump the database:
mysqldump --add-drop-table --add-locks --disable-keys --lock-tables --quick -uusername -ppassword database > filename
and to load
mysql -D$MYSQL_DB -u$MYSQL_USER -p$MYSQL_PASSWD < filename
How can I check the collations for the client connections?
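For reference, the character set and collation variables on each server, including the ones that govern the client connection, can be listed from a mysql prompt (a quick sketch):

SHOW VARIABLES LIKE 'character_set%';
SHOW VARIABLES LIKE 'collation%';
-- collation_connection, character_set_client and character_set_connection
-- are the ones that apply to the current client session.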
4 Answers
The specific problem is that in utf8_general_ci, 'weis' and 'weiß' are equivalent. If you want 'weiß' to be equal to 'weiss', then you should use utf8_unicode_ci. That will fix the problem on the import side (unless you have 'weiss' in the database, but then you really do have a duplicate).
At a guess, the original table has utf8_unicode_ci set, and you didn't notice the difference. If that's not true, I have no idea how your table got into the state it's in -- but switching to the right collation should solve your problem.
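If that is the case, only the table definition in the dump needs to change before reloading. A sketch based on the CREATE TABLE from the question, with just the collation swapped:

DROP TABLE IF EXISTS `word`;
CREATE TABLE `word`
(
`wordid` int(10) unsigned NOT NULL auto_increment,
`title` char(50) NOT NULL default '',
PRIMARY KEY (`wordid`),
UNIQUE KEY `title` (`title`)
) ENGINE=MyISAM AUTO_INCREMENT=280707 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

Keep in mind that utf8_unicode_ci treats 'weiß' as equal to 'weiss', so a row containing 'weiss' would still collide on the unique key.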
From a buddy on LiveJournal, I found out it's a bug "fix" between 5.0 and 5.1: They changed the collation. If you read the bug report, they actually broke it (weis and weiss should not be equivalent). But they're not going to unbreak it. So I will either have to change the collation (as Dave Orr suggests), or manually edit my data.
http://bugs.mysql.com/bug.php?id=27877
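The behaviour change is easy to reproduce on its own (a sketch; the hex literal is the UTF-8 encoding of 'weiß' taken from the dump in the question):

SET NAMES utf8;
-- On 5.1.41 this returns 1 (the strings compare equal, which is what trips
-- the unique key); on 5.0.27 it returns 0:
SELECT CONVERT(unhex('776569C39F') USING utf8) = 'weis' COLLATE utf8_general_ci;
-- Under utf8_unicode_ci, ß is treated like 'ss' instead, so this returns 0:
SELECT CONVERT(unhex('776569C39F') USING utf8) = 'weis' COLLATE utf8_unicode_ci;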
Are you using the mysqldump & mysql from 5.1 or 5.0? You could try different combos.
Could the sourceDB contain duplicate values in the "unique" column?
Remove the "UNIQUE KEY" constraint and check which records are duplicated in the targetDB.
This could give some insight into the problem.
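A sketch of that check, assuming the data is loaded into the target table with the unique key removed (or the UNIQUE KEY line deleted from the dump):

ALTER TABLE `word` DROP KEY `title`;
-- Reload the INSERTs, then look for titles that collapse to the same value
-- under the column's collation; on 5.1 with utf8_general_ci, 'weis' and
-- 'weiß' fall into the same group:
SELECT `title`, COUNT(*) AS occurrences
FROM `word`
GROUP BY `title`
HAVING COUNT(*) > 1;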
Specify the character set with the --default-character-set option. It's important.
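Applied to the commands in the question, that would look roughly like this (a sketch; utf8 is assumed to be the character set the data is actually stored in):

mysqldump --default-character-set=utf8 --add-drop-table --add-locks --disable-keys --lock-tables --quick -uusername -ppassword database > filename
mysql --default-character-set=utf8 -D$MYSQL_DB -u$MYSQL_USER -p$MYSQL_PASSWD < filename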