Bulk insert of a file with a text field that contains Enters (carriage returns)
I'm having trouble bulk inserting a file that contains a field of data type text, and that field holds Enters (embedded line breaks). The delimiter for the fields is the pipe "|" and the row terminator is "|\n".
I get a truncation error for the fields next to the text field. I think the bulk insert treats the Enters in the second field as the start of the next rows.
DECLARE @sql varchar(2000)
PRINT 'xyz table'
SET @sql =
'BULK INSERT xyz
FROM ''\\' + @@servername + '\Import\xyz.txt''
WITH
(
DATAFILETYPE = ''char'',
FIELDTERMINATOR = ''|'',
ROWTERMINATOR = ''|\n'',
TABLOCK,
ROWS_PER_BATCH = 2000,
MAXERRORS = 1000
)
'
PRINT @sql
EXECUTE (@sql)
PRINT ''
GO
Here is a sample of the data:
467507|08-20-09\
[8:55:03 AM] *** miked@wkaiser-6793 is Not Available [Ext. away]\(CR)
[9:00:57 AM] *** miked@wkaiser-6793 is Online [Online]\(CR)
[9:01:00 AM] <jeffp> Howdy. Time slip update game! Update TS#467493 & 467507 with a (CR)comment and see if you win! [9:01:30 AM] *** miked@wkaiser-6793 is Away [Away]\
\(CR)
08-18-09\(CR)
[10:13:15 AM] *** miked@wkaiser-6793 is Online [Online]\(CR)
[10:13:59 AM] <jeffp> Howdy; welcome back from lunch.. Just pinging you for an update in TS#467493 & 467507. Since 467493 is pri9, want a stock e-mail to go out to the customer to get them moving?\(CR)
[10:14:47 AM] <miked@wkaiser-6793> thats ok i got it\(CR)
[10:14:53 AM] <jeffp> Aiight.|2009-08-18 00:00:00|2009-08-20 00:00:00|JDP|JDP|
The table schema is:
create table xyz
(
xyz_id VARCHAR(200), --INT TO VARCHAR
notes text,
create_date varchar(32), --DATETIME
create_user varchar(12),
modify_date varchar(32), --DATETIME
modify_user varchar(12)
)
As you can see (well, not really see), there are Enters (marked with (CR)), and the bulk insert is confusing them with new rows.
Any help will be appreciated.
Thanks
2 Answers
EDIT: you don't need a format file:
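Something along these lines (a sketch only, using the asker's xyz table and a hypothetical \\MyServer\Import path):
-- Sketch: bulk insert without a format file, with the pipe-plus-line-break
-- row terminator so that bare CR/LF characters inside the notes field are
-- not treated as end-of-row.
BULK INSERT xyz
FROM '\\MyServer\Import\xyz.txt'      -- hypothetical path; substitute your own
WITH
(
DATAFILETYPE    = 'char',
FIELDTERMINATOR = '|',
ROWTERMINATOR   = '|\r\n',            -- pipe followed by CRLF; see the variants below
TABLOCK,
ROWS_PER_BATCH  = 2000,
MAXERRORS       = 1000
)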
You may need to play with '|\n' (LF) vs '|\r\n' (CRLF) vs '|\r' (CR) as the row terminator, depending on whether the file is Unix, DOS or Mac-like. /EDIT
It can't be done without a format file. So, create a format file:
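For example, a minimal non-XML format file sketch for the xyz table; the column widths, collation and the final "|\r\n" terminator are assumptions, so adjust them to match the actual file (the 9.0 header is for SQL Server 2005; use 8.0 for SQL Server 2000):
9.0
6
1  SQLCHAR  0  200         "|"       1  xyz_id       SQL_Latin1_General_CP1_CI_AS
2  SQLCHAR  0  2147483647  "|"       2  notes        SQL_Latin1_General_CP1_CI_AS
3  SQLCHAR  0  32          "|"       3  create_date  SQL_Latin1_General_CP1_CI_AS
4  SQLCHAR  0  12          "|"       4  create_user  SQL_Latin1_General_CP1_CI_AS
5  SQLCHAR  0  32          "|"       5  modify_date  SQL_Latin1_General_CP1_CI_AS
6  SQLCHAR  0  12          "|\r\n"   6  modify_user  SQL_Latin1_General_CP1_CI_AS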
Then:
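A sketch of the BULK INSERT pointing at that format file (paths are hypothetical):
BULK INSERT xyz
FROM '\\MyServer\Import\xyz.txt'               -- hypothetical data file path
WITH
(
FORMATFILE = '\\MyServer\Import\xyz.fmt',      -- the format file sketched above
TABLOCK,
MAXERRORS  = 1000
)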
Or, in SQL 2005+:
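Presumably this refers to OPENROWSET(BULK ...), which can also consume the format file; a sketch under that assumption:
INSERT INTO xyz (xyz_id, notes, create_date, create_user, modify_date, modify_user)
SELECT xyz_id, notes, create_date, create_user, modify_date, modify_user
FROM OPENROWSET(BULK '\\MyServer\Import\xyz.txt',           -- hypothetical path
                FORMATFILE = '\\MyServer\Import\xyz.fmt') AS src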
The data is inconsistent. You have some rows ending with \(cr) after one field, and others ending with the | column separator. Either that, or you have inconsistent column separators: some \(cr), some |. If so, then you'll need a format file to deal with each separate "column".