SQL Server: The bulk load failed. The column is too long in the data file for row 1, column 1

Posted 2024-12-21 02:08:12

Can someone please help me here? I've been looking at this for a couple of hours now and getting nowhere.

I created a table in SQL Express 2008 R2 using the following script:

CREATE TABLE Features
(
ID int not null identity(1,1),
StopID varchar(10),
Code int,
Name varchar(100),
Summary varchar(200),
Lat real,
Lon real,
street varchar(100),
city varchar(50),
region varchar(50),
postcode varchar(10),
country varchar(20),
zone_id varchar(20),
the_geom geography,

 CONSTRAINT [PK_Features] PRIMARY KEY CLUSTERED 
(
    [ID] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

Then I created the following format file against my database table using the bcp tool:

10.0
12
1       SQLCHAR             2       100     ","    2     StopID               Latin1_General_CI_AS
2       SQLINT              1       4       ","    3     Code                 ""
3       SQLCHAR             2       100     ","    4     Name                 Latin1_General_CI_AS
4       SQLCHAR             2       200     ","    5     Summary              Latin1_General_CI_AS
5       SQLFLT4             1       4       ","    6     Lat                  ""
6       SQLFLT4             1       4       ","    7     Lon                  ""
7       SQLCHAR             2       100     ","    8     street               Latin1_General_CI_AS
8       SQLCHAR             2       50      ","    9     city                 Latin1_General_CI_AS
9       SQLCHAR             2       50      ","    10    region               Latin1_General_CI_AS
10      SQLCHAR             2       10      ","    11    postcode             Latin1_General_CI_AS
11      SQLCHAR             2       20      ","    12    country              Latin1_General_CI_AS
12      SQLCHAR             2       20      "\r\n"    13    zone_id              Latin1_General_CI_AS

This file has been modified to remove the ID and the_geom fields, as these columns are not in my data file.

Then I tried to bulk insert a one-line CSV with the following content:

a,8,S,,45.439869,-75.695839,,,,,,
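
The BULK INSERT statement itself is roughly of this shape (a sketch; the file paths are placeholders):

-- Sketch only: actual paths differ
BULK INSERT dbo.Features
FROM 'C:\data\stops.csv'
WITH (FORMATFILE = 'C:\data\features.fmt');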

and all I get is:

Msg 4866, Level 16, State 7, Line 35
The bulk load failed. The column is too long in the data file for row 1, column 1. Verify that the field terminator and row terminator are specified correctly.
Msg 7399, Level 16, State 1, Line 35
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 35
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".

Any pointers would help here, as I cannot figure this one out.

Comments (5)

俯瞰星空 2024-12-28 02:08:12

The problem was caused by the default prefix length settings in my format file. The data file I am importing was not created using bcp, so I had to set the prefix length of all the fields to 0, as follows:

10.0
12
1       SQLCHAR             0       100     ","    2     StopID               Latin1_General_CI_AS
2       SQLINT              0       4       ","    3     Code                 ""
3       SQLCHAR             0       100     ","    4     Name                 Latin1_General_CI_AS
4       SQLCHAR             0       200     ","    5     Summary              Latin1_General_CI_AS
5       SQLFLT4             0       4       ","    6     Lat                  ""
6       SQLFLT4             0       4       ","    7     Lon                  ""
7       SQLCHAR             0       100     ","    8     street               Latin1_General_CI_AS
8       SQLCHAR             0       50      ","    9     city                 Latin1_General_CI_AS
9       SQLCHAR             0       50      ","    10    region               Latin1_General_CI_AS
10      SQLCHAR             0       10      ","    11    postcode             Latin1_General_CI_AS
11      SQLCHAR             0       20      ","    12    country              Latin1_General_CI_AS
12      SQLCHAR             0       20      "\r\n"    13    zone_id              Latin1_General_CI_AS

With this change, the import was successful.
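
As an aside, having bcp generate the format file in character mode produces prefix length 0 and SQLCHAR fields from the start. A sketch, with placeholder database and server names:

REM Hypothetical: regenerate the format file in character mode (-c)
bcp MyDatabase.dbo.Features format nul -c -t, -f Features.fmt -T -S .\SQLEXPRESS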

謸气贵蔟 2024-12-28 02:08:12

Try this:

ROWTERMINATOR = '0x0a'
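
For context, that option goes in the WITH clause of the BULK INSERT and tells SQL Server the rows end with a bare line feed rather than \r\n (a sketch with placeholder table and file names):

-- Hypothetical example for a CSV with Unix-style line endings
BULK INSERT dbo.Features
FROM 'C:\data\stops.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '0x0a');
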
所有深爱都是秘密 2024-12-28 02:08:12

I encountered this problem today, but only for specific rows with columns whose text value exceeds 8000 characters. Even though my FMT file used SQLCHAR 0 0, which indicates max length, somewhere along the pipeline there is a maximum of 8000.

I'm using Azure SQL and trying to read CSVs in an Azure Blob container.
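
If the 8000-character cap is the issue, it is also worth confirming the target column is varchar(max) rather than a fixed length; a minimal sketch with hypothetical table and column names:

-- Hypothetical: widen a column that must hold more than 8000 characters
ALTER TABLE dbo.MyTable ALTER COLUMN LongText varchar(max);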

悲凉≈ 2024-12-28 02:08:12

For what it's worth, I had the same problem because of a conflict between the expected and actual format of the date field in my CSV. I changed the date format in my CSV, and it worked.
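
One way to handle such a mismatch without editing the CSV may be to set the session's date interpretation before the load; a sketch, assuming day-first dates and placeholder table and file names:

-- Hypothetical: interpret '21/12/2024' as 21 December 2024 during the load
SET DATEFORMAT dmy;
BULK INSERT dbo.MyTable
FROM 'C:\data\rows.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');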

会傲 2024-12-28 02:08:12

If you are generating your format file using bcp, be aware of the -n and -c flags.

My problem was that I had the -n (native) flag specified, while my data file was only compatible with the -c (character) flag.

Format file generation:

bcp <DATABASE>.<SCHEMA>.<TABLE> format nul -t ; -T -c -S <IP> -f filename.fmt

Data file generation:

bcp "select * from table" queryout "filename.dat" -c -C 65001 -t ; -S <IP> -d <DATABASE> -T

Bulk import:

BULK INSERT dbo.table 
FROM 'filename.dat'
WITH (FORMATFILE = 'filename.fmt');