Bulk insert from CSV fails when the last row has fewer columns

Posted 2025-02-06 03:51:02


I'm trying to bulk insert into a table with the code below:

DROP TABLE #temp_FeirasLivres

CREATE TABLE #temp_FeirasLivres
(
ID INT null,
LONG BIGINT null,
LAT  BIGINT null,
SETCENS BIGINT null,
AREAP BIGINT null,
CODDIST INT null,
DISTRITO NVARCHAR(100) null,
CODSUBPREF INT null,
SUBPREFE NVARCHAR(200) null,
REGIAO5 NVARCHAR(200) null,
REGIAO8 NVARCHAR(200) null,
NOME_FEIRA NVARCHAR(200) null,
REGISTRO NVARCHAR(50) null,
LOGRADOURO NVARCHAR(100) null,
NUMERO NVARCHAR(200) null default('S/N'),
BAIRRO NVARCHAR(50) null default(''),
REFERENCIA NVARCHAR(100) null
)


BULK INSERT #temp_FeirasLivres
FROM 'DEINFO_AB_FEIRASLIVRES_2014.csv'
WITH
(
FORMAT = 'CSV',
FirstRow = 1
);

The file has 880 rows, but I'll show enough here to validate what I'm saying:

879,-46610849,-23609187,355030827000078,3550308005044,27,CURSINO,13,IPIRANGA,Sul,Sul 1,CERRACAO,4025-8,RUA LINO GUEDES,109.000000,MOINHO VELHO,ALTURA DA VERGUEIRO 7450
880,-46450426,-23602582,355030833000022,3550308005274,32,IGUATEMI,30,SAO MATEUS,Leste,Leste 2,JD.BOA ESPERANCA,5171-3,RUA IGUPIARA,S/N,JD BOA ESPERANCA

The error is that the last row has fewer columns than the other rows (there is no "," after the previous value).

If I put a "," after BOA ESPERANCA, it works, but I want to know if there is anything I can do on source to save time from always opening and fixing the CSV file.

PS: The last row has a line break after it, and I've tried ROWTERMINATOR in the bulk options, but I can try again.


Comments (1)

娇纵 2025-02-13 03:51:03


As @Larnu said in the comments:

SQL Server expects the file to be well formed; that means it has to have the same number of columns in every row. If the file is malformed (which it appears to be), you'll need to fix the file first, and then BULK INSERT it.

So fixing the file first is the best answer.
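Since BULK INSERT won't tolerate the short row, one way to avoid fixing the file by hand each time is a small preprocessing step that pads every row to the expected column count (17, from the #temp_FeirasLivres definition) before loading. A minimal Python sketch — the column count comes from the question's table; the function name and everything else is illustrative:

```python
import csv
import io

EXPECTED_COLS = 17  # number of columns in #temp_FeirasLivres

def pad_short_rows(csv_text, expected=EXPECTED_COLS):
    """Return CSV text where every non-empty row has at least
    `expected` fields, padding short rows with empty strings."""
    out = io.StringIO()
    reader = csv.reader(io.StringIO(csv_text))
    writer = csv.writer(out, lineterminator="\n")
    for row in reader:
        if row and len(row) < expected:
            row = row + [""] * (expected - len(row))
        writer.writerow(row)
    return out.getvalue()
```

Run this over DEINFO_AB_FEIRASLIVRES_2014.csv (read the file, write the padded text back) before the BULK INSERT; the empty trailing fields then load as empty strings / defaults instead of raising the column-count error.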
