Loading data from SQL Server into Snowflake using Azure Data Factory



I have the following table in SQL Server which I am trying to load into Snowflake using Azure Data Factory:

It has 7 columns:

ID, StartDate, Assigner, Priority, Operation, OldValue, NewValue (this last column is causing the issue)

(screenshot of the table omitted)

The NewValue column has double quotes in it, which might be the cause: one column may be getting treated as multiple columns. I am not sure what the exact issue is.
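
For example, if a NewValue such as Status: "Closed" were staged to CSV without its inner quotes escaped, the row might look like this (the data here is hypothetical):

    1,2021-01-01,John,High,UPDATE,"old text","Status: "Closed""

The quote just before Closed terminates the quoted field early, so the loader expects the record delimiter \r\n there but finds the C of Closed instead, which would be consistent with the error below.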

I am getting the following error:

> ErrorCode=UserErrorOdbcOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=ERROR [22000] Found character 'C' instead of record delimiter '\r\n'
      File '93fcbeba-41f7-4824-b8f6-eda4c2965e00/SnowflakeImportCopyCommand/data_93fcbeba-41f7-4824-b8f6-eda4c2965e00_fa1577fb-8544-44a6-8a60-65bd8b2419c5.txt', line 48, character 39
      Row 1 starts at line 2, column "myTable"["$7":7]

Anyone familiar with this error? Guidance is appreciated. Thanks.


只怪假的太真实 2025-02-18 06:48:08


See the following link:
https://learn.microsoft.com/en-us/answers/questions/559372/delimiter-error-in-copy-actibity-of-adf.html

From that reference:

The issue could be because the first stage (source -> staging blob) uses CSV format, and the format serializer failed to escape the escape character in the data, eventually causing the staged data to be invalid.
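
In standard CSV (RFC 4180), an embedded double quote must itself be escaped, by doubling it, inside a quoted field. With hypothetical values, the staged data would need to change from the first form to the second:

    staged as-is (invalid):    "Status: "Closed""
    properly escaped (valid):  "Status: ""Closed"""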

You could try the following workaround on your end.

Manually edit the JSON payload of your pipeline: go to properties -> activities -> {your copy activity} -> "typeProperties", and add the flag "escapeQuoteEscaping": true.
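
A minimal sketch of where that flag lands in the pipeline JSON; the activity name below is a placeholder and the source/sink types assume a staged copy from SQL Server to Snowflake, not details taken from the actual pipeline:

    {
        "name": "YourCopyActivity",
        "type": "Copy",
        "typeProperties": {
            "source": { "type": "SqlServerSource" },
            "sink": { "type": "SnowflakeSink" },
            "enableStaging": true,
            "escapeQuoteEscaping": true
        }
    }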
