SqlBulkCopy is not working

Posted 2024-07-11 12:48:35


I have a DataSet populated from an Excel sheet. I want to use SqlBulkCopy to insert records into the Lead_Hdr table, where LeadId is the PK.

I am getting the following error while executing the code below:

The given ColumnMapping does not match up with any column in the source or destination

string ConStr=ConfigurationManager.ConnectionStrings["ConStr"].ToString();

using (SqlBulkCopy s = new SqlBulkCopy(ConStr,SqlBulkCopyOptions.KeepIdentity))
{
    if (MySql.State==ConnectionState.Closed)
    {
        MySql.Open();
    }

    s.DestinationTableName = "PCRM_Lead_Hdr";
    s.NotifyAfter = 10000;

    #region Comment
    s.ColumnMappings.Clear();

    #region ColumnMapping
    s.ColumnMappings.Add("ClientID", "ClientID");
    s.ColumnMappings.Add("LeadID", "LeadID");
    s.ColumnMappings.Add("Company_Name", "Company_Name");
    s.ColumnMappings.Add("Website", "Website");
    s.ColumnMappings.Add("EmployeeCount", "EmployeeCount");
    s.ColumnMappings.Add("Revenue", "Revenue");
    s.ColumnMappings.Add("Address", "Address");
    s.ColumnMappings.Add("City", "City");

    s.ColumnMappings.Add("State", "State");
    s.ColumnMappings.Add("ZipCode", "ZipCode");
    s.ColumnMappings.Add("CountryId", "CountryId");

    s.ColumnMappings.Add("Phone", "Phone");
    s.ColumnMappings.Add("Fax", "Fax");
    s.ColumnMappings.Add("TimeZone", "TimeZone");
    s.ColumnMappings.Add("SicNo", "SicNo");
    s.ColumnMappings.Add("SicDesc", "SicDesc");

    s.ColumnMappings.Add("SourceID", "SourceID");
    s.ColumnMappings.Add("ResearchAnalysis", "ResearchAnalysis");
    s.ColumnMappings.Add("BasketID", "BasketID");
    s.ColumnMappings.Add("PipeLineStatusId", "PipeLineStatusId");

    s.ColumnMappings.Add("SurveyId", "SurveyId");
    s.ColumnMappings.Add("NextCallDate", "NextCallDate");
    s.ColumnMappings.Add("CurrentRecStatus", "CurrentRecStatus");
    s.ColumnMappings.Add("AssignedUserId", "AssignedUserId");
    s.ColumnMappings.Add("AssignedDate", "AssignedDate");
    s.ColumnMappings.Add("ToValueAmt", "ToValueAmt");
    s.ColumnMappings.Add("Remove", "Remove");
    s.ColumnMappings.Add("Release", "Release");

    s.ColumnMappings.Add("Insert_Date", "Insert_Date");
    s.ColumnMappings.Add("Insert_By", "Insert_By");
    s.ColumnMappings.Add("Updated_Date", "Updated_Date");
    s.ColumnMappings.Add("Updated_By", "Updated_By");

    #endregion
    #endregion

    s.WriteToServer(sourceTable);

    s.Close();

    MySql.Close();
}

7 Answers

愁杀 2024-07-18 12:48:36


What I have found is that every column in the input must match a column in the table. You can have more columns in the table and the input will still load; if the table has fewer, you'll receive the error.
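For instance, a minimal sketch of that rule (the connection string and column names here are hypothetical; constructing SqlBulkCopy and adding mappings does not touch the database — only WriteToServer would):

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class MappingSketch
{
    static void Main()
    {
        // Source has only a subset of the destination's columns.
        var source = new DataTable();
        source.Columns.Add("LeadID", typeof(int));
        source.Columns.Add("Company_Name", typeof(string));

        // Constructing SqlBulkCopy does not open a connection yet.
        using (var bulk = new SqlBulkCopy("Server=.;Database=Demo;Integrated Security=true"))
        {
            bulk.DestinationTableName = "PCRM_Lead_Hdr";

            // Map only the columns the source actually has; extra
            // destination columns are left to their defaults/NULL.
            foreach (DataColumn c in source.Columns)
                bulk.ColumnMappings.Add(c.ColumnName, c.ColumnName);

            Console.WriteLine(bulk.ColumnMappings.Count); // prints 2
        }
    }
}
```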

天邊彩虹 2024-07-18 12:48:36


I thought a long time about answering...
Even if the column names match in case, if the data types differ
you get the same error. So check the column names and their data types.

P.S.: staging tables are definitely the way to import.
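That type check can be sketched like this; the table name and connection string are placeholders, and it needs a reachable SQL Server to actually run:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

static class TypeCheck
{
    // Compare the source DataTable's column types against the
    // destination table's CLR types before bulk copying.
    static void CheckTypes(DataTable sourceTable, string connStr)
    {
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("SELECT TOP 0 * FROM PCRM_Lead_Hdr", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader(CommandBehavior.SchemaOnly))
            {
                DataTable schema = reader.GetSchemaTable();
                foreach (DataRow row in schema.Rows)
                {
                    string name = (string)row["ColumnName"];
                    Type destType = (Type)row["DataType"];
                    if (sourceTable.Columns.Contains(name) &&
                        sourceTable.Columns[name].DataType != destType)
                        Console.WriteLine("Mismatch on {0}: source {1}, destination {2}",
                            name, sourceTable.Columns[name].DataType, destType);
                }
            }
        }
    }
}
```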

请远离我 2024-07-18 12:48:36


I would go with the staging idea; however, here is my approach to handling the case-sensitive nature. Happy to have my LINQ critiqued.

using (SqlConnection connection = new SqlConnection(conn_str))
{
    connection.Open();
    using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = string.Format("[{0}].[{1}].[{2}]", targetDatabase, targetSchema, targetTable);
        var targetColumsAvailable = GetSchema(conn_str, targetTable).ToArray();
        foreach (var column in dt.Columns)
        {
            if (targetColumsAvailable.Select(x => x.ToUpper()).Contains(column.ToString().ToUpper()))
            {
                var tc = targetColumsAvailable.Single(x => String.Equals(x, column.ToString(), StringComparison.CurrentCultureIgnoreCase));
                bulkCopy.ColumnMappings.Add(column.ToString(), tc);
            }
        }

        // Write from the source to the destination.
        bulkCopy.WriteToServer(dt);
        bulkCopy.Close();
    }
}

and the helper method

private static IEnumerable<string> GetSchema(string connectionString, string tableName)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    using (SqlCommand command = connection.CreateCommand())
    {
        command.CommandText = "sp_Columns";
        command.CommandType = CommandType.StoredProcedure;
        command.Parameters.Add("@table_name", SqlDbType.NVarChar, 384).Value = tableName;

        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                yield return (string)reader["column_name"];
            }
        }
    }
}
魂归处 2024-07-18 12:48:36


The answer by Marc would be my recommendation (on using a staging table). This ensures that if your source doesn't change, you'll have fewer issues importing in the future.

However, in my experience, you can check the following issues:

  • The column names match in the source and the table
  • The column types match

If you think you did this and still have no success, you can try the following:

  1. Allow nulls in all columns in your table
  2. Comment out all column mappings
  3. Rerun, adding one column at a time, until you find where your issue is

That should bring out the bug
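Steps 2 and 3 can even be sketched as a loop. This is only an illustration reusing the question's sourceTable and table name, and it assumes the mapping validation fires even for a schema-only (empty) copy:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

static class MappingProbe
{
    // Try each column mapping on its own against an empty copy of the
    // source, so the failing column names itself. Hypothetical names.
    static void FindBadMapping(DataTable sourceTable, string connStr)
    {
        foreach (DataColumn col in sourceTable.Columns)
        {
            using (var bulk = new SqlBulkCopy(connStr))
            {
                bulk.DestinationTableName = "PCRM_Lead_Hdr";
                bulk.ColumnMappings.Add(col.ColumnName, col.ColumnName);
                try
                {
                    // Clone() copies schema only, so no rows are inserted.
                    bulk.WriteToServer(sourceTable.Clone());
                }
                catch (Exception ex)
                {
                    Console.WriteLine("Problem column {0}: {1}", col.ColumnName, ex.Message);
                }
            }
        }
    }
}
```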

┈┾☆殇 2024-07-18 12:48:36


One of the reasons is that SqlBulkCopy is case sensitive. Follow these steps:

  1. First, find your column in the source table by using the "Contains" method in C#.
  2. Once your destination column matches the source column, get the index of that column and use its column name in SqlBulkCopy.

For example:

// Get the columns from the source table
string sourceTableQuery = "Select top 1 * from sourceTable";
// A SQL helper is used here to execute the query; a plain SqlCommand works just as well
DataTable dtSource = SQLHelper.SqlHelper.ExecuteDataset(transaction, CommandType.Text, sourceTableQuery).Tables[0];

for (int i = 0; i < destinationTable.Columns.Count; i++)
{
    // Check whether the destination column exists in the source table
    if (dtSource.Columns.Contains(destinationTable.Columns[i].ToString())) // Contains is not case sensitive
    {
        // Once the column is matched, get its index
        int sourceColumnIndex = dtSource.Columns.IndexOf(destinationTable.Columns[i].ToString());
        // Use the source table's column name rather than the destination's, to avoid case-sensitivity issues
        bulkCopy.ColumnMappings.Add(dtSource.Columns[sourceColumnIndex].ToString(), dtSource.Columns[sourceColumnIndex].ToString());
    }
}
bulkCopy.WriteToServer(destinationTable);
bulkCopy.Close();
送君千里 2024-07-18 12:48:35


I've encountered the same problem while copying data from Access to SQL Server 2005, and I found that the column mappings are case sensitive on both data sources, regardless of the database's collation sensitivity.

静谧幽蓝 2024-07-18 12:48:35


Well, is it right? Do the column names exist on both sides?

To be honest, I've never bothered with mappings. I like to keep things simple - I tend to have a staging table that looks like the input on the server, then I SqlBulkCopy into the staging table, and finally run a stored procedure to move the data from the staging table into the actual table; advantages:

  • no issues with live data corruption if the import fails at any point
  • I can put a transaction just around the SPROC
  • I can have the bcp work without logging, safe in the knowledge that the SPROC will be logged
  • it is simple ;-p (no messing with mappings)

As a final thought - if you are dealing with bulk data, you can get better throughput using IDataReader (since this is a streaming API, whereas DataTable is a buffered API). For example, I tend to hook CSV imports up using CsvReader as the source for SqlBulkCopy. Alternatively, I have written shims around XmlReader to present each first-level element as a row in an IDataReader - very fast.
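The staging-table flow described in this answer might look roughly like this; the staging table and stored procedure names are made up for illustration:

```csharp
using System.Data;
using System.Data.SqlClient;

static class StagingImport
{
    // Bulk-load into a staging table, then let a stored procedure move
    // the rows into the live table; both names below are hypothetical.
    static void Import(DataTable rows, string connStr)
    {
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();

            // 1. Stream everything into the staging table (no mappings
            //    needed if the staging table mirrors the input exactly).
            using (var bulk = new SqlBulkCopy(conn))
            {
                bulk.DestinationTableName = "dbo.Staging_Lead_Hdr";
                bulk.WriteToServer(rows);
            }

            // 2. One transactional, logged hop into the real table.
            using (var cmd = new SqlCommand("dbo.usp_MergeLeadHdr", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.ExecuteNonQuery();
            }
        }
    }
}
```

The live table is never touched until the staged data is complete, which is what makes the rollback and logging guarantees in the bullet list above possible.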
