C# BulkCopy, DBF errors (Timeout & "The provider could not determine the Decimal value...")

Posted on 2025-01-02 12:41:57


I've written a small console app that I point to a folder containing DBF/FoxPro files.

It then creates a table in SQL based on each dbf table, then does a bulk copy to insert the data into SQL. It works quite well for the most part, except for a few snags..

1) Some of the FoxPro tables contain 5,000,000+ records, and the connection expires before the insert completes.

Here is my connection string:

<add name="SQL" connectionString="data source=source_source;persist security info=True;user id=DBFToSQL;password=DBFToSQL;Connection Timeout=20000;Max Pool Size=200" providerName="System.Data.SqlClient" />

Error message:
"Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding."

CODE:

using (SqlConnection SQLConn = new SqlConnection(SQLString))
using (OleDbConnection FPConn = new OleDbConnection(FoxString))
{
    ServerConnection srvConn = new Microsoft.SqlServer.Management.Common.ServerConnection(SQLConn);
    try
    {
        FPConn.Open();                       
        string dataString = String.Format("Select * from {0}", tableName);

        using (OleDbCommand Command = new OleDbCommand(dataString, FPConn))
        using (OleDbDataReader Reader = Command.ExecuteReader(CommandBehavior.SequentialAccess))
        {                       
            tbl = new Table(database, tableName, "schema");

            for (int i = 0; i < Reader.FieldCount; i++)
            {                           
                col = new Column(tbl, Reader.GetName(i), ConvertTypeToDataType(Reader.GetFieldType(i)));
                col.Nullable = true;
                tbl.Columns.Add(col);                       
            }

            tbl.Create();                       
            BulkCopy(Reader, tableName);
        }                   
    }
    catch (Exception ex)
    {
       // LogText(ex, @"C:\LoadTable_Errors.txt", tableName);
        throw;   // "throw;" preserves the original stack trace; "throw ex;" resets it
    }
    finally
    {
        SQLConn.Close();
        srvConn.Disconnect();
    }
}

private DataType ConvertTypeToDataType(Type type)
{
    switch (type.ToString())
    {
        case "System.Decimal":
            return DataType.Decimal(18, 38);
        case "System.String":
            return DataType.NVarCharMax;
        case "System.Int32":
            return DataType.Int;
        case "System.DateTime":
            return DataType.DateTime;
        case "System.Boolean":
            return DataType.Bit;
        default:
            throw new NotImplementedException("ConvertTypeToDataType Not implemented for type : " + type.ToString());
    }
}

private void BulkCopy(OleDbDataReader reader, string tableName)
{
    using (SqlConnection SQLConn = new SqlConnection(SQLString))
    {       
        SQLConn.Open();
        SqlBulkCopy bulkCopy = new SqlBulkCopy(SQLConn);

        bulkCopy.DestinationTableName = "schema." + tableName;

        try
        {
            bulkCopy.WriteToServer(reader);         
        }
        catch (Exception ex)
        {           
            //LogText(ex, @"C:\BulkCopy_Errors.txt", tableName);
        }
        finally
        {
            SQLConn.Close();
            reader.Close();
        }
    }
}

My 2nd & 3rd errors are the following:

I understand what the issues are, but I'm not sure how to rectify them:

2) "The provider could not determine the Decimal value. For example, the row was just created, the default for the Decimal column was not available, and the consumer had not yet set a new Decimal value."

3) SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.

I found a result on Google that indicates what the issue is: [A]... and a possible workaround [B] (but I'd like to keep my decimal values as decimals and my dates as dates, since I'll be doing further calculations against the data).
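One way to keep the columns typed while avoiding the SqlDateTime overflow (error 3) is to sanitize each value as it streams through, mapping out-of-range dates to NULL instead of failing the copy. A minimal sketch; `SanitizeValue` is a hypothetical helper, not part of the original code:

```csharp
using System;
using System.Data.SqlTypes;

static class ValueSanitizer
{
    // Sketch: coerce values SQL Server cannot store to DBNull, keeping column types.
    public static object SanitizeValue(object value)
    {
        if (value is DateTime)
        {
            DateTime dt = (DateTime)value;
            // FoxPro "empty" dates often surface as 0001-01-01 or 1899-12-30,
            // both outside SQL Server's DATETIME range (1753-01-01 .. 9999-12-31).
            if (dt < SqlDateTime.MinValue.Value || dt > SqlDateTime.MaxValue.Value)
                return DBNull.Value;   // NULL instead of failing the whole bulk copy
        }
        // Decimals and everything else pass through with their type intact.
        return value;
    }
}
```

Calling this on each field while filling a DataRow (or inside a custom IDataReader wrapper) lets the destination columns stay DECIMAL and DATETIME.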

What I'm wanting to do as a solution:

1.) Either increase the connection time (but I don't think I can increase it any more than I have), or alternatively: is it possible to split the OleDbDataReader's results and do an incremental bulk insert?

2.) Is it possible to have bulk copy ignore rows with errors, or to log the records that do error out to a CSV file or something to that extent?
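For (1): the "Connection Timeout" in the connection string only governs how long opening the connection may take; SqlBulkCopy has its own BulkCopyTimeout for the copy itself, and a BatchSize that commits rows incrementally. A sketch of the BulkCopy method with those set (the property values are illustrative, not tuned):

```csharp
using System;
using System.Data.OleDb;
using System.Data.SqlClient;

private void BulkCopy(OleDbDataReader reader, string tableName)
{
    using (SqlConnection SQLConn = new SqlConnection(SQLString))
    using (SqlBulkCopy bulkCopy = new SqlBulkCopy(SQLConn))
    {
        SQLConn.Open();
        bulkCopy.DestinationTableName = "schema." + tableName;

        bulkCopy.BulkCopyTimeout = 0;   // 0 = no timeout for the copy itself; the
                                        // connection string's "Connection Timeout"
                                        // only applies to opening the connection
        bulkCopy.BatchSize = 5000;      // commit every 5,000 rows instead of one
                                        // huge transaction (value is illustrative)
        bulkCopy.NotifyAfter = 50000;   // optional progress reporting
        bulkCopy.SqlRowsCopied += delegate(object sender, SqlRowsCopiedEventArgs e)
        {
            Console.WriteLine(tableName + ": " + e.RowsCopied + " rows copied");
        };

        bulkCopy.WriteToServer(reader); // streams from the reader; no DataTable in memory
    }
}
```

WriteToServer(reader) streams row by row, so this handles the 5,000,000+ record tables without buffering them in memory.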
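For (2): a single WriteToServer call is all-or-nothing, so SqlBulkCopy cannot skip individual bad rows. One workaround is to buffer small batches in a DataTable, try each batch, and dump a failed batch's rows to a CSV reject file. A sketch under those assumptions; `batchSize`, `rejectPath`, and the cloned column `template` are illustrative:

```csharp
using System;
using System.Data;
using System.Data.OleDb;
using System.Data.SqlClient;
using System.IO;

private void BulkCopyWithRejects(OleDbDataReader reader, string tableName, DataTable template)
{
    const int batchSize = 1000;                                // illustrative
    string rejectPath = @"C:\Rejects_" + tableName + ".csv";   // illustrative

    DataTable batch = template.Clone();   // same columns, no rows
    while (reader.Read())
    {
        object[] values = new object[reader.FieldCount];
        reader.GetValues(values);
        batch.Rows.Add(values);

        if (batch.Rows.Count == batchSize)
        {
            FlushBatch(batch, tableName, rejectPath);
            batch.Clear();                // bounded memory: only one batch held at a time
        }
    }
    if (batch.Rows.Count > 0)
        FlushBatch(batch, tableName, rejectPath);
}

private void FlushBatch(DataTable batch, string tableName, string rejectPath)
{
    try
    {
        using (SqlConnection conn = new SqlConnection(SQLString))
        using (SqlBulkCopy bulkCopy = new SqlBulkCopy(conn))
        {
            conn.Open();
            bulkCopy.DestinationTableName = "schema." + tableName;
            bulkCopy.BulkCopyTimeout = 0;
            bulkCopy.WriteToServer(batch);
        }
    }
    catch (Exception)
    {
        // The bulk copy call is all-or-nothing, so on failure log the whole
        // batch to the reject file and move on to the next batch.
        foreach (DataRow row in batch.Rows)
            File.AppendAllText(rejectPath,
                string.Join(",", Array.ConvertAll(row.ItemArray, Convert.ToString))
                + Environment.NewLine);
    }
}
```

Shrinking batchSize narrows how many good rows a single bad row can drag into the reject file, at the cost of more round trips to the server.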


Comments (2)

£冰雨忧蓝° 2025-01-09 12:41:57

So where you do the "for" statement I would probably break it up to take so many at a time :

int i = 0;
int MaxCount = 1000;

while (i < Reader.FieldCount)
{
    var tbl = new Table(database, tableName, "schema"); 

    int end = Math.Min(i + MaxCount, Reader.FieldCount);  // fixed bound: "j < MaxCount" stops working after the first batch
    for (int j = i; j < end; j++) 
    {                            
        col = new Column(tbl, Reader.GetName(j), ConvertTypeToDataType(Reader.GetFieldType(j))); 
        col.Nullable = true; 
        tbl.Columns.Add(col);
        i++;                      
    } 

    tbl.Create();                        
    BulkCopy(Reader, tableName); 
}

So, "i" keeps track of the overall count, "j" keeps track of the incremental count (ie your max at one time count) and when you have created your 'batch', you create the table and Bulk Copy it.

Does that look like what you would expect?

Cheers,
Chris.

大姐,你呐 2025-01-09 12:41:57

This is my current attempt at the bulk copy method. It works for about 90% of the tables, but I get an OutOfMemory exception with the bigger tables... I'd like to split the reader's data into smaller sections without having to pass it into a DataTable and store it in memory first (which is the cause of the OutOfMemory exception on the bigger result sets).

UPDATE

I modified the code below to match how it looks in my solution.. It ain't pretty.. but it works. I'll definitely do some refactoring, and update my answer again.

    private void BulkCopy(OleDbDataReader reader, string tableName, Table table)
    {
        Console.WriteLine(tableName + " BulkCopy Started.");
        try
        {
            DataTable tbl = new DataTable();
            List<Type> typeList = new List<Type>();
            foreach (Column col in table.Columns)
            {
                tbl.Columns.Add(col.Name, ConvertDataTypeToType(col.DataType));
                typeList.Add(ConvertDataTypeToType(col.DataType));
            }

            int batch = 1;
            int counter = 0;

            DataRow tblRow = tbl.NewRow();

            while (reader.Read())
            {
                counter++;
                int colcounter = 0;
                foreach (Column col in table.Columns)
                {
                    try
                    {
                        tblRow[colcounter] = reader[colcounter];
                    }
                    catch (Exception)
                    {
                        tblRow[colcounter] = GetDefault(typeList[colcounter]); // was typeList[0], which applied the first column's default to every column
                    }
                    colcounter++;
                }

                tbl.LoadDataRow(tblRow.ItemArray, true);

                if (counter == BulkInsertIncrement)
                {
                    Console.WriteLine(tableName + " :: Batch >> " + batch);
                    counter = PerformInsert(tableName, tbl, batch);
                    batch++;
                }
            }

            if (counter > 0)
            {
                Console.WriteLine(tableName + " :: Batch >> " + batch);
                PerformInsert(tableName, tbl, counter);
            }

            tbl = null;
            Console.WriteLine("BulkCopy Success!");
        }
        catch (Exception ex)
        {
            Console.WriteLine("BulkCopy Fail!");
            SharedLogger.Write(ex, @"C:\BulkCopy_Errors.txt", tableName);
            Console.WriteLine(ex.Message);
        }
        finally
        {
            reader.Close();
            reader.Dispose();

        }
        Console.WriteLine(tableName + " BulkCopy Ended.");
        Console.WriteLine("*****");
        Console.WriteLine("");
    }