Copying a large Oracle table to SQL Server with PowerShell - memory issue
We are trying to copy data from a large Oracle table (about 50M rows) to SQL Server using PowerShell and SqlBulkCopy. The issue with this particular Oracle table is that it contains a CLOB field, and unlike our other table loads, this one takes up more and more OS memory, eventually starving SQL Server, which lives on the same machine where PowerShell is running. Oracle is external and the data is sent over the network. The maximum CLOB size is 6.4M bytes, while the average size is about 2000 bytes.
Here is a snippet of the code being used. Batch size does not seem to have any bearing on what's happening:
```powershell
# Open the Oracle source connection (managed ODP.NET)
$SourceConnection = New-Object Oracle.ManagedDataAccess.Client.OracleConnection($SourceConnectionString)
$SourceConnection.Open()

$SourceCmd = $SourceConnection.CreateCommand()
$SourceCmd.CommandType = "Text"
$SourceCmd.CommandText = $queryStatment

# Bulk copy into SQL Server, batching inside an internal transaction
$bulkCopy = New-Object Data.SqlClient.SqlBulkCopy($targetConnectionString, [System.Data.SqlClient.SqlBulkCopyOptions]::UseInternalTransaction)
$bulkCopy.DestinationTableName = $destTable
$bulkCopy.BulkCopyTimeout = 0
$bulkCopy.BatchSize = 500

# Stream the Oracle reader straight into SqlBulkCopy
$SourceReader = $SourceCmd.ExecuteReader()
Start-Sleep -Seconds 2
$bulkCopy.WriteToServer($SourceReader)
```
We tried different batch sizes, smaller and larger, with the same result. We tried EnableStreaming both on and off, and tried the internal-transaction option (as in the code sample above) as well as the default options while still specifying a batch size (a rough sketch of these settings is below).
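Roughly what was toggled between runs (a sketch assuming the same objects as the snippet above, not the exact script):

```powershell
# EnableStreaming makes SqlBulkCopy read the IDataReader with streaming
# (SequentialAccess) semantics instead of buffering each value.
$bulkCopy.EnableStreaming = $true     # also tried $false
$bulkCopy.BatchSize       = 500       # also tried smaller and larger values

# Alternative construction with default options (no internal transaction):
$bulkCopy = New-Object Data.SqlClient.SqlBulkCopy($targetConnectionString)
$bulkCopy.DestinationTableName = $destTable
$bulkCopy.BatchSize = 500
```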
Is there anything else we can try to avoid the memory pressure?
Thank you in advance!
Answer:
It turned out, after extensive research, that an obscure Oracle command property governs how CLOB data is fetched, and that is what was saturating memory:
InitialLOBFetchSize
This property specifies the amount of data that the OracleDataReader initially fetches for LOB columns. It defaults to 0, which means "the entire CLOB".
I set it to 1M bytes, which is plenty for our data, and the process no longer ate up memory.
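For anyone hitting the same issue, a minimal sketch of where the property fits in the snippet from the question (same variable names assumed; the 1M value is the one described above):

```powershell
# Limit the initial LOB fetch so CLOB data is not buffered whole for every row.
$SourceCmd = $SourceConnection.CreateCommand()
$SourceCmd.CommandType = "Text"
$SourceCmd.CommandText = $queryStatment
$SourceCmd.InitialLOBFetchSize = 1048576   # 1M bytes, as mentioned above

$SourceReader = $SourceCmd.ExecuteReader()
$bulkCopy.WriteToServer($SourceReader)
```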