SqlBulkCopy and Entity Framework

Posted 2024-08-27 02:19:43


My current project consists of 3 standard layers: data, business, and presentation. I would like to use data entities for all my data access needs. Part of the functionality of the app will that it will need to copy all data within a flat file into a database. The file is not so big so I can use SqlBulkCopy. I have found several articles regarding the usage of SqlBulkCopy class in .NET. However, all the articles are using DataTables to move data back and forth.

Is there a way to use data entities along with SqlBulkCopy or will I have to use DataTables?


Comments (7)

橘虞初梦 2024-09-03 02:19:43


You'll need to convert the entities to an IDataReader or a DataTable.

There is a small helper class designed to assist with this:
http://archive.msdn.microsoft.com/LinqEntityDataReader/Release/ProjectReleases.aspx?ReleaseId=389

EDIT: the MSDN link is broken; an alternative copy can be found here:
https://github.com/matthewschrager/Repository/blob/master/Repository.EntityFramework/EntityDataReader.cs

Then you can use SqlBulkCopy like so:

using (var sbCopy = new SqlBulkCopy(connectionString))
{
    sbCopy.DestinationTableName = "TableName";
    sbCopy.WriteToServer(entitiesList.AsDataReader());
}
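One caveat worth noting: by default SqlBulkCopy matches columns by ordinal position, so if the reader's field order doesn't line up with the destination table you can add explicit column mappings. A small sketch (the column names are illustrative):

using (var sbCopy = new SqlBulkCopy(connectionString))
{
    sbCopy.DestinationTableName = "TableName";

    // Map source fields to destination columns by name,
    // so the ordinal order of the reader no longer matters.
    sbCopy.ColumnMappings.Add("Id", "Id");
    sbCopy.ColumnMappings.Add("Name", "Name");

    sbCopy.WriteToServer(entitiesList.AsDataReader());
}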
萌无敌 2024-09-03 02:19:43


We've tried and tested a couple of approaches when bulk inserting using EF and eventually went with table-valued parameters for the best performance across a range of row sizes. I don't have the numbers to hand, but I know this comparison of the Performance of bcp/BULK INSERT vs. Table-Valued Parameters was a guiding factor.

We originally used SqlBulkCopy coupled with an adapter that took an IEnumerable<T> and created an IDataReader. It also generated the relevant metadata for SqlBulkCopy. The advantage here was that the import was a code-only thing. The code that @davehogan posted was used as a basis for this.

Table-valued parameters require a stored procedure and a table type defined in the database. If you're using code-first, you can execute SQL to create these as part of your creation script. Whilst this is more work, we found that we got significantly more consistent and faster throughput of rows into the database.

Also, it's worth considering not bulk inserting into your main table. We use a temp heap table and add a clustered index to it once the data is imported. We then perform a MERGE between the temp table and the main table. This has the benefit of not locking the main table's index while inserting and improves concurrency. We tend to get upwards of 2500 rows/sec per CPU inserted using this method.

Let me know if you want more info.
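For reference, here's a minimal sketch of the table-valued parameter approach described above. The table type, procedure, and column names are illustrative assumptions, not taken from the original answer:

// One-time setup in the database (can be part of a code-first creation script):
//   CREATE TYPE dbo.ImportRowType AS TABLE (Id INT, Name NVARCHAR(100));
//   CREATE PROCEDURE dbo.ImportRows @rows dbo.ImportRowType READONLY AS
//     MERGE dbo.MainTable AS t
//     USING @rows AS s ON t.Id = s.Id
//     WHEN MATCHED THEN UPDATE SET t.Name = s.Name
//     WHEN NOT MATCHED THEN INSERT (Id, Name) VALUES (s.Id, s.Name);

using System.Data;
using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.ImportRows", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;

    // rowsDataTable is a DataTable whose columns match the table type.
    var param = cmd.Parameters.AddWithValue("@rows", rowsDataTable);
    param.SqlDbType = SqlDbType.Structured;  // marks the parameter as a TVP
    param.TypeName = "dbo.ImportRowType";    // must match the CREATE TYPE name

    conn.Open();
    cmd.ExecuteNonQuery();
}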

゛清羽墨安 2024-09-03 02:19:43


You may use the Bulk package library.
The Bulk Insert 1.0.0 version is used in projects with Entity Framework >= 6.0.0.
More details can be found at the link below:
Bulkoperation source code

陌路终见情 2024-09-03 02:19:43


For EF Core there is EFCore.BulkExtensions (Insert, InsertOrUpdate, Update, Delete):
Link: https://github.com/borisdj/EFCore.BulkExtensions
It can also be installed via NuGet.
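A minimal usage sketch, assuming the extension methods the library documents on DbContext (the context type and entity list here are illustrative):

using EFCore.BulkExtensions;

using (var context = new MyDbContext())
{
    // One bulk operation for the whole list instead of per-row SaveChanges round trips.
    context.BulkInsert(entitiesList);

    // The library also exposes context.BulkInsertOrUpdate(...),
    // context.BulkUpdate(...) and context.BulkDelete(...).
}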

不一样的天空 2024-09-03 02:19:43


SqlBulkCopy uses an IDataReader when calling the WriteToServer method, so you should be able to implement IDataReader on top of an IEnumerable collection. This would allow you to take in an entity set and call SqlBulkCopy using your IDataReader implementation, as in the sketch below.
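A minimal sketch of that idea: a reader over IEnumerable<T> that exposes each public property of T as a column and implements only the members SqlBulkCopy actually calls, throwing for the rest. The class name and reflection-based mapping are illustrative assumptions; the EntityDataReader helper linked in the accepted answer is a more complete version:

using System;
using System.Collections.Generic;
using System.Data;
using System.Reflection;

// Wraps an IEnumerable<T> as an IDataReader, exposing each public
// property of T as a column, in property-declaration order.
public sealed class EnumerableDataReader<T> : IDataReader
{
    private readonly IEnumerator<T> _rows;
    private readonly PropertyInfo[] _props;

    public EnumerableDataReader(IEnumerable<T> source)
    {
        _rows = source.GetEnumerator();
        _props = typeof(T).GetProperties(BindingFlags.Public | BindingFlags.Instance);
    }

    // The members SqlBulkCopy.WriteToServer relies on:
    public int FieldCount { get { return _props.Length; } }
    public bool Read() { return _rows.MoveNext(); }
    public object GetValue(int i) { return _props[i].GetValue(_rows.Current, null) ?? (object)DBNull.Value; }
    public string GetName(int i) { return _props[i].Name; }
    public int GetOrdinal(string name)
    {
        return Array.FindIndex(_props, p => string.Equals(p.Name, name, StringComparison.OrdinalIgnoreCase));
    }
    public bool IsDBNull(int i) { return GetValue(i) == DBNull.Value; }
    public int GetValues(object[] values)
    {
        int n = Math.Min(values.Length, FieldCount);
        for (int i = 0; i < n; i++) values[i] = GetValue(i);
        return n;
    }

    public void Dispose() { _rows.Dispose(); }
    public void Close() { Dispose(); }
    public bool IsClosed { get { return false; } }
    public int Depth { get { return 0; } }
    public int RecordsAffected { get { return -1; } }
    public bool NextResult() { return false; }
    public Type GetFieldType(int i) { return _props[i].PropertyType; }
    public string GetDataTypeName(int i) { return _props[i].PropertyType.Name; }
    public object this[int i] { get { return GetValue(i); } }
    public object this[string name] { get { return GetValue(GetOrdinal(name)); } }

    // Remaining IDataRecord members are not needed for the bulk-copy path.
    public DataTable GetSchemaTable() { throw new NotSupportedException(); }
    public bool GetBoolean(int i) { throw new NotSupportedException(); }
    public byte GetByte(int i) { throw new NotSupportedException(); }
    public long GetBytes(int i, long fieldOffset, byte[] buffer, int bufferOffset, int length) { throw new NotSupportedException(); }
    public char GetChar(int i) { throw new NotSupportedException(); }
    public long GetChars(int i, long fieldOffset, char[] buffer, int bufferOffset, int length) { throw new NotSupportedException(); }
    public IDataReader GetData(int i) { throw new NotSupportedException(); }
    public DateTime GetDateTime(int i) { throw new NotSupportedException(); }
    public decimal GetDecimal(int i) { throw new NotSupportedException(); }
    public double GetDouble(int i) { throw new NotSupportedException(); }
    public float GetFloat(int i) { throw new NotSupportedException(); }
    public Guid GetGuid(int i) { throw new NotSupportedException(); }
    public short GetInt16(int i) { throw new NotSupportedException(); }
    public int GetInt32(int i) { throw new NotSupportedException(); }
    public long GetInt64(int i) { throw new NotSupportedException(); }
    public string GetString(int i) { throw new NotSupportedException(); }
}

Usage would then look like: bulkCopy.WriteToServer(new EnumerableDataReader<MyEntity>(entities));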

祁梦 2024-09-03 02:19:43


You can consider a DataSet to be a serialisation of the data entity. Generally speaking, though, I think SqlBulkCopy is a table-to-table thing, hence the reason for DataTables.

2024-09-03 02:19:43


SqlBulkCopy is a direct, almost byte-array-like transfer of row data from client to SQL Server. It is easily the most efficient way to get data into SQL Server.

Its performance lies in truly "bulk" operations, however. Hundreds or thousands of rows isn't necessarily high enough to justify its use. Tens of thousands to millions of rows are where SqlBulkCopy's performance will truly shine. And, in the end, all we're really talking about is getting data to the server.

There are other significant challenges in getting a set of rows into a production database's table. Reindexing, reordering (if there is a clustered index), foreign key validation, all these kinds of things add time to your insert and are potentially table- and index-locking.

Also, TVP data is written to disk (as temp table data) and is then accessible to put into your tables. SqlBulkCopy is capable of going directly at your table... performance in that case is significantly faster; however, one must balance speed against concurrency.

I think the overall rule is: if you have a handful of rows to deal with, think TVPs, and if you have many thousands of rows, consider getting them to SQL Server as quickly as possible via SqlBulkCopy.
