Inserting a large number of rows into SQL CE 4.0 using Entity Framework 4 (performance issue)
I have a small SQL CE 4.0 database with several tables, mapped using Entity Framework 4.
Here is the code I have
foreach (String designation in newItemDesignations)
{
    ParameterData defaultValue = PDM.GetDefaultParameterData(designation);
    // Fill the ItemParameterDBO object with the data
    ItemParameterDBO dbParam = new ItemParameterDBO();
    dbParam.ItemID = dbItem.ID;
    dbParam.Designation = designation;
    dbParam.High = defaultValue.High;
    dbParam.Low = defaultValue.Low;
    database.ItemParameterDBOes.AddObject(dbParam);
}
database.SaveChanges();
This code happens 24 times, and each time the newItemDesignations list contains exactly 525 elements. That's a total of 12600 rows to add.
The complete process lasts 509 seconds. I guess that's too much for 12600 rows.
I am aware that I am calling SaveChanges 24 times. At the moment, the application design does not allow me to put all inserts into a single transaction (with one SaveChanges call). However, take a look at what happens within a single transaction: 509 / 24 ≈ 21 seconds per batch, or about 40 ms per row.
- Is 40 ms the normal (avg) time for a row to be inserted via EF4?
I've checked my other code (other than adding to the database and saving changes). It takes a total of 100 ms for all 12600 rows. That's roughly 0.02% of the complete time, so that's obviously not the problem. The remaining ~99.98% of the processing time is spent in EF4's AddObject and SaveChanges.
I am also aware that I am setting the ItemID property, which is a foreign key. It is an integer, so I guess it shouldn't matter much, but I wouldn't know.
Also note: there are no indexes set on any of the tables (except for primary/foreign keys).
- What am I doing wrong here, why is this so slow?
- Is this the normal time needed for inserting that many rows, or is this some kind of performance restriction related to SQL CE 4?
3 Answers
Since the examples are scarce, here is the code I tested, and it worked flawlessly. Thanks to ErikEJ's SqlCeBulkCopy library; a must-have. Inserting my (OP) 12600 rows took less than 2 seconds.
This example is bad (it's not strongly typed), but it describes how to manually create a DataTable and insert it into a database using the SqlCeBulkCopy library (see the accepted answer for a link).
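The code sample referenced above did not survive in this copy of the page. Below is a minimal sketch of the approach it describes, reusing the loop from the question: build a DataTable whose columns mirror the destination table, then hand it to SqlCeBulkCopy in one shot. The destination table name, column names, and connection string are illustrative assumptions, not taken from the original answer.

```csharp
using System.Data;
using ErikEJ.SqlCe; // SqlCeBulkCopy (NuGet: ErikEJ.SqlCeBulkCopy); namespace assumed

// Columns must match the destination table's schema.
// Table/column names here are hypothetical placeholders.
var table = new DataTable("ItemParameter");
table.Columns.Add("ItemID", typeof(int));
table.Columns.Add("Designation", typeof(string));
table.Columns.Add("High", typeof(double));
table.Columns.Add("Low", typeof(double));

// Same per-row work as the EF loop in the question, but rows go
// into the DataTable instead of the EF change tracker.
foreach (String designation in newItemDesignations)
{
    ParameterData defaultValue = PDM.GetDefaultParameterData(designation);
    table.Rows.Add(dbItem.ID, designation, defaultValue.High, defaultValue.Low);
}

// Bulk-insert the whole batch at once, bypassing EF entirely.
using (var bulkCopy = new SqlCeBulkCopy(connectionString))
{
    bulkCopy.DestinationTableName = "ItemParameter";
    bulkCopy.WriteToServer(table);
}
```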
You could consider using my SqlCeBulkCopy library, to bypass EF http://sqlcebulkcopy.codeplex.com
To further ErikEJ's answer and your own sample, you can use an implementation of IDataReader over lists to stream the data into WriteToServer instead of duplicating the values via a DataTable. See this question: Get an IDataReader from a typed List.
I implemented this at work once; it didn't appear to improve performance, but it did appear to reduce memory consumption.
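Assuming an ObjectDataReader&lt;T&gt; along the lines of the linked question (a forward-only IDataReader over a typed list; the class name is hypothetical here), and that SqlCeBulkCopy's WriteToServer accepts an IDataReader, the usage might look like this:

```csharp
using ErikEJ.SqlCe; // SqlCeBulkCopy; namespace assumed

// ObjectDataReader<T> is assumed to come from the linked question:
// it exposes the list as a streaming IDataReader, so rows are read
// on demand rather than first copied into a DataTable in memory.
using (var reader = new ObjectDataReader<ItemParameterDBO>(paramList))
using (var bulkCopy = new SqlCeBulkCopy(connectionString))
{
    bulkCopy.DestinationTableName = "ItemParameter"; // hypothetical table name
    bulkCopy.WriteToServer(reader);
}
```

The trade-off matches the note above: the DataTable route materializes every row twice (once in the list, once in the table), while the reader route streams each row straight to the bulk copy, which is why it can lower memory use without changing insert speed.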