Storing data with SQL Server

Published 2024-10-18 10:34:40

I'm currently using SQL Server 2008 in my project to store and fetch data, and so far it has worked perfectly: I can fetch 20,000 records (as JSON) in under 50 ms. But I'm having a problem with inserts. My project needs to insert roughly 100,000 records per minute, and that seems to be very slow with SQL Server.
I tried a NoSQL database, MongoDB, which is much faster at storing the data (5 s versus 270 s for SQL Server) but not as fast as SQL Server at fetching it (180 ms for 20,000 records).
So I'm asking whether there is any way to make SQL Server faster at storing, or to make MongoDB faster at fetching (I'm not a MongoDB expert; I only know the basics).

public static void ExecuteNonQuery(string sql)
{
    // Note: a new connection is opened and closed on every call.
    // using-blocks ensure both the connection and the command are
    // disposed even if ExecuteNonQuery throws.
    using (SqlConnection con = GetConnection())
    using (SqlCommand cmd = new SqlCommand(sql, con))
    {
        con.Open();
        cmd.ExecuteNonQuery();
    }
}

SQL's insert function

public IEnumerable<T> GetRecords<T>(System.Linq.Expressions.Expression<Func<T, bool>> expression, int from, int to) where T : class, new()
{
    return _db.GetCollection<T>(collectionName).Find<T>(expression).Skip(from).Limit(to).Documents;
}

Mongo's select function (MongoDB 1.6)

Update: data structure: (int) Id, (string) Data



Comments (4)

瀟灑尐姊 2024-10-25 10:34:40

I guess you are executing each insert in a transaction of its own (an implicit transaction is created if you do not provide one explicitly). Since SQL Server has to ensure that each transaction is committed to the hard drive, every transaction carries a very significant overhead.

To make things faster, try performing many inserts (a thousand or so) in a single ExecuteNonQuery() call. Also, don't open and close the connection each time; keep it open (and thus stay in the same transaction) across many inserts.

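The batching advice above can be sketched like this. This is a minimal sketch, not the asker's code: it assumes `GetConnection()` returns an unopened `SqlConnection` as in the question, and a hypothetical table `Records (Id int, Data nvarchar(max))` matching the question's `(int) Id, (string) Data` structure.

```csharp
// Batched insert sketch: many rows per command, one explicit transaction.
// SQL Server 2008 allows up to 1000 rows per INSERT ... VALUES and
// 2100 parameters per command, so keep batches at ~1000 rows (2000 params).
public static void InsertBatch(IList<KeyValuePair<int, string>> rows)
{
    using (SqlConnection con = GetConnection())
    {
        con.Open();
        using (SqlTransaction tx = con.BeginTransaction())
        using (SqlCommand cmd = new SqlCommand { Connection = con, Transaction = tx })
        {
            // Build one multi-row INSERT instead of one round-trip
            // (and one implicit transaction) per row.
            var sb = new StringBuilder("INSERT INTO Records (Id, Data) VALUES ");
            for (int i = 0; i < rows.Count; i++)
            {
                if (i > 0) sb.Append(',');
                sb.AppendFormat("(@id{0}, @data{0})", i);
                cmd.Parameters.AddWithValue("@id" + i, rows[i].Key);
                cmd.Parameters.AddWithValue("@data" + i, rows[i].Value);
            }
            cmd.CommandText = sb.ToString();
            cmd.ExecuteNonQuery();
            tx.Commit(); // one log flush for the whole batch, not one per row
        }
    }
}
```

Committing once per batch is where the win comes from: the durable write to the transaction log happens once per thousand rows instead of once per row.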

独闯女儿国 2024-10-25 10:34:40

MongoDB is very fast on both reads and writes. 50k reads and writes per second is doable on commodity hardware, depending on the data size. Beyond that, you always have the option to scale out with sharding and replica sets, but as said: 20k operations per second is nothing for MongoDB.


不离久伴 2024-10-25 10:34:40

Generally, the speed of inserting data into a database is a function of the complexity of the operation.

If your inserts are significantly slow, that points to an optimization problem with the inserts. Identify exactly which SQL INSERT statements your program is generating, then use the database's execution-plan facilities (SQL Server's equivalent of EXPLAIN) to see what operations it is performing. That usually gives you a clue about how to change your setup to speed those operations up.

It might mean you have to change your database, or it might mean batching your inserts into a single call rather than inserting each item separately.

I also see you are opening and closing the connection on every call; that by itself takes a significant amount of time. Try using a persistent connection.

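Beyond batching, SQL Server also ships a dedicated bulk-load API, `SqlBulkCopy`, which is the usual tool for sustained high-volume inserts of this size. This is a hedged sketch, not the asker's code; it assumes the same hypothetical `Records (Id int, Data nvarchar(max))` table from the question's data structure and the question's `GetConnection()` helper.

```csharp
// Bulk-load sketch using SqlBulkCopy, which streams rows to the server
// through the bulk-insert path instead of individual INSERT statements.
public static void BulkInsert(IEnumerable<KeyValuePair<int, string>> rows)
{
    // Stage the rows in a DataTable whose columns match the target table.
    var table = new DataTable();
    table.Columns.Add("Id", typeof(int));
    table.Columns.Add("Data", typeof(string));
    foreach (var row in rows)
        table.Rows.Add(row.Key, row.Value);

    using (SqlConnection con = GetConnection())
    {
        con.Open();
        using (var bulk = new SqlBulkCopy(con))
        {
            bulk.DestinationTableName = "Records";
            bulk.BatchSize = 10000; // rows sent to the server per batch
            bulk.WriteToServer(table);
        }
    }
}
```

For 100,000 rows per minute this path is typically far faster than even well-batched INSERT statements, since it bypasses per-statement parsing and planning entirely.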
