SQL query result size - does it matter?

Posted 2024-09-28 22:10:38

I have a SQL query (MySQL DB, using .NET's SqlClient) that returns a dataset. Whether the dataset is 4,000 rows or 20, is there a significant difference in runtime?
Also, how much of the query's execution time is spent on opening a connection and so on, compared to the time spent filling the results table?

Thanks.

Clarification (edit):
Each of the 4,000 rows represents a folder's name. I want to filter them so that users see only the folders of interest to them rather than all of them.
I'm not sure which is better: filtering before getting the names from the DB (which is on a different computer), which might make the query more complicated, or filtering only the view (the tree the user sees).
Those 4,000 rows might turn into 40,000, but I'm not sure that is relevant to this question.
Thanks again.

Editing again:
The DB is on the network, but the connection is quite fast, say 100 Mbit.
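
For concreteness, here is a minimal sketch of the two options being weighed, assuming a hypothetical Folders table with a FolderName column and MySQL Connector/NET (the MySql.Data package); the names and connection details are illustrative only, and the same ADO.NET pattern applies with other providers.

```csharp
// Sketch only: hypothetical Folders table / FolderName column, MySQL Connector/NET.
using System.Collections.Generic;
using System.Linq;
using MySql.Data.MySqlClient;

static class FolderQueries
{
    // Option 1: filter on the database side - only the matching rows cross the network.
    public static List<string> GetFoldersByPrefix(string connectionString, string prefix)
    {
        var names = new List<string>();
        using (var conn = new MySqlConnection(connectionString))
        using (var cmd = new MySqlCommand(
            "SELECT FolderName FROM Folders WHERE FolderName LIKE @pattern", conn))
        {
            cmd.Parameters.AddWithValue("@pattern", prefix + "%");
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    names.Add(reader.GetString(0));
            }
        }
        return names;
    }

    // Option 2: fetch everything once and filter in memory when building the view.
    public static List<string> FilterView(IEnumerable<string> allFolders, string prefix)
    {
        return allFolders.Where(name => name.StartsWith(prefix)).ToList();
    }
}
```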

5 Answers

少女七分熟 2024-10-05 22:10:38

No, there's no significant difference. Selecting 4,000 rows won't take 200 times as long as selecting 20 rows - the difference is small, and I don't think you'll notice it anyway.

But: you could run into problems if your network connection is too slow, so you should take a look at that.

行至春深 2024-10-05 22:10:38

The execution time spent on opening a connection - or rather establishing one - is negligible. I also don't think any considerable time will be spent fetching records on the order of thousands, because databases are highly optimized to serve millions of records. However, as the dataset size grows, the performance lag will show up in your application's runtime rather than in the query fetch time.

一个人的旅程 2024-10-05 22:10:38

Yes and no. Forget execution time - there essentially is none. It comes down to network transfer speed (if the database is on another computer) and your own local behavior: not so much the dataset itself, but what you do with it. A browser takes longer to render a table with 4,000 rows than one with 20, etc.

Within those limits the impact should be low. It gets more interesting if 4,000 turns into 40,000.

故事↓在人 2024-10-05 22:10:38

Ask yourself this: if you had to get a file 200 times as big from a web server, would it take longer? I think the difference on the DB side depends on the DB design, but it won't be an issue. The performance of the client-side code that puts the data into rows is also pretty much a non-issue. The real cost is transferring the data over the network.

Of course, your network connection may be fast enough that even that is insignificant.

酷炫老祖宗 2024-10-05 22:10:38

First, there is network I/O cost, which is exactly linear in the size of the result.

Then there are things like buffer sizes and packet sizes and cache sizes, which can make a query fall off a non-linear cliff once it reaches a certain size.

And then, if your table is such that the first 20 matching rows can be quickly found, whereas the 21st row only appears after a lengthy scanning process, then limiting the result to 20 rows will yield much faster results.

What you need to do is profile your particular setup. That is the only way to be sure.
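
A minimal profiling sketch along those lines, assuming the same hypothetical Folders table and MySQL Connector/NET as above; it times the connection open separately from the fetch and compares the full result against a LIMIT 20 version. All names and the connection string are illustrative.

```csharp
// Rough profiling sketch: times connection open vs. fetch, for the full result
// and for a LIMIT 20 result. Table/column names and connection string are hypothetical.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using MySql.Data.MySqlClient;

class QueryProfiler
{
    static void Main()
    {
        const string connectionString = "Server=dbhost;Database=mydb;Uid=user;Pwd=secret;";
        TimeQuery(connectionString, "SELECT FolderName FROM Folders");          // all rows
        TimeQuery(connectionString, "SELECT FolderName FROM Folders LIMIT 20"); // first 20 only
    }

    static void TimeQuery(string connectionString, string sql)
    {
        var sw = Stopwatch.StartNew();
        using (var conn = new MySqlConnection(connectionString))
        {
            conn.Open();
            long openMs = sw.ElapsedMilliseconds; // time to open (or reuse a pooled) connection

            var names = new List<string>();
            using (var cmd = new MySqlCommand(sql, conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    names.Add(reader.GetString(0)); // dominated by network transfer + materialization
            }

            sw.Stop();
            Console.WriteLine($"{sql}: open={openMs} ms, total={sw.ElapsedMilliseconds} ms, rows={names.Count}");
        }
    }
}
```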
