Does an MSSQL database get slower as the number of tables in it grows?

Posted 2024-11-29 12:07:02

Does the performance of an MSSQL database drop as the number of tables inside it grows? Even if those tables contain little to no data (i.e. maybe only 100 rows each), there would be hundreds of tables, with possibly thousands more added per day.

The particular version of MSSQL being used is SQL Server 2008 Standard Edition.

The logic here is that each user signing up on the website gets their own set of tables for storing their data separately from all other users. This would allow a user account to be moved between servers, or completely deleted if need be, leaving little to no trace behind. It would also give the user the added comfort of knowing their data is completely segmented and thus harder to get at.

The other issue is that user account data is stored in tables similar to a vertical database: there is a content data table with only a few fields, but its entries can hold any type of data, from small posts, calendar entries and private messages to large multi-page articles written by the user. A single piece of content might add 10 to 15 rows to that table on average. So I would imagine that a single large table for all users would perform very poorly in that scenario.
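To make the shape of that table concrete, here is a minimal sketch of such a vertical content table (all names are hypothetical, not taken from the site). In the per-user design every account would get its own copy, e.g. dbo.Content_User123; a shared design would instead use one table of the same shape with an extra UserId column:

    -- Hypothetical vertical content table for one user's account.
    -- Each content block is spread over 10-15 rows, one row per field.
    CREATE TABLE dbo.Content_User123 (
        ContentId   INT           NOT NULL,  -- groups the rows that make up one content block
        FieldName   NVARCHAR(64)  NOT NULL,  -- e.g. 'Title', 'Body', 'CreatedAt'
        FieldValue  NVARCHAR(MAX) NULL,      -- anything from a short post to a multi-page article
        CONSTRAINT PK_Content_User123 PRIMARY KEY (ContentId, FieldName)
    );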

The data is only accessible via an API built for the site, which pulls data specific to the user and verifies access rights on each call. New types of data may be added in the future, which is why each user has a content table along with several others for settings and the like. Each user ends up with roughly 15 tables covering the various management aspects of the site. For example, the API allows you to request content block x from user y as user z; it verifies your permissions and pulls all of block x's data fields from the table.
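Purely as an illustration of that access pattern, here is a sketch of what such an API call might boil down to on the database side under the per-user-table design. The procedure name, the dbo.ContentAccess permission table and the Content_User<N> naming scheme are assumptions made for the example, not the site's actual API. Note that because each owner has a private table, the table name has to be assembled at runtime:

    CREATE PROCEDURE dbo.GetContentBlock
        @RequesterId INT,   -- user z, the caller
        @OwnerId     INT,   -- user y, who owns the content
        @ContentId   INT    -- content block x
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Verify the caller is allowed to read this block (hypothetical permission table).
        IF NOT EXISTS (SELECT 1
                       FROM dbo.ContentAccess
                       WHERE OwnerId = @OwnerId
                         AND ContentId = @ContentId
                         AND GranteeId = @RequesterId)
        BEGIN
            RAISERROR('Access denied.', 16, 1);
            RETURN;
        END;

        -- Pull every field row of block x from the owner's private content table.
        -- The per-user table name forces dynamic SQL here.
        DECLARE @sql NVARCHAR(MAX) =
            N'SELECT FieldName, FieldValue FROM dbo.'
            + QUOTENAME(N'Content_User' + CAST(@OwnerId AS NVARCHAR(12)))
            + N' WHERE ContentId = @ContentId;';

        EXEC sp_executesql @sql, N'@ContentId INT', @ContentId = @ContentId;
    END;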

So, ultimately, would the ever-increasing number of tables slow the system down substantially? Or would putting such a large amount of data in a single table cause the bigger slowdown?

Comments (1)

时间海 2024-12-06 12:07:02

No, not by any relevant margin.

Your logic, though, is still faulty: you lose a lot of SQL functionality, or have to build tons of UNIONs for cross-user queries. I would fire anyone coming up with a concept like that unless they had a VERY special case, which with 99.99% certainty you won't have. It's on the same level as developing 16-bit software for Windows today.
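To make that concrete, here is what a simple cross-user query (say, listing every content block's title) looks like under the two designs; all table names are hypothetical:

    -- Per-user tables: one UNION ALL branch per registered account,
    -- typically generated with dynamic SQL and regenerated as users come and go.
    SELECT 123 AS UserId, ContentId, FieldValue AS Title
    FROM dbo.Content_User123 WHERE FieldName = 'Title'
    UNION ALL
    SELECT 124, ContentId, FieldValue
    FROM dbo.Content_User124 WHERE FieldName = 'Title'
    UNION ALL
    SELECT 125, ContentId, FieldValue
    FROM dbo.Content_User125 WHERE FieldName = 'Title';
    -- ...and so on, one branch for every user.

    -- One shared table keyed by UserId (same columns plus a UserId column):
    -- the same question is a single indexed query, and per-user isolation
    -- is just a WHERE clause plus the access checks the API already does.
    SELECT UserId, ContentId, FieldValue AS Title
    FROM dbo.UserContent
    WHERE FieldName = 'Title';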

So, in short:

  • No, no significant performance drop.

  • Still a bad design decision.
