Using SQL Server partitioning when storing 1,000 records per second

Posted 2024-07-14 04:21:43

I read your article (SQL Server partitioning: not the answer to everything) and I'm wondering whether partitioning makes sense for my case.

I have to store about 1,000 records per second. The data is the location of mobile nodes, and it is making my database very large.

Do you think I should partition my database? (I will have a lot of reporting in the future.)


Comments (4)

寒尘 2024-07-21 04:21:43

1,000 a second isn't that much.

  • Is it every second, 24/7?
  • Is it within a defined window?
  • Is it a peak of 1,000 per second but usually less?

We have a recent system growing at 20 million rows/month (after tidy-ups of, say, another 50-80 million), and we're not considering anything like partitioning.
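To see why those three questions matter, here is a back-of-the-envelope estimate of the volume, assuming the 1,000 rows/sec is sustained around the clock (which is exactly what the questions above are trying to establish):

```sql
-- Rough volume estimate, assuming a sustained 1,000 rows/sec, 24/7.
-- CAST to bigint so the monthly figure does not overflow int.
SELECT
    1000 * 60 * 60 * 24                       AS rows_per_day,    -- 86,400,000
    CAST(1000 AS bigint) * 60 * 60 * 24 * 30  AS rows_per_month;  -- 2,592,000,000
```

A sustained 1,000 rows/sec is roughly 2.6 billion rows a month, which is a very different workload from a 1,000-row/sec peak inside a short collection window.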

爱你不解释 2024-07-21 04:21:43

That's a lot of data.

What is the lifecycle of the data, i.e. do you only need to store the records for a finite amount of time? For example, after a month perhaps certain data could be archived off or moved to a data warehouse?

Given the volume of data you intend to work with, you will probably want an architecture that scales easily. For this reason you may want to look at cloud services such as Amazon EC2, or SQL Data Services on the Azure platform.

http://aws.amazon.com/ec2/

http://www.microsoft.com/azure/data.mspx

Perhaps if you provide more specific details about what you are actually trying to do, i.e. what business process you want to support, we may be able to offer more specific help.

Without such details it is not possible to say whether SQL Server partitioning would be an appropriate design approach for you.
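If partitioning does turn out to fit, the archiving idea above maps naturally onto a monthly sliding window. The sketch below is only illustrative: the table name dbo.NodeLocation, its columns, and the boundary dates are assumptions, not anything taken from the question.

```sql
-- A minimal sliding-window sketch: monthly RANGE RIGHT partitioning on the
-- timestamp column, so an old month can be switched out and archived without
-- scanning or deleting rows in the live table.
CREATE PARTITION FUNCTION pf_NodeLocationByMonth (datetime2(0))
AS RANGE RIGHT FOR VALUES ('2024-07-01', '2024-08-01', '2024-09-01');

CREATE PARTITION SCHEME ps_NodeLocationByMonth
AS PARTITION pf_NodeLocationByMonth ALL TO ([PRIMARY]);

-- Hypothetical location table, partitioned on the recording timestamp.
CREATE TABLE dbo.NodeLocation
(
    NodeId     int          NOT NULL,
    RecordedAt datetime2(0) NOT NULL,
    Latitude   decimal(9,6) NOT NULL,
    Longitude  decimal(9,6) NOT NULL,
    CONSTRAINT PK_NodeLocation PRIMARY KEY CLUSTERED (RecordedAt, NodeId)
) ON ps_NodeLocationByMonth (RecordedAt);

-- Empty staging table with the same structure, on the same filegroup,
-- used as the target when a month's partition is switched out for archiving.
CREATE TABLE dbo.NodeLocation_Archive
(
    NodeId     int          NOT NULL,
    RecordedAt datetime2(0) NOT NULL,
    Latitude   decimal(9,6) NOT NULL,
    Longitude  decimal(9,6) NOT NULL,
    CONSTRAINT PK_NodeLocation_Archive PRIMARY KEY CLUSTERED (RecordedAt, NodeId)
) ON [PRIMARY];

-- Partition 1 holds everything before '2024-07-01'; switching it out is a
-- metadata-only operation.
ALTER TABLE dbo.NodeLocation
    SWITCH PARTITION 1 TO dbo.NodeLocation_Archive;
```

Because the switch is metadata-only, archiving a month of data is effectively instant; the trade-off is that every index on the table has to be aligned with the partition scheme.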

烟燃烟灭 2024-07-21 04:21:43

You might need to look at a different RDBMS. I would take a look at Vertica.

南巷近海 2024-07-21 04:21:43

Presuming the table in question is indexed, one of two options is certainly warranted once any of the indexes outgrows the available RAM. Not surprisingly, one of them is to add more RAM. The other, of course, is vertical partitioning.

gbn's answer raises some good points that you haven't mentioned, such as how many records per month (or week, or day) are being added. Richard's comment about how big the (average) record is also matters, particularly in terms of how big the average index record is, presuming the indexes do not include every field in the table.

gbn's answer also seems a bit reckless to me, however: growing at 20 million rows per month and not even "thinking of anything like partitioning". Without sufficient metrics, as alluded to above, that is a possible recipe for disaster. You should at least be thinking about it, even if only to determine how long you can sustain your current and/or expected rate of growth before needing to consider more RAM or partitioning.
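As a starting point for those metrics, one option is to compare index sizes against the physical memory on the box. A rough sketch (the table name dbo.NodeLocation is again an assumption):

```sql
-- Approximate size and row count of each index on the assumed table,
-- using 8 KB pages, for comparison against available RAM.
SELECT
    i.name                             AS index_name,
    SUM(ps.used_page_count) * 8 / 1024 AS index_size_mb,
    SUM(ps.row_count)                  AS row_count
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id
   AND i.index_id  = ps.index_id
WHERE ps.object_id = OBJECT_ID('dbo.NodeLocation')
GROUP BY i.name;

-- Physical memory visible to SQL Server, for the comparison.
SELECT total_physical_memory_kb / 1024 AS total_physical_memory_mb
FROM sys.dm_os_sys_memory;
```

Tracking these numbers over a few weeks also gives you the growth rate figures that gbn and Richard are asking about.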
