Maximum number of rows in an SQLite table

Published 2024-08-07 22:15:01

Given a simple sqlite3 table (create table data (key PRIMARY KEY, value)) with a key size of 256 bytes and a value size of 4096 bytes, what is the limit (ignoring disk space limits) on the maximum number of rows in this sqlite3 table? Are there limits associated with the OS (win32, Linux, or Mac)?


Comments (7)

〗斷ホ乔殘χμё〖 2024-08-14 22:15:01

As of January 2017, the sqlite3 limits page defines the practical limits for this question in terms of the maximum database size, which is 140 terabytes:

Maximum Number Of Rows In A Table

The theoretical maximum number of rows in a table is 2^64 (18446744073709551616, or about 1.8e+19). This limit is unreachable since the maximum database size of 140 terabytes will be reached first. A 140-terabyte database can hold no more than approximately 1e+13 rows, and then only if there are no indices and if each row contains very little data.

So with a maximum database size of 140 terabytes you'd be lucky to get anywhere near 1e+13 (roughly 10 trillion) rows, since in a useful table holding real data the row count is constrained by the size of that data. You could probably fit tens of billions of rows in a 140 TB database.
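As a back-of-the-envelope sketch of that constraint, using the key and value sizes from the question plus an assumed per-row overhead (the 16-byte overhead figure is an illustration, not an SQLite-documented number):

```python
# Rough estimate of how many rows fit in a 140 TB database.
# The per-row overhead is an assumption for illustration only.
MAX_DB_BYTES = 140 * 10**12   # 140 terabytes, per the sqlite3 limits page
KEY_BYTES = 256               # key size from the question
VALUE_BYTES = 4096            # value size from the question
ROW_OVERHEAD = 16             # assumed per-row record/cell overhead

row_bytes = KEY_BYTES + VALUE_BYTES + ROW_OVERHEAD
max_rows = MAX_DB_BYTES // row_bytes
print(f"approx. max rows: {max_rows:,}")  # tens of billions, not trillions
```

With ~4 KB per row the practical ceiling lands in the tens of billions, consistent with the answer above: the 1e+13 figure only applies to near-empty rows.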

撞了怀 2024-08-14 22:15:01

I have an SQLite database 3.3 GB in size with 25 million rows of numeric logs; I run calculations on them, and it works fast and well.

千里故人稀 2024-08-14 22:15:01

In SQLite3 the field size isn't fixed. The engine allocates only as much space as each cell needs.

For the file limits, see this SO question:
What are the performance characteristics of sqlite with very large database files?

你的背包 2024-08-14 22:15:01

I have a 7.5 GB SQLite database which stores 10.5 million rows. Querying is fast as long as you have the correct indexes. To get inserts to run quickly, use transactions. I also found it's better to create the indexes after all rows have been inserted; otherwise insert speed is quite slow.
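A minimal sketch of that loading pattern (the index name and row contents are illustrative), using Python's sqlite3 module:

```python
# Bulk-load pattern described above: one transaction for the whole batch,
# then create secondary indexes only after the load finishes.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (key PRIMARY KEY, value)")

rows = ((f"key-{i}", f"value-{i}") for i in range(100_000))
with conn:  # a single transaction avoids a commit (and fsync) per row
    conn.executemany("INSERT INTO data VALUES (?, ?)", rows)

# Building the index once over loaded data is cheaper than maintaining
# it incrementally during every insert.
conn.execute("CREATE INDEX idx_value ON data(value)")

(count,) = conn.execute("SELECT count(*) FROM data").fetchone()
print(count)  # 100000
conn.close()
```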

夏有森光若流苏 2024-08-14 22:15:01

The answer you want is right here.

Each OS you mentioned supports multiple file system types. The actual limits are per-filesystem, not per-OS. It's difficult to summarize the constraint matrix on SO, but while some file systems impose limits on file sizes, all major OS kernels today support file systems with extremely large files.

The maximum page size of an sqlite3 db is quite large (up to 65536 bytes, though this requires some configuration). I presume an index must specify a page number, but the result is likely to be that an OS or environment limit is reached first.
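For context, the 140 TB figure quoted in the first answer works out as roughly 2^31 pages times the 65536-byte maximum page size (a back-of-the-envelope reconstruction, not an official derivation):

```python
# Reconstructing the ~140 TB maximum database size from page geometry.
# The 2**31 page-count bound here is an assumption for illustration.
max_pages = 2**31
page_size = 65536            # maximum SQLite page size in bytes
db_bytes = max_pages * page_size
print(db_bytes)              # 140737488355328, i.e. about 140 terabytes
```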

熊抱啵儿 2024-08-14 22:15:01

Essentially no real limits.

See http://www.sqlite.org/limits.html for details.

风和你 2024-08-14 22:15:01

No hard limits, but after a certain point the sqlite database becomes impractical. PostgreSQL is by far the top free database for huge datasets. In my case that point is about 1 million rows, on my Linux 64-bit quad-core dual-processor machine with 8 GB RAM and Raptor hard disks. PostgreSQL is unbeatable, even by a tuned MySQL database. (Posted in 2011.)
