Need to load the entire PostgreSQL database into RAM

Posted on 2024-07-12 04:28:14


How do I put my whole PostgreSQL database into RAM for faster access? I have 8 GB of memory and I want to dedicate 2 GB to the DB. I have read about the shared_buffers setting, but it only caches the most-accessed fragments of the database. I need a solution where the whole DB is put into RAM, any read happens from the RAM DB, and any write operation writes first to the RAM DB and then to the DB on the hard drive (something like the default fsync = on with shared_buffers in the PostgreSQL configuration settings).


7 Answers

萌吟 2024-07-19 04:28:14


I have asked myself the same question for a while. One of the disadvantages of PostgreSQL is that it does not seem to support an IN MEMORY storage engine as MySQL does...

Anyway, I ran into an article a couple of weeks ago describing how this could be done, although it only seems to work on Linux. I can't really vouch for it, as I have not tried it myself, but it does seem to make sense, since a PostgreSQL tablespace is indeed assigned a mounted repository.

However, even with this approach, I am not sure you could put your index(es) into RAM as well; I do not think MySQL forces HASH index use with its IN MEMORY tables for nothing...

I also wanted to do a similar thing to improve performance, for I am also working with huge data sets. I am using Python; it has a dictionary data type, which is basically a hash table of {key: value} pairs. Using these is very efficient and effective. Basically, to get my PostgreSQL table into RAM, I load it into such a Python dictionary, work with it, and persist it to the DB once in a while; it's worth it if used well.

If you are not using Python, I am pretty sure there is a similar dictionary-mapping data structure in your language.

Hope this helps!
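A minimal sketch of that dictionary-cache pattern. The `load_rows`/`save_rows` callables are hypothetical stand-ins for whatever DB access layer you use (e.g. psycopg2 queries); this is illustrative, not a drop-in implementation:

```python
# "Load table into a dict, work on it in RAM, persist once in a while."
# load_rows returns (key, value) pairs; save_rows writes changed pairs back.

class DictCache:
    def __init__(self, load_rows, save_rows):
        self._save = save_rows
        self.rows = dict(load_rows())   # {key: value}: the in-RAM copy
        self.dirty = set()              # keys changed since the last persist

    def get(self, key):
        return self.rows[key]           # reads never touch the DB

    def put(self, key, value):
        self.rows[key] = value          # writes go to RAM first...
        self.dirty.add(key)

    def persist(self):
        # ...and are flushed to the DB "once in a while", as described above
        self._save({k: self.rows[k] for k in self.dirty})
        self.dirty.clear()
```

With psycopg2, `load_rows` might run a `SELECT id, payload FROM my_table` and `save_rows` an UPSERT loop. The trade-off is the obvious one: anything written since the last `persist()` is lost on a crash.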

蓝梦月影 2024-07-19 04:28:14


If you are pulling data by id, use memcached - http://www.danga.com/memcached/ + PostgreSQL.
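A sketch of the read-through pattern this suggests, in Python. `cache` can be anything with `get`/`set` (pymemcache's client has roughly this shape); `fetch_from_db` is a hypothetical stand-in for a `SELECT ... WHERE id = %s` query:

```python
# Read-through cache in front of PostgreSQL, keyed by row id.

def get_by_id(cache, fetch_from_db, row_id, ttl=300):
    key = f"row:{row_id}"
    value = cache.get(key)
    if value is not None:
        return value                  # cache hit: no DB round-trip
    value = fetch_from_db(row_id)     # cache miss: hit PostgreSQL once
    if value is not None:
        cache.set(key, value, ttl)    # populate for subsequent readers
    return value
```

This only accelerates reads; writes still go to the database, and cached rows can be stale until the TTL expires or you invalidate the key on update.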

宣告ˉ结束 2024-07-19 04:28:14


Set up an old-fashioned RAMdisk and tell pg to store its data there.

Be sure you back it up well though.
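A sketch of how this could look on Linux, assuming a tmpfs mount at /mnt/ramdisk owned by the postgres OS user (all names and paths here are hypothetical). Anything placed in such a tablespace vanishes on reboot, hence the backup warning:

```sql
-- Assumes: mount -t tmpfs -o size=2g tmpfs /mnt/ramdisk  (run as root),
-- with the directory chown'd to the postgres OS user.
CREATE TABLESPACE ram_space LOCATION '/mnt/ramdisk/pg';

-- Hypothetical table placed on the RAM-backed tablespace
CREATE TABLE hot_lookup (
    id   integer PRIMARY KEY,
    data text
) TABLESPACE ram_space;
```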

风向决定发型 2024-07-19 04:28:14


Perhaps something like a Tangosol Coherence cache if you're using Java.

梦境 2024-07-19 04:28:14


With only an 8GB database, if you've already optimized all the SQL activity and you're ready to solve query problems with hardware, I suggest you're in trouble. This is just not a scalable solution in the long term. Are you sure there is nothing you can do to make substantial differences on the software and database design side?

乞讨 2024-07-19 04:28:14


I haven't tried this myself (yet) but:

Theoretically, it should be possible to combine the two.

If you do this, you might also want to tweak seq_page_cost and random_page_cost to reflect the relative storage costs. See https://www.postgresql.org/docs/current/runtime-config-query.html

The pre-existing advice for query optimization and increasing shared_buffers still stands, though. The chances are that if you're having these problems on a database this small, simply putting it into RAM isn't the right fix.
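As an illustration only (the values are placeholders, not recommendations), the relevant postgresql.conf knobs might look like:

```ini
# Hypothetical postgresql.conf fragment for RAM-backed storage
shared_buffers = 2GB        # the 2 GB the question wants to dedicate
seq_page_cost = 1.0         # default is 1.0
random_page_cost = 1.1      # default is 4.0; in RAM, random reads cost
                            # about the same as sequential ones
```

If only part of the data lives on fast storage, these costs can instead be set per tablespace with `ALTER TABLESPACE name SET (random_page_cost = 1.1)`.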

带上头具痛哭 2024-07-19 04:28:14


One solution is to use the Fujitsu version of PostgreSQL, which supports in-memory columnstore indexes...
https://www.postgresql.fastware.com/in-memory-columnar-index-brochure

But it costs a lot...

Or run MS SQL Server with the In-Memory tables feature... even the free Express edition has it!
