How do I reuse the result of a query within a PostgreSQL transaction?
I need to reuse the result of a massive nested SELECT query in other queries within one transaction. Is it possible?
--
Thanks!
2 Answers
In a stored procedure/function you can do this:
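Here is a minimal sketch of the idea, assuming hypothetical table and column names (listing, contact, listing_id): run the big query once, keep only the ids in an array, then open several cursors over that array.

    -- Sketch only: table/column names (listing, contact, listing_id) are assumed.
    CREATE OR REPLACE FUNCTION search_listings(p_limit int, p_offset int)
    RETURNS SETOF refcursor
    LANGUAGE plpgsql AS $$
    DECLARE
        ids        bigint[];
        c_listings refcursor := 'listings_cur';
        c_contacts refcursor := 'contacts_cur';
    BEGIN
        -- The big slow query runs once, touching only the columns needed
        -- to filter and sort, and keeps nothing but the ids.
        SELECT array_agg(id) INTO ids
        FROM (
            SELECT id
            FROM   listing
            WHERE  price < 500000            -- stands in for the real gist/geo search
            ORDER  BY price
            LIMIT  p_limit OFFSET p_offset
        ) q;

        -- Reuse the id list in as many queries as you like.
        OPEN c_listings FOR
            SELECT * FROM listing WHERE id = ANY (ids);
        RETURN NEXT c_listings;

        OPEN c_contacts FOR
            SELECT * FROM contact WHERE listing_id = ANY (ids);
        RETURN NEXT c_contacts;
    END;
    $$;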
That's pretty useful for gathering a list of PKs (using a big slow query) and running several queries against it, especially since a plpgsql function can return several result sets to the client (RETURN SETOF refcursor).
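Called inside a transaction, the client first reads the cursor names and then fetches each result set (still using the hypothetical function sketched above):

    BEGIN;
    SELECT search_listings(50, 0);   -- returns the rows 'listings_cur' and 'contacts_cur'
    FETCH ALL FROM listings_cur;     -- first result set: the listings
    FETCH ALL FROM contacts_cur;     -- second result set: their contacts
    COMMIT;                          -- the cursors only live until the end of the transaction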
For instance, I grab 50 real estate listing ids using a big search query (gist indexes and geolocalization); the query involves many columns, joins, sorts and hashes, with a final LIMIT/OFFSET. It turns out to be quite a lot faster not to drag all the columns through all of that: use only the columns needed by the search, grab the list of ids, apply LIMIT/OFFSET, and then go back and fetch all the columns for just those ids.
Then, using this list of ids, I grab info from other tables, like contacts, phone numbers, etc. Since one listing can have several phone numbers or contacts, it's easier and faster to return those separately through another cursor and let the application put it all back together than to use something like array_agg() to return a list of phone numbers in each result row.
The good thing is that you get to choose whether to use pre-prepared statements, or you can use EXECUTE instead, so postgres can replan the queries knowing the length of the array if you expect it to sometimes be very large.
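As a sketch of that variant, the contacts cursor from the function above could be opened through dynamic SQL instead:

    -- Dynamic SQL: the statement is planned at OPEN time, with the actual array in hand.
    OPEN c_contacts FOR EXECUTE
        'SELECT * FROM contact WHERE listing_id = ANY ($1)'
        USING ids;
    RETURN NEXT c_contacts;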
Another solution is simply to
You could use a temporary table:
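A minimal sketch, again with hypothetical names (search_result, listing, contact): materialize the result of the big query once, then query it as often as needed inside the same transaction.

    BEGIN;

    -- The inner SELECT stands in for the massive nested query.
    CREATE TEMPORARY TABLE search_result AS
        SELECT id
        FROM   listing
        WHERE  price < 500000
        LIMIT  50;

    ANALYZE search_result;   -- give the planner row estimates before reusing it

    SELECT c.*
    FROM   contact c
    JOIN   search_result r ON r.id = c.listing_id;

    -- ...more queries against search_result...

    COMMIT;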
The temporary table is also dropped when you close your connection.