I have created a table T, which has an index on a column C (a btree index), but when I run the select query this index is not being used.

Ex:

Explain select * from T where C='xxx'

This searches all the segments sequentially, without considering the index which I have created.

I have used the following flags

enable_seqscan = off
enable_bitmapscan = off
enable_indexscan = on
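
Put together, the session looks roughly like this (a sketch only; it assumes the flags are set at session level, and T, C and 'xxx' stand in for the real names and value):

  -- planner settings used for the test (session level)
  SET enable_seqscan = off;
  SET enable_bitmapscan = off;
  SET enable_indexscan = on;
  -- the plan still shows a sequential scan across the segments
  EXPLAIN SELECT * FROM T WHERE C = 'xxx';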

Am I missing anything? Kindly explain.

Thanks
Ganesh.R

Comments (4)

桃扇骨 2024-12-03 16:20:21

It's possible that the query optimizer, for whatever reason, thinks that it's better not to use the index. Also, you may need to do an ANALYZE on the table if the statistical metadata is out of date. See this article (or others like it) for more detailed information.
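
For example, refreshing the statistics and re-checking the plan could look like this (a minimal sketch using the table and query from the question):

  -- refresh the planner statistics for the table
  ANALYZE T;
  -- see whether the planner now considers the index
  EXPLAIN SELECT * FROM T WHERE C = 'xxx';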

纵性 2024-12-03 16:20:21

Without EXPLAIN ANALYZE it is quite hard to tell why, but a few points:

  • GP uses a very high random_page_cost while seq_page_cost is 1. The default value for random_page_cost is 100, which strongly discourages the optimizer from using index scans (see the sketch after this list)
  • enable_seqscan = off doesn't turn off seq scans completely; they are just heavily penalised
  • if the table is small (100-10k records) it might be faster to read it sequentially and ignore the index altogether
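
As an illustration of the cost point above, you could check and lower random_page_cost for the current session and look at the plan again (a sketch only; the value 4 is just an example, not a recommendation):

  -- inspect the current cost settings
  SHOW random_page_cost;
  SHOW seq_page_cost;
  -- lower the random I/O cost estimate for this session only (example value)
  SET random_page_cost = 4;
  EXPLAIN SELECT * FROM T WHERE C = 'xxx';
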
梦醒时光 2024-12-03 16:20:21

Unlike traditional RDBMSs, an index may not be the preferable way to access data in Greenplum.

Greenplum is tuned out of the box to prefer table scans over index scans, and you have to do a lot of tweaking to change that. You can set additional parameters to help the GP optimizer choose indexes, including set enable_nestloop on, cpu_index_tuple_cost, and others. Check Appendix D of the GP Admin Guide for the full set of tunable parameters.

Also, how do you have the data distributed? That can play a part in how the optimizer is choosing to process your query.
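
A rough sketch of the checks this answer suggests; it assumes the gp_distribution_policy catalog is available in your GP version, and the parameter values are only examples:

  -- how is the table distributed across segments? (Greenplum catalog)
  SELECT * FROM gp_distribution_policy WHERE localoid = 'T'::regclass;
  -- planner knobs mentioned above, set for this session (example values)
  SET enable_nestloop = on;
  SET cpu_index_tuple_cost = 0.001;
  EXPLAIN SELECT * FROM T WHERE C = 'xxx';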

剑心龙吟 2024-12-03 16:20:21

If your table is partitioned, there is one more possible reason for your index not being used: your table has an index, but some or all of your partitions do not. You can check this by looking at the system view pg_indexes. Are there entries for the partitions?
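
For example (a sketch; the LIKE pattern is just a placeholder for your partition naming scheme):

  -- list the indexes on the parent table and on its partitions
  SELECT tablename, indexname
  FROM pg_indexes
  WHERE tablename LIKE 't%';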

The root cause for this problem probably is that alter table TABLE add partition... does not automatically create the indexes that you have defined for TABLE.

There are two solutions:

  • create index INDEXNAME on PARTITIONTABLENAME(ROWLIST..) after adding a partition. Search the system view pg_partitions to get the PARTITIONTABLENAME! It is not the same as the PARTITIONNAME (see the sketch after this list).
  • Delay creating the index on your table until you have added all the partitions you will need. That's because creating an index on a table automatically creates indexes on all existing partitions.
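
A sketch of the first option, with placeholder names for the table, partition and index:

  -- find the real table name of the partition (not the same as the partition name)
  SELECT partitionname, partitiontablename
  FROM pg_partitions
  WHERE tablename = 't';
  -- then create the index directly on that partition table (placeholder names)
  CREATE INDEX t_part1_c_idx ON t_1_prt_part1 (c);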

By the way, dropping the index on the table does not drop the indexes on the partitions.

I'm sorry I cannot give you any references to the GP Admin Guide as I'm either blind or wrong or the Admin Guide ignores this matter completely.
