Optimizing an Oracle 11g procedure

Posted 2024-12-23 12:21:35


I have a procedure to find the first, last, max and min prices for a series of transactions in a very large table which is organized by date, object name, and a code. I also need the sum of quantities transacted. There are about 3 billion rows in the table and this procedure takes many days to run. I would like to cut that time down as much as possible. I have an index on the distinct fields in the trans table, and looking at the explain plan on the select portion of the queries, the index is being used. I am open to suggestions on an alternate approach. I use Oracle 11g R2. Thank you.

    declare
        cursor c_iter is select distinct dt, obj, cd from trans;
        r_iter c_iter%ROWTYPE;
        v_fir  number(15,8);
        v_las  number(15,8);
        v_max  number(15,8);
        v_min  number(15,8);
        v_tot  number;
    begin
        open c_iter;
        loop
            fetch c_iter into r_iter;
            exit when c_iter%NOTFOUND;

            -- first and last price for the group, ordered by sequence number
            select max(fir), max(las) into v_fir, v_las
            from ( select
                       first_value(prc) over (order by seq)      as "FIR",
                       first_value(prc) over (order by seq desc) as "LAS"
                   from trans
                   where dt = r_iter.DT and obj = r_iter.OBJ and cd = r_iter.CD );

            -- high, low, and total quantity for the same group
            select max(prc), min(prc), sum(qty) into v_max, v_min, v_tot
            from trans
            where dt = r_iter.DT and obj = r_iter.OBJ and cd = r_iter.CD;

            insert into stats (obj, dt, cd, fir, las, max, min, tot)
            values (r_iter.OBJ, r_iter.DT, r_iter.CD, v_fir, v_las, v_max, v_min, v_tot);

            commit;
        end loop;
        close c_iter;
    end;


Answers (3)

莳間冲淡了誓言ζ 2024-12-30 12:21:35
alter session enable parallel dml;

insert /*+ append parallel(stats)*/
into stats(obj, dt, cd, fir, las, max, min, tot)
select /*+ parallel(trans) */ obj, dt, cd
    ,max(prc) keep (dense_rank first order by seq) fir
    ,max(prc) keep (dense_rank first order by seq desc) las
    ,max(prc) max, min(prc) min, sum(qty) tot
from trans
group by obj, dt, cd;

commit;

  • A single SQL statement is usually significantly faster than multiple SQL statements. They sometimes require more resources, like more temporary tablespace, but your distinct cursor is probably already sorting the entire table on disk anyway.
  • You may want to also enable parallel DML and parallel query, although depending on your object and system settings this may already be happening. (And it may not necessarily be a good thing, depending on your resources, but it usually helps large queries.)
  • Parallel write and APPEND should improve performance if the SQL writes a lot of data, but it also means the newly inserted data will not be recoverable until the next backup. (Parallel DML will automatically use direct-path writes, but I usually include APPEND anyway in case the parallelism doesn't work correctly.)

There's a lot to consider, even for such a small query, but this is where I'd start.
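Before committing to the full run, it may be worth confirming that the optimizer actually chooses a parallel plan. A quick sketch of that check, assuming the same trans table, using EXPLAIN PLAN and DBMS_XPLAN:

    explain plan for
    select /*+ parallel(trans) */ obj, dt, cd,
           max(prc), min(prc), sum(qty)
    from trans
    group by obj, dt, cd;

    select * from table(dbms_xplan.display);

In the displayed plan, PX SEND / PX RECEIVE / PX BLOCK ITERATOR steps indicate that parallel query is in effect; if they are absent, the hint or the session settings are not taking hold.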

メ斷腸人バ 2024-12-30 12:21:35


Not the solid answer I'd like to give, but a few things to consider:

The first would be using a bulk collect. However, since you're using 11g, hopefully this is already being done for you automatically.

Do you really need to commit after every single iteration? I could be wrong, but am guessing this is one of your top time consumers.

Finally, +1 for jonearles' answer. (I wasn't sure if I'd be able to write everything into a single SQL query, but I was going to suggest this as well.)
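If the PL/SQL loop is kept at all, the bulk-collect idea above might be sketched like this, assuming the same cursor, fetching in batches of 1000 and committing once at the end instead of per iteration (the per-group aggregation and insert are elided, as in the original block):

    declare
        cursor c_iter is select distinct dt, obj, cd from trans;
        type t_rows is table of c_iter%ROWTYPE;
        l_rows t_rows;
    begin
        open c_iter;
        loop
            -- array fetch: one round trip per 1000 groups instead of per group
            fetch c_iter bulk collect into l_rows limit 1000;
            exit when l_rows.count = 0;
            for i in 1 .. l_rows.count loop
                null;  -- per-group aggregation and insert, as in the original block
            end loop;
        end loop;
        close c_iter;
        commit;  -- single commit replaces the commit inside the loop
    end;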

长伴 2024-12-30 12:21:35


You could try and make the query run in parallel, there is a reasonable Oracle White Paper on this here. This isn't an Oracle feature that I've ever had to use myself so I've no first hand experience of it to pass on. You will also need to have enough resources free on the Oracle server to allow you run the parallel processes that this will create.
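One way to experiment with this without editing the statement itself, assuming enough parallel execution servers are configured on the instance, is to force it at the session level (the degree of 4 here is just an illustrative value):

    alter session force parallel query parallel 4;

Queries run in that session will then be considered for parallel execution at the requested degree, subject to the server's parallel_max_servers and related settings.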
