Oracle SQL*Plus spooling

Posted 2024-08-27 05:27:37

I'm using SQL*Plus to execute a query (a SELECT) and dump the result into a file using the SPOOL option.
I have about 14 million rows, and the dump takes about 12 minutes.
I was wondering if there is anything I can do to make the dump faster?

Here are my SQL*Plus options:

whenever sqlerror exit sql.sqlcode
set pagesize 0
set linesize 410
set trimspool on
set heading on
set feedback off
set echo off
set termout off
spool file_to_dump_into.txt
select * from mytable;

Thanks.

7 Answers

々眼睛长脚气 2024-09-03 05:27:37

Are you concatenating & delimiting your columns, or are you exporting fixed-width?

See this documentation on SQL*Plus Script Tuning. Specific to your script, here are a few possible ways to speed it up:

  1. Make sure LINESIZE is as small as possible. Add up your maximum column lengths (plus delimiters if not fixed-width). This can have a dramatic effect on performance, as SQL*Plus allocates that amount of memory for every exported line. 410 isn't that big, but if you can decrease it, that would help. In my experience this has made a big difference.
  2. Don't turn TRIMSPOOL on. This can also have a big impact. Each line will then be padded out to LINESIZE, but with an optimal linesize, and depending on how you're using the file, that may be acceptable. However, if you want to eliminate trailing spaces entirely, it can often be faster to trim them with other methods post-export.
  3. Play around with ARRAYSIZE. It may help (a little). It sets the fetch size for SQL*Plus; the default is 15 rows. Bumping it to, say, 100 may help, but going too large might decrease speed. A sketch combining these settings follows this list.
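
Putting those suggestions together, here is a minimal sketch of a tuned version of the question's script. The LINESIZE value of 240 is only a placeholder for whatever the real maximum record width turns out to be, and the file and table names are simply the ones from the question:

whenever sqlerror exit sql.sqlcode
set pagesize 0
-- placeholder value: use the actual sum of your column widths (plus delimiters)
set linesize 240
-- leave lines padded to linesize; trim trailing blanks after the export if needed
set trimspool off
-- fetch 100 rows per round trip instead of the default 15
set arraysize 100
set heading on
set feedback off
set echo off
set termout off
spool file_to_dump_into.txt
select * from mytable;
spool off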

Hope this helps!

蓝眼泪 2024-09-03 05:27:37

You might find it quicker to use UTL_FILE, but probably not that much faster.

In my test it was slightly faster over about 20k rows; scale that out over 14 million rows, though, and it might be worth it.

I believe if you want to get any quicker than this, the way to go would be Pro*C, but I haven't got into that, so I can't really advise.

set pagesize 1000
set FLUSH OFF
drop user usera cascade;
create user usera default tablespace users identified by abc123;
grant create session to usera;
grant resource to usera;

create or replace directory testdir as '/tmp';
grant read,write on directory testdir to usera;
grant execute on UTL_FILE to usera;

connect usera/abc123;

set timing on

spool /tmp/spooltest.txt
select object_name from all_objects;
spool off

DECLARE
  v_file UTL_FILE.FILE_TYPE;
  TYPE t_col IS TABLE OF all_objects.object_name%TYPE INDEX BY PLS_INTEGER;
  v_object_names t_col;

BEGIN
  -- open the file for writing in the TESTDIR directory object created above
  v_file := UTL_FILE.FOPEN('TESTDIR', 'utlfiletext.txt', 'w');

  -- load the whole result set into memory; for something like 14 million rows
  -- you would want to fetch in chunks with BULK COLLECT ... LIMIT instead
  SELECT object_name BULK COLLECT INTO v_object_names
  FROM all_objects;

  -- autoflush FALSE: don't flush to disk on every line
  FOR idx IN 1 .. v_object_names.COUNT LOOP
    UTL_FILE.PUT_LINE(v_file, v_object_names(idx), FALSE);
  END LOOP;

  UTL_FILE.FCLOSE(v_file);
END;
/

The results: the top timing is from SQL*Plus spooling only, the bottom from UTL_FILE.

23931 rows selected.

Elapsed: 00:00:06.60

PL/SQL procedure successfully completed.

Elapsed: 00:00:05.45

入画浅相思 2024-09-03 05:27:37

With a typical query, 14M records means at least several hundred megabytes of data to fetch out of the server, pass across the connection, and save to disk.
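
As a rough sense of scale (the per-row byte counts here are only assumptions, since the real row width isn't given): at an average of 50 bytes of data per row, 14,000,000 rows × 50 bytes ≈ 700 MB; and with TRIMSPOOL OFF and the full 410-character LINESIZE from the question, the spool file itself would be padded out to roughly 14,000,000 × 410 bytes ≈ 5.7 GB.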

Given this, 12 minutes does not sound too much to me.

However, it is still possible that your query can be optimized. Could you please post it here?

转身泪倾城 2024-09-03 05:27:37

So is this going over the wire, or are you logged into the box that has the database? If you have access, maybe you can run your SQL*Plus session on the box where the database lives, zip the file up, and then send it to your local machine. It might be faster to send one big file over the wire instead of millions of smaller records. Of course this won't make it super fast, but it might shave some time off.

Also, with that much data, do you really need to spool it to a file? Can you do an export instead? A sketch of one server-side option follows.
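
One way to do an "export"-style unload in plain SQL (rather than the exp/expdp utilities, which may be what this answer had in mind) is to write the table into a Data Pump file through an external table. This is only a sketch: it assumes you can use a writable directory object such as the testdir created in an earlier answer, and mytable_dump / mytable_dump.dmp are placeholder names. The file lands on the database host in binary Data Pump format, not plain text, so it is something you compress and ship or re-import rather than read directly:

create table mytable_dump
  organization external (
    type oracle_datapump
    default directory testdir        -- assumes a writable directory object on the server
    location ('mytable_dump.dmp')    -- file is written on the database host, not the client
  )
  as select * from mytable;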

瑶笙 2024-09-03 05:27:37

You can enable output buffering by adding this to your script:

SET FLUSH OFF

But the result depends on your OS.

嗫嚅 2024-09-03 05:27:37

When getting a lot of results from a query in SQL*Plus, I've found that one thing that takes a lot of time is the actual displaying of the data. If you're spooling the data to a file, you can SET TERMOUT OFF, and the query runs much faster since it doesn't have to spend the time to write it to the screen.

单身情人 2024-09-03 05:27:37

Some options are available from Tom Kyte, who is a real guru.
