SQL*Plus - Spooling to multiple files from a PL/SQL block

Published 2024-08-28 21:22:12


I have a query that returns a lot of data into a CSV file - so much, in fact, that Excel can't open it: there are too many rows. Is there a way to control spool so that it starts a new file every time 65,000 rows have been processed? Ideally, I'd like my output in sequentially named files, such as large_data_1.csv, large_data_2.csv, large_data_3.csv, etc...

I could use dbms_output in a PL/SQL block to control how many rows are output, but then how would I switch files, as spool does not seem to be accessible from PL/SQL blocks?

(Oracle 10g)

UPDATE:

I don't have access to the server, so writing files to the server would probably not work.

UPDATE 2:

Some of the fields contain free-form text, including linebreaks, so counting line breaks AFTER the file is written is not as easy as counting records WHILE the data is being returned...


Comments (6)

风流物 2024-09-04 21:22:13


utl_file is the package you are looking for. You can write a cursor, loop over the rows (writing them out), and when mod(num_rows_written, num_per_file) = 0 it's time to start a new file. It works fine within PL/SQL blocks.

Here's the reference for utl_file:
http://www.adp-gmbh.ch/ora/plsql/utl_file.html

NOTE:
I'm assuming here that it's OK to write the files out to the server.
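
A minimal sketch of that approach, assuming a writable Oracle directory object (here called DATA_DUMP_DIR - a hypothetical name) and a placeholder query/column standing in for the real ones:

```sql
DECLARE
    k_rows_per_file CONSTANT PLS_INTEGER := 65000;
    v_file          UTL_FILE.FILE_TYPE;
    v_file_num      PLS_INTEGER := 0;
    v_rows          PLS_INTEGER := 0;
BEGIN
    FOR rec IN (SELECT data_string FROM some_table) LOOP  -- placeholder query
        -- Every k_rows_per_file rows, close the current file and open the next
        IF MOD(v_rows, k_rows_per_file) = 0 THEN
            IF UTL_FILE.IS_OPEN(v_file) THEN
                UTL_FILE.FCLOSE(v_file);
            END IF;
            v_file_num := v_file_num + 1;
            v_file := UTL_FILE.FOPEN('DATA_DUMP_DIR',
                                     'large_data_' || v_file_num || '.csv',
                                     'w');
        END IF;
        UTL_FILE.PUT_LINE(v_file, rec.data_string);
        v_rows := v_rows + 1;
    END LOOP;
    IF UTL_FILE.IS_OPEN(v_file) THEN
        UTL_FILE.FCLOSE(v_file);
    END IF;
END;
/
```

Note that the files land on the database server's filesystem, which is exactly the limitation the question's update points out.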

暗藏城府 2024-09-04 21:22:13


Have you looked at setting up an external data connection in Excel (assuming that the CSV files are only being produced for use in Excel)? You could define an Oracle view that limits the rows returned and also add some parameters in the query to allow the user to further limit the result set. (I've never understood what someone does with 64K rows in Excel anyway).

I feel that this is somewhat of a hack, but you could also use UTL_MAIL and generate attachments to email to your user(s). There's a 32K size limit to the attachments, so you'd have to keep track of the size in the cursor loop and start a new attachment on this basis.

隱形的亼 2024-09-04 21:22:13


While your question asks how to break the great volume of data into chunks Excel can handle, I would ask if there is any part of the Excel operation that could be moved into SQL (PL/SQL?) to reduce the volume of data. Ultimately it has to be reduced to be made meaningful to anyone. The database is a great engine to do that work on.

When you have reduced the data to more presentable volumes or even final results, dump it for Excel to make the final presentation.

This is not the answer you were looking for but I think it is always good to ask if you are using the right tool when it is getting difficult to get the job done.

無心 2024-09-04 21:22:12


Got a solution, don't know why I didn't think of this sooner...

The basic idea is that the master SQL*Plus script generates an intermediate script that will split the output into multiple files. Executing the intermediate script runs multiple queries with different ranges imposed on rownum, spooling to a different file for each query.

set termout off
set serveroutput on
set echo off
set feedback off
variable v_err_count number;
spool intermediate_file.sql
declare
     i number := 0;
     v_fileNum number := 1;
     v_range_start number := 1;
     v_range_end number := 1;
     k_max_rows constant number := 65536;
begin
    dbms_output.enable(10000);
    select count(*) 
    into :v_err_count
    from ...
    /* You don't need to see the details of the query... */

    while i <= :v_err_count loop

          v_range_start := i+1;
          if v_range_start <= :v_err_count then
            i := i+k_max_rows;
            v_range_end := i;

            dbms_output.put_line('set colsep ,  
set pagesize 0
set trimspool on 
set headsep off
set feedback off
set echo off
set termout off
set linesize 4000
spool large_data_file_'||v_fileNum||'.csv
select data_string
from (select rownum rn, data_object
      from 
      /* Details of query omitted */
     )
where rn >= '||v_range_start||' and rn <= '||v_range_end||';
spool off');
          v_fileNum := v_fileNum +1;
         end if;
    end loop;
end;
/
spool off
prompt     executing intermediate file
@intermediate_file.sql;
set serveroutput off
你在看孤独的风景 2024-09-04 21:22:12


Try this for a pure SQL*Plus solution...

set pagesize 0
set trimspool on  
set headsep off 
set feedback off
set echo off 
set verify off
set timing off
set linesize 4000

DEFINE rows_per_file = 50


-- Create an sql file that will create the individual result files
SET DEFINE OFF

SPOOL c:\temp\generate_one.sql

PROMPT COLUMN which_dynamic NEW_VALUE dynamic_filename
PROMPT

PROMPT SELECT 'c:\temp\run_#'||TO_CHAR( &1, 'fm000' )||'_result.txt' which_dynamic FROM dual
PROMPT /

PROMPT SPOOL &dynamic_filename

PROMPT SELECT *
PROMPT   FROM ( SELECT a.*, rownum rnum
PROMPT            FROM ( SELECT object_id FROM all_objects ORDER BY object_id ) a
PROMPT           WHERE rownum <= ( &2 * 50 ) )
PROMPT  WHERE rnum >= ( ( &3 - 1 ) * 50 ) + 1
PROMPT /

PROMPT SPOOL OFF

SPOOL OFF

SET DEFINE &


-- Define variable to hold number of rows
-- returned by the query
COLUMN num_rows NEW_VALUE v_num_rows

-- Find out how many rows there are to be
SELECT COUNT(*) num_rows
  FROM ( SELECT LEVEL num_files FROM dual CONNECT BY LEVEL <= 120 );


-- Create a master file with the correct number of sql files
SPOOL c:\temp\run_all.sql

SELECT '@c:\temp\generate_one.sql '||TO_CHAR( num_files )
                                   ||' '||TO_CHAR( num_files )
                                   ||' '||TO_CHAR( num_files ) file_name
  FROM ( SELECT LEVEL num_files 
           FROM dual 
        CONNECT BY LEVEL <= CEIL( &v_num_rows / &rows_per_file ) )
/

SPOOL OFF

-- Now run them all
@c:\temp\run_all.sql
倾城月光淡如水﹏ 2024-09-04 21:22:12

Use split on the resulting file.
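
For example, on a Unix-like system (a sketch; the demo input file stands in for the real spooled CSV). The question's second update applies here: split counts physical lines, so records containing embedded newlines can be broken across files.

```shell
# Demo input: in practice this would be the CSV spooled by SQL*Plus.
seq 1 10 > large_data.csv

# Split into fixed-size pieces (use -l 65000 for the real file).
split -l 4 large_data.csv part_

# Rename the pieces to large_data_1.csv, large_data_2.csv, ...
n=1
for f in part_*; do
    mv "$f" "large_data_${n}.csv"
    n=$((n + 1))
done
```

This runs entirely client-side, so it works even without access to the database server.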
