Oracle BLOB extraction is very slow
I am having performance issues extracting BLOBs from an Oracle 10gR2 (10.2.0.5) database I administer. I have around 400 files stored as BLOBs that I need to write out to the file system; my code is below. When I execute this procedure, the first 8 or so files are written within a couple of seconds, and from there things slow down dramatically, to roughly one file every 40 seconds after the first 8. That makes no sense to me: why would the first 8 files be fast and everything after them slow?

I have tried running this as a stored procedure, changing the UTL_FILE.fopen mode to "wb" (write binary), and writing to an NFS-mounted file system so as not to impede the performance of the database. None of this has had any impact. At this rate it will take 6 hours to extract 400 files that average around 1.5 MB each. Does anyone see anything wrong with my code, or know a better way to do this? By the way, I used the example code found here http://www.oracle-base.com/articles/9i/ExportBlob9i.php as a starting point.
Thanks for any help!
DECLARE
  TYPE comment_text IS TABLE OF documents.comment_text%TYPE;
  TYPE localdata    IS TABLE OF documents.localdata%TYPE;

  l_file     UTL_FILE.FILE_TYPE;
  l_buffer   RAW(32767);
  l_amount   BINARY_INTEGER := 32767;
  l_pos      INTEGER := 1;
  l_blob     localdata;
  l_fname    comment_text;
  l_blob_len INTEGER;
  l_x        NUMBER := 1;
BEGIN
  SELECT comment_text, localdata
  BULK COLLECT INTO l_fname, l_blob
  FROM documents
  WHERE user_id = 'BILLYBOB';

  IF SQL%ROWCOUNT = 0 THEN
    DBMS_OUTPUT.PUT_LINE('No records found!');
  ELSE
    FOR i IN l_fname.FIRST .. l_fname.LAST LOOP
      l_blob_len := DBMS_LOB.getlength(l_blob(i));
      DBMS_OUTPUT.PUT_LINE(l_blob_len);

      l_file := UTL_FILE.fopen('BLOBS', l_x || '_' || l_fname(i), 'w', 32767);
      l_pos  := 1;
      l_x    := l_x + 1;

      WHILE l_pos < l_blob_len LOOP
        DBMS_LOB.read(l_blob(i), l_amount, l_pos, l_buffer);
        UTL_FILE.put_raw(l_file, l_buffer, TRUE);
        l_pos := l_pos + l_amount;
      END LOOP;

      UTL_FILE.fclose(l_file);
    END LOOP;
  END IF;
END;
1 Answer
I'm pretty sure you shouldn't fetch all the BLOBs into an array at the start of the procedure. Since you read the BLOB data and never actually close any of the LOB locators, Oracle has to keep all of that information in memory. I would guess this is a case of memory pressure.
Try this instead:
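(A minimal sketch of the approach the answer describes, not the answerer's exact code: fetch the rows with a cursor FOR loop so only one LOB locator is open at a time instead of bulk-collecting all of them up front. It reuses the documents table, column names and BLOBS directory object from the question, and also resets l_amount before every read, since DBMS_LOB.read treats it as an IN OUT parameter that shrinks after a short final chunk.)

DECLARE
  l_file     UTL_FILE.FILE_TYPE;
  l_buffer   RAW(32767);
  l_amount   BINARY_INTEGER;
  l_pos      INTEGER;
  l_blob_len INTEGER;
  l_x        NUMBER := 1;
BEGIN
  -- Fetch one row at a time; only the current row's LOB locator is held open.
  FOR rec IN (SELECT comment_text, localdata
              FROM documents
              WHERE user_id = 'BILLYBOB')
  LOOP
    l_blob_len := DBMS_LOB.getlength(rec.localdata);
    l_file     := UTL_FILE.fopen('BLOBS', l_x || '_' || rec.comment_text, 'wb', 32767);
    l_pos      := 1;

    WHILE l_pos <= l_blob_len LOOP
      -- Reset the chunk size on every iteration: DBMS_LOB.read returns the
      -- actual number of bytes read in l_amount, so a short final chunk
      -- would otherwise shrink all subsequent reads.
      l_amount := 32767;
      DBMS_LOB.read(rec.localdata, l_amount, l_pos, l_buffer);
      UTL_FILE.put_raw(l_file, l_buffer, TRUE);
      l_pos := l_pos + l_amount;
    END LOOP;

    UTL_FILE.fclose(l_file);
    l_x := l_x + 1;
  END LOOP;
END;

The key difference from the question's version is that no collection of LOB locators is built up front, so memory use stays flat no matter how many rows match.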