Oracle: copying BLOB data from one table to another

Published 2024-11-30 20:50:53 · 189 words · 1 view · 0 comments


I have a table with a BLOB column (over 1,000,000 rows and 60 GB of data). I want to move most of the rows (not all of them) to another table. I tried an insert into X select * from Y statement, but it is too slow.

What is the fastest way?

I am on Oracle 10g or 11g.


Comments (3)

神魇的王 2024-12-07 20:50:53


Use the /*+ APPEND */ hint to get a direct-path insert. With the hint, Oracle loads data blocks directly above the high-water mark and can skip most redo generation (redo is fully skipped only when the table is NOLOGGING or the database runs in NOARCHIVELOG mode), which makes bulk inserts much faster:

insert  /*+ append */ into TABLE1
select  *
from    TABLE2
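Applied to the original question, a sketch might look like the following. Table names and the filter are placeholders, and note that after a direct-path insert you must commit before the same session can query or modify the target table again:

```sql
-- Optional: suppress redo for the load; only safe if you could re-run
-- the load after a media failure, since the rows are then unrecoverable.
ALTER TABLE target_table NOLOGGING;

-- Direct-path insert of the subset of rows to move
INSERT /*+ APPEND */ INTO target_table
SELECT *
FROM   source_table
WHERE  created_date < DATE '2011-01-01';   -- hypothetical filter

COMMIT;  -- mandatory before further access to target_table in this session

-- Restore normal logging afterwards
ALTER TABLE target_table LOGGING;
```

One caveat for this question specifically: BLOB columns have their own storage options, so to keep redo low for the LOB data as well, the LOB segment may also need to be defined NOCACHE NOLOGGING.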
北凤男飞 2024-12-07 20:50:53


It's a bit late to suggest something, but it may complement the solutions above: if your new (target) table has constraints, indexes, or triggers, try dropping or disabling them first, then load your bulk data, and finally re-create/re-enable the constraints, indexes, and triggers and gather fresh statistics on the table. This time-saving approach is only advisable when you have to copy the bulk data once, because while inserting each new record the DBMS enforces constraints and checks and maintains indexes, which slows the load down.
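As a sketch of the disable/reload/re-enable cycle described above (the constraint and index names are hypothetical):

```sql
-- Disable constraints and mark indexes unusable before the bulk load
ALTER TABLE target_table DISABLE CONSTRAINT target_fk;
ALTER INDEX target_idx UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;

-- ... perform the bulk insert here ...

-- Re-enable constraints, rebuild indexes, and refresh optimizer statistics
ALTER TABLE target_table ENABLE CONSTRAINT target_fk;
ALTER INDEX target_idx REBUILD;
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'TARGET_TABLE');
```

Rebuilding an index once over the full data set is generally much cheaper than maintaining it row by row during the insert, which is where the time saving comes from.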

痕至 2024-12-07 20:50:53


OK, we don't know your system, so it's hard to say much; the answer really depends on your environment. Regardless, here are some tests to show the time and resources your method takes compared with another method.

Let's call your method (a plain INSERT ... SELECT) method 1, and CREATE TABLE ... AS SELECT method 2.

20:43:24 SQL> set autotrace on;
20:43:30 SQL> alter session set SQL_TRACE=TRUE;

Session altered.

20:43:39 SQL> --let's make sure we are reading from disk (for arguments sake)
20:43:45 SQL> alter system flush shared_pool;

System altered.

20:43:45 SQL> alter system flush buffer_cache;

System altered.

20:43:45 SQL> 
20:43:45 SQL> --clear my examples
20:43:45 SQL> drop table t;
drop table t
           *
ERROR at line 1:
ORA-00942: table or view does not exist


20:43:49 SQL> drop table u;
drop table u
           *
ERROR at line 1:
ORA-00942: table or view does not exist


20:43:49 SQL> 
20:43:49 SQL> --create table u - we will populate this with random numbers
20:43:49 SQL> create table u (y varchar2(4000));

Table created.

20:43:50 SQL> 
20:43:50 SQL> --insert 1 million rows of random numbers
20:43:50 SQL> insert into u
20:43:50   2  (select dbms_random.normal
20:43:50   3  from dual
20:43:50   4  CONNECT BY level <= 1000000);

1000000 rows created.


Execution Plan
----------------------------------------------------------
Plan hash value: 1236776825

------------------------------------------------------------------------------
| Id  | Operation                     | Name | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | INSERT STATEMENT              |      |     1 |     2   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL      | U    |       |            |          |
|*  2 |   CONNECT BY WITHOUT FILTERING|      |       |            |          |
|   3 |    FAST DUAL                  |      |     1 |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(LEVEL<=1000000)


Statistics
----------------------------------------------------------
       4175  recursive calls
      58051  db block gets
      13118  consistent gets
         47  physical reads
   54277624  redo size
        675  bytes sent via SQL*Net to client
        647  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
         56  sorts (memory)
          0  sorts (disk)
    1000000  rows processed

20:44:21 SQL> 
20:44:21 SQL> --create table t - we will populate this from table u
20:44:21 SQL> create table t (x varchar2(4000));

Table created.

20:44:21 SQL> 
20:44:21 SQL> --let's make sure we are reading from disk (for arguments sake)
20:44:21 SQL> alter system flush shared_pool;

System altered.

20:44:21 SQL> alter system flush buffer_cache;

System altered.

20:44:26 SQL> 
20:44:26 SQL> --insert data from u to t (this is how you said you did this)
20:44:26 SQL> insert into t (select * from u);

1000000 rows created.


Execution Plan
----------------------------------------------------------
Plan hash value: 537870620

---------------------------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |      |   997K|  1905M|  1750   (1)| 00:00:21 |
|   1 |  LOAD TABLE CONVENTIONAL | T    |       |       |            |          |
|   2 |   TABLE ACCESS FULL      | U    |   997K|  1905M|  1750   (1)| 00:00:21 |
---------------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement (level=2)


Statistics
----------------------------------------------------------
       5853  recursive calls
      58201  db block gets
      24213  consistent gets
       6551  physical reads
   54591764  redo size
        681  bytes sent via SQL*Net to client
        599  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
         57  sorts (memory)
          0  sorts (disk)
    1000000  rows processed

20:44:41 SQL> 
20:44:41 SQL> 
20:44:41 SQL> --now let's start over with a different method
20:44:41 SQL> drop table t;

Table dropped.

20:44:48 SQL> drop table u;

Table dropped.

20:44:50 SQL> 
20:44:50 SQL> --create table u - we will populate this with random numbers
20:44:50 SQL> create table u (y varchar2(4000));

Table created.

20:44:51 SQL> 
20:44:51 SQL> --insert 1 million rows of random numbers
20:44:51 SQL> insert into u
20:44:51   2  (select dbms_random.normal
20:44:51   3  from dual
20:44:51   4  CONNECT BY level <= 1000000);

1000000 rows created.


Execution Plan
----------------------------------------------------------
Plan hash value: 1236776825

------------------------------------------------------------------------------
| Id  | Operation                     | Name | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | INSERT STATEMENT              |      |     1 |     2   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL      | U    |       |            |          |
|*  2 |   CONNECT BY WITHOUT FILTERING|      |       |            |          |
|   3 |    FAST DUAL                  |      |     1 |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(LEVEL<=1000000)


Statistics
----------------------------------------------------------
       2908  recursive calls
      58153  db block gets
      12831  consistent gets
         10  physical reads
   54284104  redo size
        683  bytes sent via SQL*Net to client
        647  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
         31  sorts (memory)
          0  sorts (disk)
    1000000  rows processed

20:45:20 SQL> 
20:45:20 SQL> --let's make sure we are reading from disk (for arguments sake)
20:45:20 SQL> alter system flush shared_pool;

System altered.

20:45:20 SQL> alter system flush buffer_cache;

System altered.

20:45:25 SQL> 
20:45:25 SQL> --create table t using table u
20:45:25 SQL> create table t as (select * from u);

Table created.

20:45:36 SQL> 
20:45:36 SQL> drop table t;

Table dropped.

20:45:41 SQL> drop table u;

Table dropped.

20:45:41 SQL> 
20:45:41 SQL> commit;

Commit complete.

20:45:41 SQL> spool off

OK, so we care about the two methods we tested, i.e.

insert into t (select * from u);

for which autotrace reports statistics, and

create table t as (select * from u);

for which autotrace reports nothing, since it is DDL.

Fortunately I also ran with sql_trace enabled, so I have TKPROF statistics for both.

This is what I get.
For "insert into t (select * from u);":

********************************************************************************

SQL ID: bjdnhkhq8r6h4
Plan Hash: 537870620
insert into t (select * from u)



call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.03          2          2          0           0
Execute      1      1.74       7.67       6201      22538      58121     1000000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      1.74       7.71       6203      22540      58121     1000000

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 91  

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  LOAD TABLE CONVENTIONAL  (cr=0 pr=0 pw=0 time=0 us)
      1   TABLE ACCESS FULL U (cr=4 pr=5 pw=0 time=0 us cost=1750 size=1997891896 card=997948)

********************************************************************************

and for "create table t as (select * from u)" we get:

********************************************************************************

SQL ID: asawpwvdj1nbv
Plan Hash: 2321469388
create table t as (select * from u)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.03          2          2          1           0
Execute      1      0.90       2.68       6372      12823       8573     1000000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.90       2.71       6374      12825       8574     1000000

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 91  

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  LOAD AS SELECT  (cr=13400 pr=6382 pw=6370 time=0 us)
1000000   TABLE ACCESS FULL U (cr=12640 pr=6370 pw=0 time=349545 us cost=1750 size=2159012856 card=1078428)

********************************************************************************

So what does this tell us? Well:

- Method 2 took about 65% less elapsed time than method 1 (2.71 s vs 7.71 s, a whole 5 seconds less for 1 million rows)
- Method 2 used about 48% less CPU time overall than method 1 (0.90 s vs 1.74 s)
- Method 2 did slightly more physical disk reads than method 1 (6,374 vs 6,203)
- Method 2 accessed far fewer buffers than method 1 (about 21,400 vs about 80,700 combined query + current gets)

Hope this helps you :)
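Applied to the original question, method 2 would look something like the following. The names and the filter are placeholders, and keep in mind that CTAS creates a brand-new table, so any constraints, indexes, and grants must be added afterwards:

```sql
-- Create the target table directly from the filtered source rows.
-- NOLOGGING and PARALLEL can speed this up further where the
-- environment permits (NOLOGGING trades away recoverability).
CREATE TABLE blob_archive NOLOGGING PARALLEL AS
SELECT *
FROM   blob_source
WHERE  created_date < DATE '2011-01-01';   -- hypothetical filter
```

CTAS uses the same direct-path LOAD AS SELECT mechanism seen in the TKPROF output above, which is why it generates far less undo and buffer traffic than a conventional insert.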
