MySQL large table insert problem

Published 2024-11-06 02:25:23


I am a developer and I am facing an issue while managing a table that has a large number of records.

I am executing a cron job to fill up data in a primary table (Table A), which has 5-6 columns and approximately 400,000 to 500,000 rows, and then creating another table; the data in this second table will continue to increase over time.

TABLE A contains the raw data, and my output table is TABLE B.

My cron script truncates the data in TABLE B and then inserts data using a SELECT query:

TRUNCATE TABLE_B;

INSERT INTO TABLE_B (field1, field2)
SELECT DISTINCT t1.field2, t2.field2
FROM TABLE_A AS t1
INNER JOIN TABLE_A AS t2 ON t2.field1 = t1.field1
WHERE t1.field2 <> t2.field2
GROUP BY t1.field2, t2.field2
ORDER BY COUNT(t1.field2) DESC;

The above SELECT query produces approximately 150,000 to 200,000 rows.

It takes too much time to populate TABLE B, and in the meantime, if my application tries to access TABLE B, its SELECT queries fail.

Running EXPLAIN on the query gives the following:

id | select_type | table | type | possible_keys | key          | key_len | ref              | rows   | Extra
 1 | PRIMARY     | T1    | ALL  | field1_index  | NULL         | NULL    | NULL             | 431743 | Using temporary; Using filesort
 1 | PRIMARY     | T2    | ref  | field1_index  | field1_index | 767     | DBNAME.T1.field1 | 1      | Using where
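Here type = ALL on T1 means a full scan of all ~431,743 rows, and Using temporary; Using filesort comes from the GROUP BY and ORDER BY on unindexed column combinations. One possible mitigation, sketched under the assumption that TABLE_A only has the single-column field1_index (the composite index name below is made up), is an index covering both the join column and the grouped column, so the self-join can be resolved from the index alone:

ALTER TABLE TABLE_A ADD INDEX idx_field1_field2 (field1, field2);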

Can someone please help me improve this process, or suggest alternatives to it?

Thanks

Suketu


Comments (2)

2024-11-13 02:25:23


You should do the whole process in a stored procedure.

Do not truncate such a large table. Instead, follow these steps (a SQL sketch follows the list):

  1. Copy the TableB structure to TableB_Copy.
  2. Drop TableB.
  3. Rename TableB_Copy to TableB.
  4. Disable the indexes on TableB.
  5. Insert the data from TableA into TableB.
  6. Re-create the indexes on TableB.
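A minimal sketch of these steps in SQL (assuming InnoDB unless noted; the column and index names beyond TableB/TableB_Copy are hypothetical):

-- 1. Copy the structure only; TableB_Copy starts empty.
CREATE TABLE TableB_Copy LIKE TableB;

-- 2-3. Drop the old table and move the empty copy into its place.
DROP TABLE TableB;
RENAME TABLE TableB_Copy TO TableB;

-- 4. Disable secondary indexes before the bulk load. Note that
-- DISABLE KEYS only has an effect on MyISAM tables; on InnoDB you
-- would instead DROP the secondary indexes here and re-add them
-- in step 6.
ALTER TABLE TableB DISABLE KEYS;

-- 5. Bulk-insert the derived data from TABLE_A.
INSERT INTO TableB (field1, field2)
SELECT DISTINCT t1.field2, t2.field2
FROM TABLE_A AS t1
INNER JOIN TABLE_A AS t2 ON t2.field1 = t1.field1
WHERE t1.field2 <> t2.field2
GROUP BY t1.field2, t2.field2;

-- 6. Rebuild the indexes after loading.
ALTER TABLE TableB ENABLE KEYS;

A variation on the same idea is to populate TableB_Copy first and then swap it in with a single RENAME TABLE TableB TO TableB_old, TableB_Copy TO TableB; since RENAME TABLE is atomic, the old TableB stays readable for the application during the whole rebuild.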

2024-11-13 02:25:23


In my view, the solution would be something like this:

SELECT DISTINCT t1.field2, t2.field2
FROM TABLE_A AS t1
INNER JOIN TABLE_A AS t2 ON t2.field1 = t1.field1
WHERE t1.field2 <> t2.field2
GROUP BY t1.field2, t2.field2
ORDER BY COUNT(t1.field2) DESC
INTO OUTFILE 'PATH-TO-FILE';

For instance, the file could be "C:\TEMP\DATA1.SQL". This query creates a new tab-delimited file that can then be loaded into any table.
Now, to import the data into the table:

LOAD DATA INFILE 'PATH-TO-FILE'
INTO TABLE table_name;

With this query the data will be inserted, and meanwhile you will still be able to use the table into which you are inserting the data.
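As a usage note, the same pair of statements with the delimiters written out explicitly might look like the following (the path /tmp/data1.tsv is a placeholder; the server's secure_file_priv setting must allow writing there, and TABLE_B is assumed to have the two columns field1 and field2):

-- Export: writes a tab-delimited file on the database server host.
SELECT DISTINCT t1.field2, t2.field2
FROM TABLE_A AS t1
INNER JOIN TABLE_A AS t2 ON t2.field1 = t1.field1
WHERE t1.field2 <> t2.field2
GROUP BY t1.field2, t2.field2
ORDER BY COUNT(t1.field2) DESC
INTO OUTFILE '/tmp/data1.tsv'
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';

-- Import with matching delimiters, mapping the file's two columns
-- onto TABLE_B's columns.
LOAD DATA INFILE '/tmp/data1.tsv'
INTO TABLE TABLE_B
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
(field1, field2);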
