Can I optimize this SQL remote update?
I have this SQL UPDATE statement which updates a remote table. Is there any way I can optimize/speed up this code? It runs as part of a stored procedure.
DECLARE @WIP TABLE (Item varchar(25), WIP int)

--Get Work In Progress numbers
INSERT INTO @WIP
SELECT (UPC.ItemPrefix + '-' + UPC.ItemCode) AS Item, SUM(PO.Quantity) AS WIP
FROM [NCLGS].[dbo].[AL_ItemUPCs] UPC
INNER JOIN al_PO PO ON PO.UPCID = UPC.UPCID
WHERE PO.status IN ('Assigned', 'New', 'UnAssigned', 'WaitingForFile')
GROUP BY (UPC.ItemPrefix + '-' + UPC.ItemCode)

--SLOW PART, takes over 17 minutes
UPDATE [Server].[Database].[dbo].[Item]
SET QtyOnHand = IH.QtyOnHand,
    QtyWorkInProgress = W.WIP
FROM Avanti_InventoryHeader IH
INNER JOIN [Server].[Database].[dbo].[Item] I ON I.ItemNumber = IH.ItemNumber
LEFT JOIN @WIP W ON IH.ItemNumber = W.Item
WHERE ISNUMERIC(LEFT(IH.ItemNumber, 2)) = 0
5 Answers
I had a similar problem in the past and had to use dynamic SQL to get acceptable performance.
I found out that when I joined a local temp table with the remote table, SQL Server was bringing all of the remote data to the local server and only then filtering on the WHERE clause.
In this case I would try to pass the whole table variable @WIP as a nested table using SELECT ... UNION ALL in a dynamic query.
I am talking about something like this:
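A sketch of that dynamic-SQL approach, reusing the table and column names from the question (the empty-@WIP guard and string sizes are my assumptions, not part of the original answer):

```sql
-- Build @WIP into an inline "SELECT ... UNION ALL" derived table, then run
-- the whole UPDATE as one dynamic statement so the remote server is not
-- asked to stream the local table variable across the link row by row.
DECLARE @sql nvarchar(max), @wipRows nvarchar(max);

SELECT @wipRows = COALESCE(@wipRows + N' UNION ALL ', N'')
    + N'SELECT ''' + REPLACE(Item, '''', '''''') + N''' AS Item, '
    + CAST(WIP AS nvarchar(12)) + N' AS WIP'
FROM @WIP;

-- Guard: if @WIP is empty, emit a derived table with zero rows
IF @wipRows IS NULL
    SET @wipRows = N'SELECT NULL AS Item, NULL AS WIP WHERE 1 = 0';

SET @sql = N'
UPDATE [Server].[Database].[dbo].[Item]
SET QtyOnHand = IH.QtyOnHand,
    QtyWorkInProgress = W.WIP
FROM Avanti_InventoryHeader IH
INNER JOIN [Server].[Database].[dbo].[Item] I ON I.ItemNumber = IH.ItemNumber
LEFT JOIN (' + @wipRows + N') W ON IH.ItemNumber = W.Item
WHERE ISNUMERIC(LEFT(IH.ItemNumber, 2)) = 0';

EXEC sp_executesql @sql;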
It does not look very neat, but it might work for you as it did for me.
I'd try loading the items into a local table variable first before doing the join.
Additionally, you could consider restricting the update further by removing from the table variable the records that aren't actually changing, and joining only the changed records to the linked server.
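A minimal sketch of that restriction, assuming the question's table names and that @WIP from the first query is still in scope:

```sql
-- Pull the relevant remote rows across the link once...
SELECT I.ItemNumber, I.QtyOnHand, I.QtyWorkInProgress
INTO #RemoteItem
FROM [Server].[Database].[dbo].[Item] I;

-- ...then drop the @WIP rows whose value already matches the remote side,
-- so only genuinely changed rows take part in the cross-server UPDATE.
DELETE W
FROM @WIP W
INNER JOIN #RemoteItem R ON R.ItemNumber = W.Item
WHERE R.QtyWorkInProgress = W.WIP;  -- unchanged, nothing to push
```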
Try creating a stored procedure on the remote server that is called by this stored procedure.
In the remote stored procedure, pull the data that you need onto the remote server into temporary tables. Then perform the UPDATE / JOIN on the remote server.
In my experience, trying to do a join across a linked server can be very slow... and it is often faster to get all the required data on one server or the other before doing any joining.
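Something like the following on the remote server, where the procedure name and the [SourceServer] linked server (pointing back at the originating server) are hypothetical:

```sql
-- Remote-side procedure: pull the needed data across the link once into a
-- temp table, then perform the UPDATE / JOIN entirely on this server.
CREATE PROCEDURE dbo.usp_SyncItemQuantities
AS
BEGIN
    SELECT ItemNumber, QtyOnHand
    INTO #IH
    FROM [SourceServer].[SourceDb].[dbo].[Avanti_InventoryHeader];

    UPDATE I
    SET QtyOnHand = IH.QtyOnHand
    FROM dbo.Item I
    INNER JOIN #IH IH ON I.ItemNumber = IH.ItemNumber
    WHERE ISNUMERIC(LEFT(IH.ItemNumber, 2)) = 0;
END
```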
I figured out the real problem with the query: it was updating thousands of records even when those records hadn't changed. So I queried for the changed records, saved them in a table variable, and only updated those. The entire procedure (not just this part) went from 16min 44sec to 1min 26sec.
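The approach described above can be sketched like this, using the question's names (the exact change-detection predicate is my assumption):

```sql
-- Stage only the rows whose values actually differ from the remote side...
DECLARE @Changed TABLE (ItemNumber varchar(25), QtyOnHand int, WIP int);

INSERT INTO @Changed (ItemNumber, QtyOnHand, WIP)
SELECT IH.ItemNumber, IH.QtyOnHand, W.WIP
FROM Avanti_InventoryHeader IH
INNER JOIN [Server].[Database].[dbo].[Item] I ON I.ItemNumber = IH.ItemNumber
LEFT JOIN @WIP W ON IH.ItemNumber = W.Item
WHERE ISNUMERIC(LEFT(IH.ItemNumber, 2)) = 0
  AND (I.QtyOnHand <> IH.QtyOnHand
       OR ISNULL(I.QtyWorkInProgress, -1) <> ISNULL(W.WIP, -1));

-- ...then touch only those rows across the link.
UPDATE I
SET QtyOnHand = C.QtyOnHand,
    QtyWorkInProgress = C.WIP
FROM [Server].[Database].[dbo].[Item] I
INNER JOIN @Changed C ON I.ItemNumber = C.ItemNumber;
```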
Any comments on this method?
There is an ONLY clause in PostgreSQL's UPDATE that updates only the table mentioned. Otherwise it tries to update all the tables you're joining, and that could be where the bottleneck is. What flavor of SQL are you using? If it is Postgres, and possibly some others, try changing the UPDATE line to use the ONLY keyword.
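In PostgreSQL syntax that change would look something like this (the table name here is a stand-in; the question's bracketed four-part names are SQL Server style and would not apply):

```sql
-- Restrict the UPDATE to the named table itself (in Postgres, ONLY
-- excludes descendant tables in an inheritance hierarchy).
UPDATE ONLY item
SET qty_on_hand = ih.qty_on_hand
FROM inventory_header ih
WHERE item.item_number = ih.item_number;
```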