R DBI parameterized query: the server supports a maximum of 2100 parameters
I am looking for an R solution to this problem. My list of parameters is over 18,000 long, so I attempted to split it up with a for-loop that runs the query with 2000 parameters per iteration (except the last iteration, which may have fewer than 2000). However, it seems to be "storing" parameters somewhere between iterations, so after the first iteration it tells me I have hit the limit. If I break it up into chunks of 1000, it breaks down after the second iteration. My code looks like:
Start_List <- (some list of values)

query_list <- list()
for (i in 1:ceiling(length(Start_List)/2000)) {
  # Partition Start_List into chunks of length 2000
  List <- Start_List[2000*(i-1)+1:min(2000*i, length(Start_List))]
  # Create qmarks for List
  qmarks_List <- paste(rep("?", length(List)), collapse = ",")
  # Query
  query <- paste("
    SELECT columns
    FROM table
    WHERE column IN (", qmarks_List, ")
  ")
  loop_df <- dbGetQuery(db, query, params = as.list(List))
  # Store the query result in a list
  query_list[[i]] <- loop_df
}
How can I clear the parameters so it starts back at 0 parameters each iteration?
Update (8/24/2022): still looking for a solution to this problem.
Comments (1)
While I cannot reproduce this bug, I can present an alternative: upload your values to a temporary table, query against it (either set-membership or left-join), then delete it. The reason to use bound parameters in the first place is to prevent SQL injection (malicious or accidental corruption), and I believe this suggestion preserves that intent.
While this should fix the problem you're having, it should also simplify and speed up your process: instead of iterating over groups of your 18K-long list, it does a single query, a single data pull, to retrieve all of the records. If you still need them grouped afterwards, that can be done easily in R (perhaps more easily than in SQL, though I'm confident a SQL guru could demonstrate a safe/fast/efficient SQL method for this as well).
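For the regrouping step in R, a single base-R split() call does the job. A minimal sketch, assuming the combined single-query result lands in a data frame query_df (hypothetical data here) and reusing the chunk size of 2000 from the question:

```r
# Hypothetical data standing in for the combined single-query result.
query_df <- data.frame(id = 1:18243, value = rnorm(18243))

# Regroup into chunks of 2000 rows (the last chunk may be shorter),
# mirroring the 2000-parameter batches attempted in the question.
chunk_size <- 2000
groups <- split(query_df, ceiling(seq_len(nrow(query_df)) / chunk_size))

length(groups)       # 10 chunks for 18243 rows
nrow(groups[[1]])    # 2000
nrow(groups[[10]])   # 243
```

Because the grouping happens after a single pull, the chunk boundaries never touch the database and can be changed freely.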
If you aren't aware of temp tables in SQL Server: prepending the name with "#" makes it a per-connection temporary table, meaning that no other connected user (even the same user on a different connection) will see this table; prepending with "##" makes it a "global" temporary table, meaning that the same user on any connection will see the table. Both types are automatically dropped when this connection closes.