How big can the array passed to SQL be in a column_value IN (array) query?
I am writing some code that looks up, in a table with over 60,000 rows, the unique ids that appear in an array, using

mysql_query("SELECT * FROM users WHERE unique_indexed_user_id IN('".join("', '", $array)."')");

The size of this array is not limited for the end user, so they may end up selecting a very large array. I therefore have to cap it:

if (count($array) > $array_limit)
    array_splice($array, $array_limit);

but I have no idea how to work out the limit. This code is used in a social network for people to invite their friends to something, so the bigger the better. I just don't know how big an array MySQL can handle. What should the value of $array_limit be?
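For reference, a minimal sketch of the capping logic described above, in the same legacy mysql_* style as the question; the value 1000 for $array_limit is purely illustrative (the answers below discuss how to choose it), and an open mysql connection is assumed:

<?php
// Illustrative cap; see the answers below for how to pick the actual value.
$array_limit = 1000;

// Note: count($array), not count($array_limit).
if (count($array) > $array_limit) {
    array_splice($array, $array_limit);   // keep only the first $array_limit IDs
}

// Escape each ID before building the IN (...) list.
$escaped = array_map('mysql_real_escape_string', $array);
$result  = mysql_query(
    "SELECT * FROM users WHERE unique_indexed_user_id IN ('" . join("', '", $escaped) . "')"
);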
2 Answers
How many do you need, really?

I have used something like 10,000 items in a query without problems (while moving data from one database to another, not on a live site), so you can certainly build a horribly big query.

There is no point, however, in fetching that many items when you want to display them to a user: no one scrolls through a list that large to the end, and that much data would most likely slow the user interface down considerably.

You should use some kind of pagination or load-on-request, so that you only fetch something like a hundred items at a time.
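For illustration, a hedged sketch of that batching idea in the question's own mysql_* style; the batch size of 100 is taken from the answer's "something like a hundred" suggestion:

<?php
// Fetch the matching users roughly one hundred IDs at a time.
foreach (array_chunk($array, 100) as $batch) {
    $escaped = array_map('mysql_real_escape_string', $batch);
    $result  = mysql_query(
        "SELECT * FROM users WHERE unique_indexed_user_id IN ('" . join("', '", $escaped) . "')"
    );
    // ... render or process this page of results before fetching the next batch ...
}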
The maximum length of a query passed to MySQL is governed by the max_allowed_packet variable: https://dev.mysql.com/doc/refman/8.0/en/packet-too-large.html

You might also want to take other things into consideration, such as how long it takes to parse and execute a query with that many IDs. I've taken this approach myself, and it started to slow down seriously after a few thousand IDs. You may want to try a JOIN instead, if you can.
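One way to read the JOIN suggestion (the answer doesn't spell out a table layout, so the tmp_invitee_ids name and the integer-ID assumption below are hypothetical) is to stage the IDs in a temporary table and join against it instead of inlining a huge IN (...) list:

<?php
// Stage the IDs in a temporary table, then JOIN instead of a giant IN (...).
mysql_query("CREATE TEMPORARY TABLE tmp_invitee_ids (id INT UNSIGNED PRIMARY KEY)");

// Insert the IDs in modest batches so no single INSERT gets too large.
foreach (array_chunk($array, 500) as $batch) {
    $values = "(" . join("), (", array_map('intval', $batch)) . ")";
    mysql_query("INSERT IGNORE INTO tmp_invitee_ids (id) VALUES " . $values);
}

$result = mysql_query(
    "SELECT u.* FROM users u
     JOIN tmp_invitee_ids t ON t.id = u.unique_indexed_user_id"
);

Whether this actually beats a long IN list depends on the server and data, but it sidesteps the max_allowed_packet concern for very large ID sets.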