Converting a list of lists to a tuple in Python
I have a list of lists (generated with a simple list comprehension):
>>> base_lists = [[a, b] for a in range(1, 3) for b in range(1, 6)]
>>> base_lists
[[1,1],[1,2],[1,3],[1,4],[1,5],[2,1],[2,2],[2,3],[2,4],[2,5]]
I want to turn this entire list into a tuple containing all of the values in the lists, i.e.:
resulting_tuple = (1,1,1,2,1,3,1,4,1,5,2,1,2,2,2,3,2,4,2,5)
What would the most effective way to do this be? (A way to generate this same tuple with a list comprehension would also be an acceptable answer.) I've looked at answers here and in the Python documentation; however, I have been unable to find a suitable one.
EDIT:
Many thanks to all who answered!
Edit: note that, with base_lists so short, the genexp (with unlimited memory available) is slow. Consider the following file tu.py:
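The original listing did not survive here; the following is a plausible sketch of tu.py, inferred from the names the timings below refer to (withsum and withit appear in the text; withgx and withlc are assumed names for the genexp and listcomp variants):

import itertools

base_lists = [[a, b] for a in range(1, 3) for b in range(1, 6)]

def withsum():
    # repeatedly concatenates intermediate lists: O(N squared) overall
    return tuple(sum(base_lists, []))

def withit():
    # flatten one level with itertools.chain, then build the tuple
    return tuple(itertools.chain(*base_lists))

def withgx():
    # generator expression fed directly to tuple()
    return tuple(item for l in base_lists for item in l)

def withlc():
    # build a full list first, then convert it to a tuple
    return tuple([item for l in base_lists for item in l])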
Now, time each of the four functions:
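The measured numbers from the original answer aren't reproduced here; a minimal driver along these lines (assuming tu.py is importable from the current directory) regenerates them, with absolute timings varying by machine and Python version:

import timeit

# time each variant over the same input defined in tu.py
for name in ("withsum", "withit", "withgx", "withlc"):
    t = timeit.timeit("tu.%s()" % name, setup="import tu", number=10000)
    print(name, t)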
When lists are longer (i.e., when performance really matters) things are a bit different. E.g., putting a 100 * on the RHS defining base_lists:
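In tu.py, that definition would become:

# 100x the data: the same ten pairs repeated a hundred times
# (the inner lists are shared references, which is fine for read-only timing)
base_lists = 100 * [[a, b] for a in range(1, 3) for b in range(1, 6)]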
So for long lists, only withsum is a performance disaster -- the others are in the same ballpark, although clearly itertools has the edge, and list comprehensions (when abundant memory is available, as it always will be in microbenchmarks;-) are faster than genexps.

Using 1000 *, genexp slows down by about 10 times (wrt the 100 *), withit and listcomp by about 12 times, and withsum by about 180 times (withsum is O(N squared), plus it's starting to suffer from serious heap fragmentation at that size).

The genexp one-liner itself:
resulting_tuple = tuple(item for l in base_lists for item in l)
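And since itertools has the edge on longer inputs, an equivalent chain-based version (a sketch, not part of the original answer) would be:

import itertools

base_lists = [[a, b] for a in range(1, 3) for b in range(1, 6)]

# flatten one level, then materialize the result as a tuple
resulting_tuple = tuple(itertools.chain.from_iterable(base_lists))

chain.from_iterable takes the outer iterable directly, avoiding the argument-unpacking that chain(*base_lists) performs.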