How to iterate over a list in chunks



I have a Python script which takes as input a list of integers, which I need to work with four integers at a time. Unfortunately, I don't have control of the input, or I'd have it passed in as a list of four-element tuples. Currently, I'm iterating over it this way:

for i in range(0, len(ints), 4):
    # dummy op for example code
    foo += ints[i] * ints[i + 1] + ints[i + 2] * ints[i + 3]

It looks a lot like "C-think", though, which makes me suspect there's a more pythonic way of dealing with this situation. The list is discarded after iterating, so it needn't be preserved. Perhaps something like this would be better?

while ints:
    foo += ints[0] * ints[1] + ints[2] * ints[3]
    ints[0:4] = []

Still doesn't quite "feel" right, though. :-/

Update: With the release of Python 3.12, I've changed the accepted answer. For anyone who has not yet made (or cannot make) the jump to Python 3.12, I encourage you to check out the previously accepted answer or any of the other excellent, backwards-compatible answers below.

Related question: How do you split a list into evenly sized chunks in Python?


剧终人散尽 2024-07-18 05:59:19

Since Python 3.8 you can use the walrus := operator and itertools.islice.

from itertools import islice

list_ = [i for i in range(10, 100)]

def chunker(it, size):
    iterator = iter(it)
    while chunk := list(islice(iterator, size)):
        print(chunk)
In [2]: chunker(list_, 10)                                                         
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
[20, 21, 22, 23, 24, 25, 26, 27, 28, 29]
[30, 31, 32, 33, 34, 35, 36, 37, 38, 39]
[40, 41, 42, 43, 44, 45, 46, 47, 48, 49]
[50, 51, 52, 53, 54, 55, 56, 57, 58, 59]
[60, 61, 62, 63, 64, 65, 66, 67, 68, 69]
[70, 71, 72, 73, 74, 75, 76, 77, 78, 79]
[80, 81, 82, 83, 84, 85, 86, 87, 88, 89]
[90, 91, 92, 93, 94, 95, 96, 97, 98, 99]
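
The chunker above prints each chunk; a minimal variant of the same idea that yields the chunks instead, so the caller decides what to do with them:

from itertools import islice

def chunker(it, size):
    iterator = iter(it)
    # the walrus assignment ends the loop on the first empty slice
    while chunk := list(islice(iterator, size)):
        yield chunk

for chunk in chunker(range(10, 100), 10):
    print(chunk)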

腹黑女流氓 2024-07-18 05:59:19
import itertools
def chunks(iterable,size):
    it = iter(iterable)
    chunk = tuple(itertools.islice(it,size))
    while chunk:
        yield chunk
        chunk = tuple(itertools.islice(it,size))

# though this will throw ValueError if the length of ints
# isn't a multiple of four:
for x1,x2,x3,x4 in chunks(ints,4):
    foo += x1 + x2 + x3 + x4

for chunk in chunks(ints,4):
    foo += sum(chunk)

Another way:

import itertools
def chunks2(iterable,size,filler=None):
    it = itertools.chain(iterable,itertools.repeat(filler,size-1))
    chunk = tuple(itertools.islice(it,size))
    while len(chunk) == size:
        yield chunk
        chunk = tuple(itertools.islice(it,size))

# x2, x3 and x4 could get the value 0 if the length is not
# a multiple of 4.
for x1,x2,x3,x4 in chunks2(ints,4,0):
    foo += x1 + x2 + x3 + x4
攒眉千度 2024-07-18 05:59:19

If you don't mind using an external package you could use iteration_utilities.grouper from iteration_utilities¹. It supports all iterables (not just sequences):

from iteration_utilities import grouper
seq = list(range(20))
for group in grouper(seq, 4):
    print(group)

which prints:

(0, 1, 2, 3)
(4, 5, 6, 7)
(8, 9, 10, 11)
(12, 13, 14, 15)
(16, 17, 18, 19)

In case the length isn't a multiple of the group size, it also supports filling the incomplete last group or truncating it (discarding the incomplete last group):

from iteration_utilities import grouper
seq = list(range(17))
for group in grouper(seq, 4):
    print(group)
# (0, 1, 2, 3)
# (4, 5, 6, 7)
# (8, 9, 10, 11)
# (12, 13, 14, 15)
# (16,)

for group in grouper(seq, 4, fillvalue=None):
    print(group)
# (0, 1, 2, 3)
# (4, 5, 6, 7)
# (8, 9, 10, 11)
# (12, 13, 14, 15)
# (16, None, None, None)

for group in grouper(seq, 4, truncate=True):
    print(group)
# (0, 1, 2, 3)
# (4, 5, 6, 7)
# (8, 9, 10, 11)
# (12, 13, 14, 15)

Benchmarks

I also decided to compare the run-time of a few of the mentioned approaches. It's a log-log plot of grouping into groups of 10 elements, for lists of varying size. For the qualitative result: lower means faster:

[Benchmark plot: log-log runtime of each approach as list size grows; lower is faster.]

At least in this benchmark, iteration_utilities.grouper performs best, followed by Craz's approach.

The benchmark was created with simple_benchmark¹. The code used to run this benchmark was:

import iteration_utilities
import itertools
from itertools import zip_longest

def consume_all(it):
    return iteration_utilities.consume(it, None)

import simple_benchmark
b = simple_benchmark.BenchmarkBuilder()

@b.add_function()
def grouper(l, n):
    return consume_all(iteration_utilities.grouper(l, n))

def Craz_inner(iterable, n, fillvalue=None):
    args = [iter(iterable)] * n
    return zip_longest(*args, fillvalue=fillvalue)

@b.add_function()
def Craz(iterable, n, fillvalue=None):
    return consume_all(Craz_inner(iterable, n, fillvalue))

def nosklo_inner(seq, size):
    return (seq[pos:pos + size] for pos in range(0, len(seq), size))

@b.add_function()
def nosklo(seq, size):
    return consume_all(nosklo_inner(seq, size))

def SLott_inner(ints, chunk_size):
    for i in range(0, len(ints), chunk_size):
        yield ints[i:i+chunk_size]

@b.add_function()
def SLott(ints, chunk_size):
    return consume_all(SLott_inner(ints, chunk_size))

def MarkusJarderot1_inner(iterable,size):
    it = iter(iterable)
    chunk = tuple(itertools.islice(it,size))
    while chunk:
        yield chunk
        chunk = tuple(itertools.islice(it,size))

@b.add_function()
def MarkusJarderot1(iterable,size):
    return consume_all(MarkusJarderot1_inner(iterable,size))

def MarkusJarderot2_inner(iterable,size,filler=None):
    it = itertools.chain(iterable,itertools.repeat(filler,size-1))
    chunk = tuple(itertools.islice(it,size))
    while len(chunk) == size:
        yield chunk
        chunk = tuple(itertools.islice(it,size))

@b.add_function()
def MarkusJarderot2(iterable,size):
    return consume_all(MarkusJarderot2_inner(iterable,size))

@b.add_arguments()
def argument_provider():
    for exp in range(2, 20):
        size = 2**exp
        yield size, simple_benchmark.MultiArgument([[0] * size, 10])

r = b.run()

¹ Disclaimer: I'm the author of the libraries iteration_utilities and simple_benchmark.

我不会写诗 2024-07-18 05:59:19

As of Python 3.12, the itertools module gains a batched function that specifically covers iterating over batches of an input iterable, where the final batch may be incomplete (each batch is a tuple). Per the example code given in the docs:

>>> for batch in batched('ABCDEFG', 3):
...     print(batch)
...
('A', 'B', 'C')
('D', 'E', 'F')
('G',)

Performance notes:

The implementation of batched, like all itertools functions to date, is at the C layer, so it's capable of optimizations that Python-level code cannot match, e.g.

  • On each pull of a new batch, it proactively allocates a tuple of precisely the correct size (for all but the last batch), instead of building the tuple up element by element with amortized growth causing multiple reallocations (the way a solution calling tuple on an islice does)
  • It only needs to look up the .__next__ function of the underlying iterator once per batch, not n times per batch (the way a zip_longest((iter(iterable),) * n)-based approach does)
  • The check for the end case is a simple C level NULL check (trivial, and required to handle possible exceptions anyway)
  • Handling the end case is a C goto followed by a direct realloc (no making a copy into a smaller tuple) down to the already known final size, since it's tracking how many elements it has successfully pulled (no complex "create sentinel for use as fillvalue and do Python level if/else checks for each batch to see if it's empty, with the final batch requiring a search for where the fillvalue appeared last, to create the cut-down tuple" required by zip_longest-based solutions).

Between all these advantages, it should massively outperform any Python-level solution (even highly optimized ones that push most or all of the per-item work to the C layer), regardless of whether the input iterable is long or short, and regardless of the batch size and the size of the final (possibly incomplete) batch. zip_longest-based solutions using a guaranteed-unique fillvalue for safety are the best possible solution for almost all cases when itertools.batched is not available, but they can suffer in pathological cases of "few large batches, with the final batch mostly, but not completely, filled", especially pre-3.10, when bisect can't be used to optimize slicing off the fillvalues from O(n) linear search down to O(log n) binary search. batched avoids that search entirely, so it won't experience pathological cases at all.
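
For anyone who can't use 3.12 yet, the itertools documentation gives a roughly equivalent pure-Python recipe (it needs 3.8+ for the walrus operator and won't match the C version's speed, but the semantics are the same):

from itertools import islice

def batched(iterable, n):
    # batched('ABCDEFG', 3) --> ABC DEF G
    if n < 1:
        raise ValueError('n must be at least one')
    it = iter(iterable)
    while batch := tuple(islice(it, n)):
        yield batch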

萌能量女王 2024-07-18 05:59:19

I needed a solution that would also work with sets and generators. I couldn't come up with anything very short and pretty, but it's quite readable at least.

def chunker(seq, size):
    res = []
    for el in seq:
        res.append(el)
        if len(res) == size:
            yield res
            res = []
    if res:
        yield res

List:

>>> list(chunker([i for i in range(10)], 3))
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]

Set:

>>> list(chunker(set([i for i in range(10)]), 3))
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]

Generator:

>>> list(chunker((i for i in range(10)), 3))
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
别把无礼当个性 2024-07-18 05:59:19

The more-itertools package has a chunked method which does exactly that:

import more_itertools
for s in more_itertools.chunked(range(9), 4):
    print(s)

Prints

[0, 1, 2, 3]
[4, 5, 6, 7]
[8]

chunked returns the items in a list. If you'd prefer iterables, use ichunked.
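
A short sketch of the lazy variant; ichunked yields sub-iterables instead of lists, so each chunk can be consumed without materializing it:

import more_itertools

for s in more_itertools.ichunked(range(9), 4):
    print(list(s))
# [0, 1, 2, 3]
# [4, 5, 6, 7]
# [8]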

装迷糊 2024-07-18 05:59:19

The ideal solution for this problem works with iterators (not just sequences). It should also be fast.

This is the solution provided by the documentation for itertools:

def grouper(n, iterable, fillvalue=None):
    #"grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
    args = [iter(iterable)] * n
    return itertools.izip_longest(fillvalue=fillvalue, *args)  # itertools.zip_longest on Python 3

Using IPython's %timeit on my MacBook Air, I get 47.5 us per loop.

However, this really doesn't work for me since the results are padded to be even sized groups. A solution without the padding is slightly more complicated. The most naive solution might be:

def grouper(size, iterable):
    i = iter(iterable)
    while True:
        out = []
        try:
            for _ in range(size):
                out.append(next(i))  # i.next() on Python 2
        except StopIteration:
            yield out
            break

        yield out

Simple, but pretty slow: 693 us per loop

The best solution I could come up with uses islice for the inner loop:

def grouper(size, iterable):
    it = iter(iterable)
    while True:
        group = tuple(itertools.islice(it, None, size))
        if not group:
            break
        yield group

With the same dataset, I get 305 us per loop.

Unable to get a pure solution any faster than that, I provide the following solution with an important caveat: if your input data contains instances of the fillvalue, you could get a wrong answer.

def grouper(n, iterable, fillvalue=None):
    #"grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
    args = [iter(iterable)] * n
    # itertools.zip_longest on Python 3
    for x in itertools.izip_longest(*args, fillvalue=fillvalue):
        if x[-1] is fillvalue:
            yield tuple(v for v in x if v is not fillvalue)
        else:
            yield x

I really don't like this answer, but it is significantly faster. 124 us per loop

古镇旧梦 2024-07-18 05:59:19
from itertools import izip_longest  # Python 2; on Python 3: from itertools import zip_longest

def chunker(iterable, chunksize, filler):
    return izip_longest(*[iter(iterable)]*chunksize, fillvalue=filler)
天冷不及心凉 2024-07-18 05:59:19

Since nobody's mentioned it yet here's a zip() solution:

>>> def chunker(iterable, chunksize):
...     return zip(*[iter(iterable)]*chunksize)

It works only if your sequence's length is always divisible by the chunk size or you don't care about a trailing chunk if it isn't.

Example:

>>> s = '1234567890'
>>> chunker(s, 3)
[('1', '2', '3'), ('4', '5', '6'), ('7', '8', '9')]
>>> chunker(s, 4)
[('1', '2', '3', '4'), ('5', '6', '7', '8')]
>>> chunker(s, 5)
[('1', '2', '3', '4', '5'), ('6', '7', '8', '9', '0')]

Or using itertools.izip (Python 2; Python 3's zip already does this) to return an iterator instead of a list:

>>> from itertools import izip
>>> def chunker(iterable, chunksize):
...     return izip(*[iter(iterable)]*chunksize)

Padding can be fixed using @ΤΖΩΤΖΙΟΥ's answer:

>>> from itertools import chain, izip, repeat
>>> def chunker(iterable, chunksize, fillvalue=None):
...     it   = chain(iterable, repeat(fillvalue, chunksize-1))
...     args = [it] * chunksize
...     return izip(*args)
手心的海 2024-07-18 05:59:19

Similar to other proposals, but not exactly identical, I like doing it this way, because it's simple and easy to read:

it = iter([1, 2, 3, 4, 5, 6, 7, 8, 9])
for chunk in zip(it, it, it, it):
    print chunk

>>> (1, 2, 3, 4)
>>> (5, 6, 7, 8)

This way you won't get the last partial chunk. If you want to get (9, None, None, None) as last chunk, just use izip_longest from itertools.
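
On Python 3, where izip_longest is named zip_longest, a sketch of that padded variant:

from itertools import zip_longest

it = iter([1, 2, 3, 4, 5, 6, 7, 8, 9])
for chunk in zip_longest(it, it, it, it):
    print(chunk)
# (1, 2, 3, 4)
# (5, 6, 7, 8)
# (9, None, None, None)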

晌融 2024-07-18 05:59:19

Another approach would be to use the two-argument form of iter:

from itertools import islice

def group(it, size):
    it = iter(it)
    return iter(lambda: tuple(islice(it, size)), ())

This can be adapted easily to use padding (this is similar to Markus Jarderot’s answer):

from itertools import islice, chain, repeat

def group_pad(it, size, pad=None):
    it = chain(iter(it), repeat(pad))
    return iter(lambda: tuple(islice(it, size)), (pad,) * size)

These can even be combined for optional padding:

_no_pad = object()
def group(it, size, pad=_no_pad):
    if pad is _no_pad:  # identity test; == could misfire on pads with odd __eq__
        it = iter(it)
        sentinel = ()
    else:
        it = chain(iter(it), repeat(pad))
        sentinel = (pad,) * size
    return iter(lambda: tuple(islice(it, size)), sentinel)
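
For illustration, usage of the combined version above (outputs derived from the definitions):

>>> list(group(range(10), 4))
[(0, 1, 2, 3), (4, 5, 6, 7), (8, 9)]
>>> list(group(range(10), 4, pad=None))
[(0, 1, 2, 3), (4, 5, 6, 7), (8, 9, None, None)]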
流年里的时光 2024-07-18 05:59:19

If the list is large, the highest-performing way to do this will be to use a generator:

def get_chunk(iterable, chunk_size):
    result = []
    for item in iterable:
        result.append(item)
        if len(result) == chunk_size:
            yield tuple(result)
            result = []
    if len(result) > 0:
        yield tuple(result)

for x in get_chunk([1,2,3,4,5,6,7,8,9,10], 3):
    print x

(1, 2, 3)
(4, 5, 6)
(7, 8, 9)
(10,)
流心雨 2024-07-18 05:59:19

Using little functions and things really doesn't appeal to me; I prefer to just use slices:

data = [...]
chunk_size = 10000 # or whatever
chunks = [data[i:i+chunk_size] for i in xrange(0,len(data),chunk_size)]
for chunk in chunks:
    ...
不知所踪 2024-07-18 05:59:19

Using map() instead of zip() fixes the padding issue in J.F. Sebastian's answer (Python 2 only; Python 3's map() no longer accepts None as the function):

>>> def chunker(iterable, chunksize):
...   return map(None,*[iter(iterable)]*chunksize)

Example:

>>> s = '1234567890'
>>> chunker(s, 3)
[('1', '2', '3'), ('4', '5', '6'), ('7', '8', '9'), ('0', None, None)]
>>> chunker(s, 4)
[('1', '2', '3', '4'), ('5', '6', '7', '8'), ('9', '0', None, None)]
>>> chunker(s, 5)
[('1', '2', '3', '4', '5'), ('6', '7', '8', '9', '0')]
无人接听 2024-07-18 05:59:19

One-liner, ad-hoc solution to iterate over a list x in chunks of size 4; as with any plain zip approach, trailing elements that don't fill a complete chunk are silently dropped -

for a, b, c, d in zip(x[0::4], x[1::4], x[2::4], x[3::4]):
    ... do something with a, b, c and d ...
九命猫 2024-07-18 05:59:19

To avoid all the conversions to a list, import itertools and (Python 2 shown; on Python 3 use range and x // 10):

>>> for k, g in itertools.groupby(xrange(35), lambda x: x/10):
...     list(g)

Produces:

... 
0 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
1 [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
2 [20, 21, 22, 23, 24, 25, 26, 27, 28, 29]
3 [30, 31, 32, 33, 34]
>>> 

I checked groupby and it doesn't convert to list or use len so I (think) this will delay resolution of each value until it is actually used. Sadly none of the available answers (at this time) seemed to offer this variation.

Obviously if you need to handle each item in turn nest a for loop over g:

for k,g in itertools.groupby(xrange(35), lambda x: x/10):
    for i in g:
       # do what you need to do with individual items
    # now do what you need to do with the whole group

My specific interest in this was the need to consume a generator to submit changes in batches of up to 1000 to the gmail API:

    messages = a_generator_which_would_not_be_smart_as_a_list
    for idx, batch in groupby(messages, lambda x: x/1000):
        batch_request = BatchHttpRequest()
        for message in batch:
            batch_request.add(self.service.users().messages().modify(userId='me', id=message['id'], body=msg_labels))
        http = httplib2.Http()
        self.credentials.authorize(http)
        batch_request.execute(http=http)
骄兵必败 2024-07-18 05:59:19

Unless I missed something, the following simple solution with generator expressions has not been mentioned. It assumes that both the size and the number of chunks are known (which is often the case), and that no padding is required:

def chunks(it, n, m):
    """Make an iterator over m first chunks of size n.
    """
    it = iter(it)
    # Chunks are presented as tuples.
    return (tuple(next(it) for _ in range(n)) for _ in range(m))
眼眸印温柔 2024-07-18 05:59:19

In your second method, I would advance to the next group of 4 by doing this:

ints = ints[4:]

However, I haven't done any performance measurement so I don't know which one might be more efficient.

Having said that, I would usually choose the first method. It's not pretty, but that's often a consequence of interfacing with the outside world.
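
For reference, a sketch of that suggestion applied to the question's second snippet; note that, like the slice-deletion it replaces, rebinding ints to a copy of the tail does O(n) work per step, so both variants are quadratic on large lists:

while ints:
    foo += ints[0] * ints[1] + ints[2] * ints[3]
    ints = ints[4:]  # copies the remaining tail on every pass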

你另情深 2024-07-18 05:59:19

With NumPy it's simple:

import numpy as np

ints = np.array([1, 2, 3, 4, 5, 6, 7, 8])
for int1, int2 in ints.reshape(-1, 2):
    print(int1, int2)

output:

1 2
3 4
5 6
7 8
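
Note that reshape requires the length to be an exact multiple of the chunk size; a sketch that drops the incomplete tail first (the variable names here are illustrative):

import numpy as np

ints = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])
size = 4
usable = len(ints) - len(ints) % size  # largest multiple of size
for a, b, c, d in ints[:usable].reshape(-1, size):
    print(a, b, c, d)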
乄_柒ぐ汐 2024-07-18 05:59:19

I never want my chunks padded, so that requirement is essential. I find that the ability to work on any iterable is also a requirement. Given that, I decided to extend the accepted answer, https://stackoverflow.com/a/434411/1074659.

Performance takes a slight hit in this approach if padding is not wanted due to the need to compare and filter the padded values. However, for large chunk sizes, this utility is very performant.

#!/usr/bin/env python3
from itertools import zip_longest


_UNDEFINED = object()


def chunker(iterable, chunksize, fillvalue=_UNDEFINED):
    """
    Collect data into chunks and optionally pad it.

    Performance worsens as `chunksize` approaches 1.

    Inspired by:
        https://docs.python.org/3/library/itertools.html#itertools-recipes

    """
    args = [iter(iterable)] * chunksize
    chunks = zip_longest(*args, fillvalue=fillvalue)
    yield from (
        filter(lambda val: val is not _UNDEFINED, chunk)
        if chunk[-1] is _UNDEFINED
        else chunk
        for chunk in chunks
    ) if fillvalue is _UNDEFINED else chunks
盛装女皇 2024-07-18 05:59:19
def chunker(iterable, n):
    """Yield iterable in chunk sizes.

    >>> chunks = chunker('ABCDEF', n=4)
    >>> next(chunks)
    ['A', 'B', 'C', 'D']
    >>> next(chunks)
    ['E', 'F']
    """
    it = iter(iterable)
    while True:
        chunk = []
        for i in range(n):
            try:
                chunk.append(next(it))
            except StopIteration:
                # PEP 479 (Python 3.7+) turns a bare StopIteration raised
                # inside a generator into RuntimeError; return instead
                if chunk:
                    yield chunk
                return
        yield chunk

if __name__ == '__main__':
    import doctest

    doctest.testmod()
北方的韩爷 2024-07-18 05:59:19

Yet another answer, the advantages of which are:

1) Easily understandable
2) Works on any iterable, not just sequences (some of the above answers will choke on filehandles)
3) Does not load the chunk into memory all at once
4) Does not make a chunk-long list of references to the same iterator in memory
5) No padding of fill values at the end of the list

That being said, I haven't timed it, so it might be slower than some of the more clever methods, and some of the advantages may be irrelevant given the use case. (The code below is Python 2; on Python 3, replace xrange with range and iterator.next() with next(iterator).)

def chunkiter(iterable, size):
  def inneriter(first, iterator, size):
    yield first
    for _ in xrange(size - 1): 
      yield iterator.next()
  it = iter(iterable)
  while True:
    yield inneriter(it.next(), it, size)

In [2]: i = chunkiter('abcdefgh', 3)
In [3]: for ii in i:                                                
          for c in ii:
            print c,
          print ''
        ...:     
        a b c 
        d e f 
        g h 

Update:
A couple of drawbacks due to the fact the inner and outer loops are pulling values from the same iterator:
1) continue doesn't work as expected in the outer loop - it just continues on to the next item rather than skipping a chunk. However, this doesn't seem like a problem as there's nothing to test in the outer loop.
2) break doesn't work as expected in the inner loop - control will wind up in the inner loop again with the next item in the iterator. To skip whole chunks, either wrap the inner iterator (ii above) in a tuple, e.g. for c in tuple(ii), or set a flag and exhaust the iterator.

满意归宿 2024-07-18 05:59:19
def group_by(iterable, size):
    """Group an iterable into lists that don't exceed the size given.

    >>> group_by([1,2,3,4,5], 2)
    [[1, 2], [3, 4], [5]]

    """
    sublist = []

    for index, item in enumerate(iterable):
        if index > 0 and index % size == 0:
            yield sublist
            sublist = []

        sublist.append(item)

    if sublist:
        yield sublist
趁年轻赶紧闹 2024-07-18 05:59:19

You can use the partition or chunks functions from the funcy library:

from funcy import partition

for a, b, c, d in partition(4, ints):
    foo += a * b * c * d

These functions also have iterator versions ipartition and ichunks, which will be more efficient in this case.

You can also peek at their implementation.

何以心动 2024-07-18 05:59:19

About the solution given by J.F. Sebastian here:

def chunker(iterable, chunksize):
    return zip(*[iter(iterable)]*chunksize)

It's clever, but it has one disadvantage: it always returns tuples. How do you get strings instead?
Of course you can write ''.join(chunker(...)), but the temporary tuples get constructed anyway.

You can get rid of the temporary tuples by writing your own zip, like this:

class IteratorExhausted(Exception):
    pass

def translate_StopIteration(iterable, to=IteratorExhausted):
    for i in iterable:
        yield i
    raise to # StopIteration would get ignored because this is generator,
             # but custom exception can leave the generator.

def custom_zip(*iterables, reductor=tuple):
    iterators = tuple(map(translate_StopIteration, iterables))
    while True:
        try:
            yield reductor(next(i) for i in iterators)
        except IteratorExhausted: # when any of iterators get exhausted.
            break

Then

def chunker(data, size, reductor=tuple):
    return custom_zip(*[iter(data)]*size, reductor=reductor)

Example usage:

>>> for i in chunker('12345', 2):
...     print(repr(i))
...
('1', '2')
('3', '4')
>>> for i in chunker('12345', 2, ''.join):
...     print(repr(i))
...
'12'
'34'
情仇皆在手 2024-07-18 05:59:19

I like this approach. It feels simple and unmagical, supports all iterable types, and doesn't require imports.

def chunk_iter(iterable, chunk_size):
    it = iter(iterable)
    while True:
        # building the chunk with try/except (not a genexp around next)
        # avoids PEP 479 turning StopIteration into RuntimeError on 3.7+
        chunk = []
        for _ in range(chunk_size):
            try:
                chunk.append(next(it))
            except StopIteration:
                break
        if not chunk:
            break
        yield tuple(chunk)
月寒剑心 2024-07-18 05:59:19

Quite pythonic here (you may also inline the body of the split_groups function)

import itertools
def split_groups(iter_in, group_size):
    return ((x for _, x in item) for _, item in itertools.groupby(enumerate(iter_in), key=lambda x: x[0] // group_size))

for x, y, z, w in split_groups(range(16), 4):
    foo += x * y + z * w
耀眼的星火 2024-07-18 05:59:18
def chunker(seq, size):
    return (seq[pos:pos + size] for pos in range(0, len(seq), size))

Works with any sequence:

text = "I am a very, very helpful text"

for group in chunker(text, 7):
   print(repr(group),)
# 'I am a ' 'very, v' 'ery hel' 'pful te' 'xt'

print('|'.join(chunker(text, 10)))
# I am a ver|y, very he|lpful text

animals = ['cat', 'dog', 'rabbit', 'duck', 'bird', 'cow', 'gnu', 'fish']

for group in chunker(animals, 3):
    print(group)
# ['cat', 'dog', 'rabbit']
# ['duck', 'bird', 'cow']
# ['gnu', 'fish']
老旧海报 2024-07-18 05:59:18

Modified from the Recipes section of Python's itertools docs:

from itertools import zip_longest

def grouper(iterable, n, fillvalue=None):
    args = [iter(iterable)] * n
    return zip_longest(*args, fillvalue=fillvalue)

Example

grouper('ABCDEFGHIJ', 3, 'x')  # --> 'ABC' 'DEF' 'GHI' 'Jxx'

Note: on Python 2 use izip_longest instead of zip_longest.
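
Applied to the question's loop, for example (a fill value of 0 happens to be harmless for this particular sum of products):

for a, b, c, d in grouper(ints, 4, fillvalue=0):
    foo += a * b + c * d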

绳情 2024-07-18 05:59:18
chunk_size = 4
for i in range(0, len(ints), chunk_size):
    chunk = ints[i:i+chunk_size]
    # process chunk of size <= chunk_size