Slight difference in execution time between python2 and python3
Lately I wrote a simple permutation generator in python (an implementation of the "plain changes" algorithm described by Knuth in "The Art... 4").
I was curious about the difference in its execution time between python2 and python3.
Here is my function:
def perms(s):
    s = tuple(s)
    N = len(s)
    if N <= 1:
        yield s[:]
        raise StopIteration()
    for x in perms(s[1:]):
        for i in range(0, N):
            yield x[:i] + (s[0],) + x[i:]
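Note that on Python 3.7 and later this exact code breaks: PEP 479 turns a StopIteration raised inside a generator into a RuntimeError. A version that runs on both old and current interpreters simply returns instead (a sketch; the comparison against itertools.permutations is only a sanity check, not part of the original script):

```python
from itertools import permutations

def perms(s):
    # Plain-changes permutation generator. `return` replaces the original
    # `raise StopIteration()`, which PEP 479 (Python 3.7+) would turn into
    # a RuntimeError when raised inside a generator.
    s = tuple(s)
    N = len(s)
    if N <= 1:
        yield s[:]
        return
    for x in perms(s[1:]):
        for i in range(0, N):
            yield x[:i] + (s[0],) + x[i:]

# Produces the same set of permutations as itertools.permutations,
# just in a different order.
print(sorted(perms("abc")) == sorted(permutations("abc")))
```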
In the python3 version I just changed print x to print(x), since print is a function in py3.
I tested both using the timeit module.
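The testing script itself is not shown; a minimal harness along these lines (the number of repetitions and the range tested here are my assumptions, not the original script) produces the same kind of args/time table:

```python
import timeit

def perms(s):
    s = tuple(s)
    N = len(s)
    if N <= 1:
        yield s[:]
        return  # originally raise StopIteration()
    for x in perms(s[1:]):
        for i in range(0, N):
            yield x[:i] + (s[0],) + x[i:]

print("args time[ms]")
for n in range(1, 7):
    # Average over 10 runs; list() consumes the generator so that
    # every permutation is actually produced.
    t = timeit.timeit(lambda: list(perms(range(n))), number=10)
    print("%d %f" % (n, t * 1000 / 10))
```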
My tests:
$ echo "python2.6:" && ./testing.py && echo "python3:" && ./testing3.py
python2.6:
args time[ms]
1 0.003811
2 0.008268
3 0.015907
4 0.042646
5 0.166755
6 0.908796
7 6.117996
8 48.346996
9 433.928967
10 4379.904032
python3:
args time[ms]
1 0.00246778964996
2 0.00656183719635
3 0.01419159912
4 0.0406293644678
5 0.165960511097
6 0.923101452814
7 6.24257639835
8 53.0099868774
9 454.540967941
10 4585.83498001
As you can see, for fewer than 6 arguments python3 is faster, but then the roles reverse and python2.6 does better.
As I am a novice in python programming, I wonder why that is. Or is my script perhaps better optimized for python2?
Thank you in advance for a kind answer :)
Quoting:
So you're seeing about a 5% slowdown. The quoted text says to expect a 10% slowdown, so I'd accept that as a reasonable slowdown.
However, it has been improving, as can be seen here and here. So give 3.1 or 3.2 a try if you're concerned about the 5% slowdown.
This is actually a very interesting question.
I used the following script which runs on Python 2.6, 2.7, 3.0, 3.1, and 3.2.
The platform is Ubuntu 10.10, 64 bit, and all versions of Python were compiled from source. I get the following results:
After some more experimentation, I tracked the difference in performance to the fragment:
x[:i] + (s[0],) + x[i:]
If I just calculate one tuple at the beginning of the loop and return it for every yield statement, both versions of Python run at the same speed. (And the permutations are wrong, but that's not the point.) If I time that fragment by itself, it is significantly slower.
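Timing the fragment in isolation looks like this (the values of x, s and i are arbitrary stand-ins, and the absolute numbers will differ per machine):

```python
import timeit

setup = "x = tuple(range(9)); s = (99,); i = 4"
stmt = "x[:i] + (s[0],) + x[i:]"

# One million iterations of just the slice-and-concatenate expression.
t = timeit.timeit(stmt, setup=setup, number=1000000)
print("%.1f ns per operation" % (t * 1e9 / 1000000))
```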
I next used dis.dis() to look at the bytecode generated by both versions.
The generated bytecode is very different between the two versions. Unfortunately, I don't know why the bytecode is different so I really haven't answered the question. But there really is a significant difference in performance for slicing and building tuples.
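The comparison is easy to reproduce. On Python 2, a simple slice like x[:i] compiles to dedicated SLICE opcodes, while early Python 3 first builds a slice object (BUILD_SLICE followed by BINARY_SUBSCR), which is the extra work the timings point at. A sketch of how to look:

```python
import dis

def fragment(x, s, i):
    # The expression the answer above isolated as the hot spot.
    return x[:i] + (s[0],) + x[i:]

# Prints the opcodes; run this under both python2 and python3
# and compare the output.
dis.dis(fragment)
```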
I would certainly call those numbers statistically insignificant.
There are too many factors at work for those variations to really hold any meaning.