Why is my Python program averaging only 33% CPU per process? How can I make Python use all available CPUs?

Posted 2024-08-10 23:33:57

I use Python 2.5.4. My computer: CPU AMD Phenom X3 720BE, Mainboard 780G, 4GB RAM, Windows 7 32 bit.

I use Python threading but cannot make every python.exe process consume 100% CPU. Why are they using only about 33-34% on average?

I wish to direct all available computer resources toward these large calculations so as to complete them as quickly as possible.

EDIT:
Thanks everybody. Now I'm using Parallel Python and everything works well. My CPU is now always at 100%. Thanks all!

Comments (8)

坏尐絯 2024-08-17 23:33:57

It appears that you have a 3-core CPU. If you want to use more than one CPU core in native Python code, you have to spawn multiple processes. (Two or more Python threads cannot run concurrently on different CPUs)

As R. Pate said, Python's multiprocessing module is one way. However, I would suggest looking at Parallel Python instead. It takes care of distributing tasks and message-passing. You can even run tasks on many separate computers with little change to your code.

Using it is quite simple:

import pp

def parallel_function(arg):
    return arg

# pp.Server() autodetects the number of CPUs and starts one worker per core
job_server = pp.Server()

# Define your jobs
job1 = job_server.submit(parallel_function, ("foo",))
job2 = job_server.submit(parallel_function, ("bar",))

# Calling a job object blocks until its result is ready
# (Python 2 print statements, matching the asker's Python 2.5.4)
print job1()
print job2()
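Parallel Python is a third-party package; since Python 2.6 the standard library's multiprocessing module offers the same local-machine pattern with no extra install. A minimal sketch in modern Python 3 syntax (the function name parallel_function just mirrors the example above):

```python
from multiprocessing import Pool

def parallel_function(arg):
    # Real CPU-bound work would go here; echoing the argument keeps it short
    return arg

if __name__ == "__main__":
    # Pool() starts one worker process per CPU core by default, so all
    # three cores of a Phenom X3 can be busy at the same time
    with Pool() as pool:
        results = pool.map(parallel_function, ["foo", "bar"])
    print(results)  # prints ['foo', 'bar']
```

pool.map blocks until every worker has returned, and the results come back in submission order regardless of which process finished first.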
恍梦境° 2024-08-17 23:33:57

Try the multiprocessing module: Python has real, native threads, but the GIL restricts their concurrent use, since only the thread holding it can execute Python bytecode. Another alternative, and something you should look at if you need real speed, is writing a C extension module and calling functions in it from Python. You can release the GIL in those C functions.

Also see David Beazley's Mindblowing GIL.

2024-08-17 23:33:57

Global Interpreter Lock

The reasons for employing such a lock include:

* increased speed of single-threaded programs (no need to acquire or release locks on all data structures separately)
* easy integration of C libraries that are usually not thread-safe.

Applications written in languages with a GIL have to use separate processes (i.e. interpreters) to achieve full concurrency, as each interpreter has its own GIL.
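The practical consequence of that last paragraph can be demonstrated directly. The sketch below (Python 3, standard library only; the function name burn is made up for the demo) runs the same CPU-bound function once under a thread pool and once under a process pool: the threads all share one interpreter and one GIL, while each worker process gets its own interpreter and GIL, so only the process version can keep several cores busy.

```python
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def burn(n):
    # Pure-Python loop: CPU-bound, and it holds the GIL while it runs
    total = 0
    for i in range(n):
        total += i
    return total

if __name__ == "__main__":
    work = [2_000_000] * 3
    for name, executor_cls in [("threads", ThreadPoolExecutor),
                               ("processes", ProcessPoolExecutor)]:
        start = time.perf_counter()
        with executor_cls(max_workers=3) as ex:
            list(ex.map(burn, work))
        print(name, round(time.perf_counter() - start, 2), "s")
```

On a multi-core machine the process line usually finishes well ahead of the thread line; the exact timings depend on the hardware, so none are asserted here.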

余生一个溪 2024-08-17 23:33:57

From the CPU usage it looks like you're still running on a single core. Try running a trivial calculation with 3 or more threads using the same threading code and see if it utilizes all cores. If it doesn't, something might be wrong with your threading code.
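Such a sanity check could look like the sketch below (Python 3 syntax; the helper busy and its iteration count are arbitrary choices for the demo). Start it and watch per-core usage in Task Manager; keep in mind that on CPython the GIL limits pure-Python threads to roughly one core's worth of CPU even when the threading code itself is correct.

```python
import os
import threading

def busy(iterations):
    # Trivial calculation that keeps one thread fully occupied
    x = 0
    for i in range(iterations):
        x = (x + i) % 997
    return x

# One thread per reported core (falling back to 3, the asker's core count)
threads = [threading.Thread(target=busy, args=(5_000_000,))
           for _ in range(os.cpu_count() or 3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("all threads finished")  # prints once every thread has run
```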

睡美人的小仙女 2024-08-17 23:33:57

Your bottleneck is probably somewhere else, like the hard drive (paging) or memory access.

我恋#小黄人 2024-08-17 23:33:57

You should perform some operating-system and Python monitoring to determine where the bottleneck is.

Here is some info for Windows 7:

Performance Monitor: You can use Windows Performance Monitor to examine how programs you run affect your computer's performance, both in real time and by collecting log data for later analysis. (Control Panel -> All Control Panel Items -> Performance Information and Tools -> Advanced Tools -> View Performance Monitor)

Resource Monitor: Windows Resource Monitor is a system tool that lets you view real-time information about the use of hardware (CPU, memory, disk, and network) and software (file handles and modules) resources. You can use Resource Monitor to start, stop, suspend, and resume processes and services. (Control Panel -> All Control Panel Items -> Performance Information and Tools -> Advanced Tools -> View Resource Monitor)
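The same bottleneck question can also be answered from inside the Python process itself, without any Windows tooling. A stdlib-only sketch (the helper name cpu_utilization is made up here): it compares CPU time to wall-clock time over a stretch of work; a ratio near 1.0 means the process is CPU-bound, while a much lower ratio points at disk, network, or paging waits.

```python
import time

def cpu_utilization(fn, *args):
    # Ratio of CPU time used to wall-clock time elapsed while fn runs
    wall_start = time.perf_counter()
    cpu_start = time.process_time()
    fn(*args)
    wall = time.perf_counter() - wall_start
    cpu = time.process_time() - cpu_start
    return cpu / wall if wall > 0 else 0.0

# A pure-CPU workload should score close to 1.0; an I/O-heavy one far below
print(round(cpu_utilization(sum, range(10_000_000)), 2))
```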

蹲墙角沉默 2024-08-17 23:33:57

I solved the problem that led me to this post by running a second script manually. This post helped me run multiple Python scripts at the same time.

I managed to execute the second script by typing a command in the newly opened terminal window. Not as convenient as Shift + Enter, but it does the job.
