urllib2 freezes the GUI

Posted 2024-10-01 09:11:40


I'm using PyGTK for a small app I'm developing. Using urllib2 through a proxy freezes my GUI. Is there any way to prevent that?

My code that actually does the work is separate from the GUI, so I was thinking of maybe using subprocess to call the Python file. However, how would that work if I were to convert the app to an exe file?

Thanks


2 comments

鸩远一方 2024-10-08 09:11:40


Calling urllib2 from the main thread blocks the Gtk event loop and consequently freezes the user interface. This is not specific to urllib2; it happens with any long-running function (e.g. subprocess.call).

Either use the asynchronous IO facilities from glib or call urllib2 in a separate thread to avoid this issue.
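A minimal sketch of the worker-thread approach, assuming the actual urllib2 fetch is stubbed out as `slow_fetch` (in the real app it would be `urllib2.urlopen(url).read()`), and the GUI-side polling shown as a plain function that in PyGTK you would register with `gobject.timeout_add()`:

```python
import threading
import queue  # named Queue in Python 2

results = queue.Queue()

def slow_fetch(url):
    # Stand-in for the blocking urllib2.urlopen(url).read() call.
    return "body of " + url

def worker(url):
    # Runs off the main thread, so the Gtk event loop keeps spinning.
    results.put((url, slow_fetch(url)))

t = threading.Thread(target=worker, args=("http://example.com/",))
t.start()

def check_results():
    # In PyGTK you would register this with gobject.timeout_add() so it
    # polls from inside the event loop (Gtk calls must stay on the main
    # thread); returning True keeps the timeout callback alive.
    try:
        url, body = results.get_nowait()
    except queue.Empty:
        return True
    print("%s -> %s" % (url, body))
    return False
```

The key point is that only the worker thread blocks; all widget updates happen back on the main thread inside the event loop.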

失退 2024-10-08 09:11:40


I'd consider using the multiprocessing module, creating a pair of Queue objects: one for the GUI controller or other components to send requests to the urllib2 process, the other for returning the results.

Just a pair of Queue objects would be sufficient for a simple design (just two processes). The urllib2 process simply consumes requests from its request queue and posts responses to the results queue. The process on the other side can operate asynchronously, posting requests and, from anywhere in the event loop (or from a separate thread), pulling responses out and posting them back to a dictionary or dispatching a callback function (probably also maintained in a dictionary).
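A minimal sketch of that two-queue design, with the fetch stubbed out and the worker shown as a thread to keep the example self-contained; `multiprocessing.Process` and `multiprocessing.Queue` expose the same `put`/`get` interface, so the pattern carries over directly to real processes:

```python
import threading
import queue  # swap in multiprocessing.Queue for real processes

requests = queue.Queue()
responses = queue.Queue()

def fetch(url):
    # Stand-in for urllib2.urlopen(url).read().
    return "payload from " + url

def url_worker():
    # The urllib2 worker: consume (id, url) requests and post
    # (id, body) responses until a None sentinel shuts it down.
    while True:
        item = requests.get()
        if item is None:
            break
        request_id, url = item
        responses.put((request_id, fetch(url)))

t = threading.Thread(target=url_worker)
t.start()
requests.put((1, "http://example.com/a"))
requests.put((2, "http://example.com/b"))
requests.put(None)  # tell the worker to exit
```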

(For example, I might have the request model create a callback-handling object, store it in a dictionary using the object's ID as the key, and post a tuple of that ID and the URL to the request queue, then have the response processing pull IDs and response text off the response queue so that the event-handling loop can dispatch the response to the .callback() method of the object that was stored in the dictionary to begin with. The responses could be URL text results, but handling for Exception objects could also be implemented (perhaps dispatched to an .errback() method in our hypothetical callback object's interface). Naturally, if our main GUI is multi-threaded we have to ensure coherent access to this dictionary; however, there should be relatively low contention on it, and all access to this dictionary is non-blocking.)
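That callback-dictionary dispatch might look like the following sketch. The `Handler` class with its `callback`/`errback` methods is the hypothetical interface described above, and the worker side is omitted; exceptions are assumed to come back over the response queue as `Exception` payloads:

```python
import queue

class Handler:
    # Hypothetical callback object from the description above.
    def __init__(self):
        self.result = None
        self.error = None
    def callback(self, text):
        self.result = text
    def errback(self, exc):
        self.error = exc

pending = {}              # request id -> Handler awaiting its response
responses = queue.Queue()

def submit(requests_queue, url):
    # Register a handler under its object ID and post (id, url).
    handler = Handler()
    request_id = id(handler)
    pending[request_id] = handler
    requests_queue.put((request_id, url))
    return handler

def dispatch_responses():
    # Called from the event loop: pull (id, payload) pairs and route
    # them to the stored handler; Exception payloads go to errback().
    while True:
        try:
            request_id, payload = responses.get_nowait()
        except queue.Empty:
            return
        handler = pending.pop(request_id)
        if isinstance(payload, Exception):
            handler.errback(payload)
        else:
            handler.callback(payload)
```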

More complex designs are possible. A pool of urllib2 handling processes could all share one pair of Queue objects (the beauty of these queues is that they handle all the locking and coherency details for us; multiple producers/consumers are supported).
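Scaling that up to a pool is just a matter of starting several workers on the same two queues, one shutdown sentinel per worker. Again sketched with threads and a stubbed fetch for self-containedness; `multiprocessing.Queue` supports multiple consumers in exactly the same way:

```python
import threading
import queue  # multiprocessing.Queue works the same way here

requests = queue.Queue()
responses = queue.Queue()

def fetch(url):
    # Stand-in for the real urllib2 fetch.
    return "got " + url

def worker():
    # All pool members consume from the same request queue; the
    # queue itself serializes access, so no extra locking is needed.
    while True:
        item = requests.get()
        if item is None:   # one sentinel per worker shuts it down
            break
        rid, url = item
        responses.put((rid, fetch(url)))

pool = [threading.Thread(target=worker) for _ in range(4)]
for t in pool:
    t.start()
for rid in range(10):
    requests.put((rid, "http://example.com/%d" % rid))
for _ in pool:
    requests.put(None)
```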

If the GUI needed to be fanned out into multiple processes that could share the same urllib2 process or pool, then it would be time to look for a message bus (spread or AMQP, for example). Shared memory and the multiprocessing locking primitives could also be used, but that would involve quite a bit more effort.
