Does Plone 3.3.5 need sticky sessions when load balancing?
We encountered an issue which we suspect is related to load balancing. We have 4 ZEO front-end clients behind Apache. Sometimes (according to the logs) creating a new content item logs an error:
2011-04-13T15:39:57 ERROR Zope.SiteErrorLog 1302701997.20.258830910503 https://x/intranet
/portal_factory/MyType/xxx.2011-04-13.9797548037/xxx_edit
ValueError: Unable to find
What we suspect is happening is that portal_factory stores temporarily created items in the ZEO client's session storage (how can we confirm this?), and this storage is not shared between ZEO clients. When the user hits Save, a validation error occurs and the browser is redirected back to the edit screen. This edit-screen request then lands on another ZEO client, which does not have the temporary "item in creation" in its session storage.
However, we have run many load-balanced Plone sites before without reports of this issue, so I suspect the cause could be something else, or that some factor on this site triggers the behavior.
Here is some related, but unfortunately very vague, information:
http://plone.org/documentation/kb/sticky-sessions-and-mod_proxy_balancer
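If sticky sessions do turn out to be necessary, mod_proxy_balancer can pin each browser to one ZEO client via a route cookie, along the lines of the linked KB article. A sketch (the ports, worker routes, and cookie name here are example values, not taken from the site above):

```apache
# Tag each response with a route cookie, then pin subsequent requests to
# the balancer member whose route matches the cookie.
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED

<Proxy balancer://plonecluster>
    BalancerMember http://127.0.0.1:8080 route=1
    BalancerMember http://127.0.0.1:8081 route=2
    BalancerMember http://127.0.0.1:8082 route=3
    BalancerMember http://127.0.0.1:8083 route=4
    ProxySet stickysession=ROUTEID
</Proxy>

ProxyPass / balancer://plonecluster/
```

A real Plone deployment would additionally rewrite requests through VirtualHostMonster; this fragment only shows the stickiness part.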
In Plone 3 there is still some code left in the object creation logic which does indeed use sessions. It is there to support a wizard-like interface, where object creation is spread across multiple actual requests. This support, and the code for it, is gone in Plone 4.
This code in Plone 3 relies on accessing request.SESSION. The tricky bit is that the code only uses the session if some other code has already created it. No code in Plone (even Plone 3) should create the session in the first place, so usually it won't be there and won't be used. But if any code in the site does create the session, the object creation logic will use it as well. This should explain why you don't see the problem on most sites.
All of this is especially tricky, since simply accessing request.SESSION will create a session. The content_edit_impl.py script in Products.Archetypes therefore uses a different API to get to the session.
The create=0 tells the API to avoid implicitly creating a session if none exists yet.
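These semantics can be illustrated with a small self-contained mock (MockSessionDataManager is hypothetical; the real object in Zope is reached via context.session_data_manager, whose getSessionData accepts the same create flag):

```python
class MockSessionDataManager:
    """Minimal stand-in for Zope's session_data_manager, for illustration only."""

    def __init__(self):
        self._session = None

    def getSessionData(self, create=1):
        # create=1 (the default) lazily creates a session on first access;
        # create=0 returns None when no session exists yet.
        if self._session is None and create:
            self._session = {}
        return self._session


sdm = MockSessionDataManager()
assert sdm.getSessionData(create=0) is None   # no implicit creation
sdm.getSessionData()                          # default access creates the session
assert sdm.getSessionData(create=0) == {}     # now it exists and is returned
```

This is why code that merely wants to *check* for an existing session must pass create=0, while a bare request.SESSION lookup is enough to bring a session into existence.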
You can either try to find the code that creates the session, customize the Archetypes code to remove the session part, or move the session store into ZEO and share it across all Zope instances. While the last option isn't recommended for high-traffic sites, it should work fine for simple scenarios (some hints at https://weblion.psu.edu/trac/weblion/wiki/TemporaryStorageInZeo).
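Sharing the session store via ZEO roughly means serving the temporary storage from the ZEO server and mounting it in every client. A sketch following the linked recipe (the storage name, port, and mount point are example values, and details vary by Zope version):

```
# zeo.conf on the ZEO server: add an in-memory temporary storage
# (older setups may need a "%import tempstorage" line first)
<temporarystorage temp>
  name temporary storage for sessioning
</temporarystorage>

# zope.conf on each ZEO client: mount that storage at /temp_folder
<zodb_db temporary>
  mount-point /temp_folder
  container-class Products.TemporaryFolder.TemporaryContainer
  <zeoclient>
    server localhost:8100
    storage temp
    name zeostorage
  </zeoclient>
</zodb_db>
```

With this in place all clients read and write the same session data, at the cost of a ZEO round-trip on every session access.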
Your diagnosis is incorrect; the portal_factory tool is stateless and thus does not require any session affinity.
Your error message is also very vague and looks incomplete. Have you checked the instance log for complete tracebacks?