Python: making eval safe
I want an easy way to do a "calculator API" in Python.
Right now I don't care much about the exact set of features the calculator is going to support.
I want it to receive a string, say "1+1", and return a string with the result, in our case "2".
Is there a way to make eval safe for such a thing?
For a start I would do
env = {}
env["locals"] = None
env["globals"] = None
env["__name__"] = None
env["__file__"] = None
env["__builtins__"] = None
eval(users_str, env)
so that the caller cannot mess with my local variables (or see them).
But I am sure I am overlooking a lot here.
Are eval's security issues fixable, or are there just too many tiny details to get it working right?
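Concretely, the wrapper described above would look roughly like this (a sketch of the question's own approach; the arithmetic in "1+1" needs no builtins, so it still evaluates):

users_str = "1+1"                    # stand-in for the caller's input
env = {"__builtins__": None}         # same idea as above: no builtins available
result = str(eval(users_str, env))   # evaluates the arithmetic, then stringifies
print(result)                        # prints 2, i.e. the string "2"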
Comments (4)
Definitely the latter -- a clever hacker will always manage to find a way around your precautions.
If you're satisfied with plain expressions using elementary-type literals only, use ast.literal_eval -- that's what it's for! For anything fancier, I recommend a parsing package, such as ply if you're familiar and comfortable with the classic lex/yacc approach, or pyparsing for a possibly more Pythonic approach.
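For example (a quick sketch, assuming a recent Python 3 where literal_eval no longer folds arbitrary addition), ast.literal_eval accepts plain literals and containers of literals and rejects everything else with ValueError -- which also means a calculator needs a small parser of its own on top of it:

import ast

# Literals and containers of literals are accepted.
print(ast.literal_eval("[1, 2, {'a': 3}]"))        # [1, 2, {'a': 3}]

# Names, calls, attribute access, etc. are rejected.
try:
    ast.literal_eval("__import__('os').system('ls')")
except ValueError as exc:
    print("rejected:", exc)

# On recent Python versions plain arithmetic such as "1+1" is rejected too,
# so literal_eval alone does not cover the calculator use case.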
It is possible to get access to any class that has been defined in the process, and then you can instantiate it and invoke methods on it. It is possible to segfault the CPython interpreter, or make it quit. See this: Eval really is dangerous
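To make that concrete, here is a small illustration of the kind of escape being described: even with __builtins__ stripped as in the question, a bare expression can walk from a literal back to object and from there to every class loaded in the process:

# No builtins at all, yet this expression still evaluates, because it only
# needs attribute access on a tuple literal.
env = {"__builtins__": None}
classes = eval("().__class__.__base__.__subclasses__()", env)
print(len(classes))   # every subclass of object defined anywhere in the process
# From that list an attacker can pick a class whose methods eventually reach
# open(), __import__, subprocess, etc., and work upward from there.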
The security issues are not (even close to) fixable.
I would use pyparsing to parse the expression into a list of tokens (this should not be too difficult, because the grammar is straightforward) and then handle the tokens individually. You could also use the ast module to build a Python AST (since you are using valid Python syntax), but this may be open to subtle security holes.
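A minimal sketch of that ast route (assuming Python 3.8+ for ast.Constant; the calc function and its operator whitelist are illustrative, not a hardened implementation): parse the string in "eval" mode, then walk the tree and reject every node type that is not explicitly allowed:

import ast
import operator

# Binary operators the calculator is willing to evaluate.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def calc(expr: str) -> str:
    """Evaluate a tiny arithmetic expression, e.g. "1+1" -> "2"."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -_eval(node.operand)
        # Names, calls, attributes, subscripts, ... are all rejected here.
        raise ValueError("unsupported expression")
    return str(_eval(ast.parse(expr, mode="eval")))

print(calc("1+1"))       # 2
print(calc("2*(3+4)"))   # 14

Because unknown node types raise immediately, anything beyond plain arithmetic (names, calls, attribute access) never gets evaluated at all.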
Perl has a Safe eval module: http://perldoc.perl.org/Safe.html
Googling "Python equivalent of Perl Safe" finds http://docs.python.org/2/library/rexec.html but this Python "restricted exec" is deprecated.
--
Overall, "eval" security, in any language, is a big issue. SQL injection attacks are just one example of such a security hole. Perl Safe has had security bugs over the years -- the most recent one I remember: it was safe except for destructors on objects returned from the safe eval.
It's the sort of thing that I might use for my own tools, but would not expose to the web.
However, I hope that someday fully secure evals will be available in many/any languages.