Why do safety requirements tend to prohibit the use of AI?

Posted 2024-07-10 21:59:59

Safety requirements do not seem to favor systems that use AI for safety-related functions (particularly where large potential risks of destruction/death are involved). Can anyone suggest why? I always thought that, provided you program your logic properly, the more intelligence you put into an algorithm, the more likely that algorithm is to be capable of preventing a dangerous situation. Are things different in practice?

Answers (10)

无妨 2024-07-17 21:59:59

Most AI algorithms are fuzzy -- typically learning as they go along. For items of critical safety importance, what you want is deterministic behavior. Deterministic algorithms are easier to prove correct, which is essential for many safety-critical applications.
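
As a rough illustration of the difference, here is a minimal Python sketch (all names and numbers are hypothetical) contrasting a fixed interlock rule, whose behaviour can be enumerated up front, with an adaptive one whose threshold drifts as it "learns":

```python
PRESSURE_LIMIT_KPA = 850.0  # fixed, reviewable constant

def deterministic_interlock(pressure_kpa: float) -> bool:
    """Trip the relief path whenever pressure exceeds the fixed limit."""
    return pressure_kpa > PRESSURE_LIMIT_KPA

class AdaptiveInterlock:
    """Hypothetical interlock that adapts its trip threshold from recent readings."""

    def __init__(self, initial_limit_kpa: float = 850.0):
        self.limit_kpa = initial_limit_kpa

    def check(self, pressure_kpa: float) -> bool:
        # The threshold drifts with observed data, so the decision for the same
        # input depends on everything seen before; the behaviour can no longer
        # be characterised by inspecting the code alone.
        self.limit_kpa = 0.99 * self.limit_kpa + 0.01 * 1.2 * pressure_kpa
        return pressure_kpa > self.limit_kpa
```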

有深☉意 2024-07-17 21:59:59

I would think that the reason is twofold.

First, it is possible that the AI will make unpredictable decisions. Granted, they can be beneficial, but when talking about safety concerns, you can't take risks like that, especially if people's lives are on the line.

The second is that the "reasoning" behind the decisions can't always be traced (sometimes a random element is used to generate results with an AI), and when something goes wrong, not having the ability to determine "why" (in a very precise manner) becomes a liability.

In the end, it comes down to accountability and reliability.

柏拉图鍀咏恒 2024-07-17 21:59:59

The more complex a system is, the harder it is to test.
And the more crucial a system is, the more important it becomes to have 100% comprehensive tests.

Therefore, for crucial systems, people prefer to have sub-optimal features that can be tested, and to rely on human interaction for complex decision making.

不羁少年 2024-07-17 21:59:59

From a safety standpoint, one often is concerned with guaranteed predictability/determinism of behavior and rapid response time. While it's possible to do either or both with AI-style programming techniques, as a system's control logic becomes more complex it's harder to provide convincing arguments about how the system will behave (convincing enough to satisfy an auditor).

成熟稳重的好男人 2024-07-17 21:59:59

I would guess that AI systems are generally considered more complex. Complexity is usually a bad thing, especially when it relates to "magic" which is how some people perceive AI systems.

That's not to say that the alternative is necessarily simpler (or better).

When we've done control systems coding, we've had to show trace tables for every single code path and permutation of inputs. This was required to ensure that we didn't put equipment into a dangerous state (for employees or infrastructure), and to "prove" that the programs did what they were supposed to do.
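
A toy sketch of that kind of evidence, assuming a hypothetical two-input interlock (not the actual system described above): every permutation of the inputs is enumerated and the resulting decision recorded, which is only feasible because the rule is small and deterministic.

```python
from itertools import product

def drive_enabled(door_closed: bool, start_requested: bool) -> bool:
    """Hypothetical rule: the drive may only run while the guard door is closed."""
    return door_closed and start_requested

# Crude trace table: enumerate every permutation of the inputs and record
# the output, so each case can be reviewed and signed off.
for door_closed, start_requested in product([False, True], repeat=2):
    print(door_closed, start_requested, "->", drive_enabled(door_closed, start_requested))
```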

That'd be awfully tricky to do if the program were fuzzy and non-deterministic, as @tvanfosson indicated. I think you should accept that answer.

や三分注定 2024-07-17 21:59:59

The key statement is "provided you program your logic properly". Well, how do you "provide" that? Experience shows that most programs are chock full of bugs.

The only way to guarantee that there are no bugs would be formal verification, but that is practically infeasible for all but the most primitively simple systems, and (worse) it is usually done on specifications rather than code, so you still don't know whether the code correctly implements your spec even after you've proven the spec to be flawless.
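
A small sketch of that spec-versus-code gap (a hypothetical example, not tied to any real system): the specification below is trivially "safe", yet the implementation still violates it, and the mismatch is only caught here because the whole input domain is small enough to enumerate.

```python
def spec_saturating_add(a: int, b: int) -> int:
    """Specification: 8-bit saturating addition; the result is clamped at 255."""
    return min(a + b, 255)

def impl_saturating_add(a: int, b: int) -> int:
    """Implementation with a classic wrap-around bug."""
    return (a + b) & 0xFF  # wraps past 255 instead of saturating

# Exhaustive check over the full 8-bit domain -- only possible because the
# domain is tiny; real systems rarely allow this.
mismatches = [(a, b)
              for a in range(256)
              for b in range(256)
              if impl_saturating_add(a, b) != spec_saturating_add(a, b)]
print(len(mismatches), "input pairs where the code violates the spec")
```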

世界等同你 2024-07-17 21:59:59

I think it is because AI is very hard to understand, and that makes it nearly impossible to maintain.

Even if an AI program is considered fuzzy, or it keeps "learning" after it is released, it is usually tested against all known cases (and has already learned from them) before it is even finished. In most cases this "learning" changes some "thresholds" or weights in the program, and after that it is very hard to really understand and maintain that code, even for its creators.

This has been changing over the last 30 years with the creation of languages that are easier for mathematicians to understand, making it easier for them to test and to deliver new pseudo-code around the problem (like the MATLAB AI toolbox).

半步萧音过轻尘 2024-07-17 21:59:59

As there is no accepted definition of AI, the question should be more specific.

My answer is about adaptive algorithms merely employing parameter estimation - a kind of learning - to improve the safety of the output information. Even this is not welcome in functional safety, although it may seem that the behaviour of such an algorithm is not only deterministic (all computer programs are) but also easy to determine.

Be prepared for the assessor to ask you to demonstrate test reports covering all combinations of input data and failure modes. Your algorithm being adaptive means it depends not only on the current input values but on many or all of the earlier ones. You know that full test coverage is impossible within the age of the universe.
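
Some rough arithmetic on why (hypothetical numbers): once the output depends on the last N samples rather than only on the current one, the number of input sequences to cover grows exponentially.

```python
# A memoryless function of 8 boolean inputs has 2**8 = 256 cases to test.
# An adaptive algorithm whose state depends on the last N samples must be
# exercised over input *sequences*, so the count explodes.
cases_per_step = 2 ** 8
for history_length in (1, 4, 16, 64):
    sequences = float(cases_per_step) ** history_length
    print(f"{history_length:3d} samples of history -> {sequences:.2e} input sequences")
```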

One way to score is to show that the previously accepted, simpler algorithms (the state of the art) are not safe. This should be easy if you know your problem space (if not, keep away from AI).

Another possibility may exist for your problem: a compelling monitoring function indicating whether the parameter is estimated accurately.
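
One hedged sketch of what such a monitoring function could look like (the estimator, thresholds and fallback value are all hypothetical): the monitor tracks the recent prediction error and reverts to a conservative fixed value whenever the adaptive estimate stops matching reality.

```python
from collections import deque

class MonitoredEstimator:
    """Wraps an adaptive estimate with a plausibility monitor (illustrative only)."""

    def __init__(self, safe_fallback: float, max_abs_error: float, window: int = 20):
        self.safe_fallback = safe_fallback     # conservative value used when in doubt
        self.max_abs_error = max_abs_error     # largest tolerated recent error
        self.recent_errors = deque(maxlen=window)

    def accept(self, predicted: float, measured: float) -> float:
        """Return the adaptive estimate only while it tracks the measurements."""
        self.recent_errors.append(abs(predicted - measured))
        if max(self.recent_errors) > self.max_abs_error:
            return self.safe_fallback          # estimate is no longer trusted
        return predicted
```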

§对你不离不弃 2024-07-17 21:59:59

There are enough ways that ordinary algorithms, when shoddily designed and tested, can wind up killing people. If you haven't read about it, you should look up the case of Therac 25. This was a system where the behaviour was supposed to be completely deterministic, and things still went horribly, horribly wrong. Imagine if it were trying to reason "intelligently", too.

悲歌长辞 2024-07-17 21:59:59

"Ordinary algorithms" for a complex problem space tend to be arkward. On the other hand, some "intelligent" algorithms have a simple structure. This is especially true for applications of Bayesian inference. You just have to know the likelihood function(s) for your data (plural applies if the data separates into statistically independent subsets).

Likelihood functions can be tested. If the test cannot cover the tails far enough to reach the required confidence level, just add more data, for example from another sensor. The structure of your algorithm will not change.
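
A minimal sketch of that structure (the Gaussian sensor models, grid, and numbers are assumptions for illustration): each statistically independent sensor contributes one likelihood factor, and adding another sensor just multiplies in another factor without changing the shape of the algorithm.

```python
import numpy as np

grid = np.linspace(0.0, 10.0, 1001)        # hypotheses for the quantity of interest
prior = np.ones_like(grid) / grid.size     # flat prior

def likelihood(measurement: float, sigma: float) -> np.ndarray:
    """Assumed Gaussian sensor model evaluated on the hypothesis grid."""
    return np.exp(-0.5 * ((grid - measurement) / sigma) ** 2)

# Two independent sensors: the posterior is the prior times both likelihoods.
posterior = prior * likelihood(4.2, sigma=0.5) * likelihood(4.6, sigma=0.8)
posterior /= posterior.sum()

print("MAP estimate:", grid[np.argmax(posterior)])
```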

A drawback is/was the CPU performance required for Bayesian inference.

Besides, mentioning Therac 25 is not helpful, since no algorithm at all was involved, just multitasking spaghetti code. Citing the authors, "[the] accidents were fairly unique in having software coding errors involved -- most computer-related accidents have not involved coding errors but rather errors in the software requirements such as omissions and mishandled environmental conditions and system states."
