How concurrent is Prolog?

Posted 2024-11-27 00:26:49

I can't find any info on this online... I am also new to Prolog...

It seems to me that Prolog could be highly concurrent, perhaps trying many possibilities at once when trying to match a rule. Are modern Prolog compilers/interpreters inherently* concurrent? Which ones? Is concurrency on by default? Do I need to enable it somehow?

* I am not interested in multi-threading, just inherent concurrency.

6 Answers

Are modern Prolog compilers/interpreters inherently* concurrent? Which ones? Is concurrency on by default?

No. Concurrent logic programming was the main aim of the 5th Generation Computer program in Japan in the 1980s; it was expected that Prolog variants would be "easily" parallelized on massively parallel hardware. The effort largely failed, because automatic concurrency just isn't easy. Today, Prolog compilers tend to offer threading libraries instead, where the program must control the amount of concurrency by hand.
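
To give a sense of what that hand-controlled concurrency looks like, here is a minimal sketch using SWI-Prolog's library(thread); concurrent_maplist/3 is SWI-specific rather than ISO, and slow_square/2 is just a made-up stand-in for real work:

:- use_module(library(thread)).

% A stand-in for genuinely expensive work.
slow_square(X, Y) :-
    sleep(0.5),
    Y is X * X.

% Like maplist/3, but the individual calls may be spread over worker threads.
squares(Squares) :-
    numlist(1, 8, Ns),
    concurrent_maplist(slow_square, Ns, Squares).

Nothing here is automatic: the programmer explicitly decided that these goals are independent and safe to run in parallel.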

To see why parallelizing Prolog is as hard as it is in any other language, consider the two main control structures the language offers: conjunction (AND, serial execution) and disjunction (OR, choice with backtracking). Let's say you have an AND construct such as

p(X) :- q(X), r(X).

and you'd want to run q(X) and r(X) in parallel. Then, what happens if q partially unifies X, say by binding it to f(Y)? r must have knowledge of this binding, so either you've got to communicate it, or you have to wait for both conjuncts to complete; then you may have wasted time if one of them fails, unless you, again, have them communicate in order to synchronize. That gives overhead and is hard to get right.
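
To make this concrete, suppose q/1 and r/1 were defined as follows (purely illustrative clauses, not part of the question):

q(f(Y)) :- Y = 1.   % q/1 first binds X to f(Y), then Y to 1
r(f(2)).
r(f(1)).

Executed left to right, q(X) yields X = f(1), and r(f(1)) then succeeds via r's second clause. Started in parallel on an unbound X, r may well commit to X = f(2) first; the two workers now hold incompatible bindings for the same variable and must either communicate as they go or reconcile (and possibly redo work) afterwards. Now for OR: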

p(X) :- q(X).
p(X) :- r(X).

There's a finite number of choices here (Prolog, of course, admits infinitely many choices) so you'd want to run both of them in parallel. But then, what if one succeeds? The other branch of the computation must be suspended and its state saved. How many of these states are you going to save at once? As many as there are processors seems reasonable, but then you have to take care to not have computations create states that don't fit in memory. That means you have to guess how large the state of a computation is, something that Prolog hides from you since it abstracts over such implementation details as processors and memory; it's not C.
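
What implementations actually offer is OR-parallelism under explicit programmer control. A minimal sketch, assuming SWI-Prolog's library(thread) (first_solution/3 is SWI-specific) and reusing the illustrative q/1 and r/1 from above:

:- use_module(library(thread)).

% Try both alternatives in separate threads and commit to whichever
% produces an answer first; the remaining solver is abandoned.
p(X) :- first_solution(X, [q(X), r(X)], []).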

In other words, automatic parallelization is hard. The 5th Gen. Computer project got around some of the issues by designing committed-choice languages, i.e. Prolog dialects without backtracking. In doing so, they drastically changed the language. It must be noted that the concurrent language Erlang is an offshoot of Prolog, and it too traded backtracking for something closer to functional programming. It still requires user guidance to know which parts of a program can safely be run concurrently.

后eg是否自 2024-12-04 00:26:49

In theory that seems attractive, but there are various problems that make such an implementation seem unwise.

  • for better or worse, people are used to thinking of their programs as executing left-to-right and top-down, even when programming in Prolog. Both the order of clauses for a predicate and the order of terms within a clause are semantically meaningful in standard Prolog. Parallelizing them would change the behaviour of far too much existing code for it to become popular.

  • non-relational language elements such as the cut operator can only be used meaningfully when you can rely on such execution orders, i.e. they would become unusable in a parallel interpreter unless very complicated dependency tracking were invented (see the sketch after this list).

  • all existing parallelization solutions incur at least some performance overhead for inter-thread communication.

  • Prolog is typically used for high-level, deeply recursive problems such as graph traversal, theorem proving etc. Parallelization on modern machines can (ideally) achieve a speedup of n for some constant n, but it cannot turn an unviable recursive solution method into a viable one, because that would require an exponential speedup. In contrast, the numerical problems that Fortran and C programmers usually solve typically have a high but quite finite cost of computation; it is well worth the effort of parallelization to turn a 10-hour job into a 1-hour job. By comparison, turning a program that can look about 6 moves ahead into one that can (on average) look 6.5 moves ahead just isn't as compelling.
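
To illustrate the order-sensitivity mentioned in the first two bullets, here is a tiny, purely illustrative predicate (not from the original answer) whose meaning depends on clause order and the cut:

% Correct only under sequential, top-to-bottom clause selection:
sign(X, negative) :- X < 0, !.
sign(0, zero) :- !.
sign(_, positive).

Run sequentially, sign(-3, S) yields only S = negative, because the cut in the first clause prunes the remaining clauses. An evaluator that tried all three clauses in parallel would also report S = positive, silently changing the program's meaning.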

櫻之舞 2024-12-04 00:26:49

There are two notions of concurrency in Prolog. One is tied to multithreading, the other to suspended goals. I am not sure which one you want to know about, so I will first say a little about multithreading:

Widely available Prolog systems today can be divided into those that are multithreaded and those that are not. In a multithreaded Prolog system you can spawn multiple threads that run concurrently over the same knowledge base. This poses some problems for consult and for dynamic predicates, which these Prolog systems solve.
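
As a minimal sketch of what this looks like (SWI-Prolog syntax; thread_create/3 and thread_join/2 are not part of the ISO core, and fact/1 and demo/1 are made-up names), two threads touching the same dynamic predicate:

:- dynamic fact/1.

demo(X) :-
    thread_create(assertz(fact(from_worker)), Id, []),
    thread_join(Id, _Status),   % wait for the worker thread to finish
    fact(X).                    % the main thread sees the worker's assertion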

You can find a list of the Prolog systems that are multithreaded here:

Operating system and Web-related features

Multithreading is a prerequisite for various parallelization paradigms. Correspondingly, the individual Prolog systems provide constructs that serve certain paradigms. Typical paradigms are thread pooling, used for example in web servers, or spawning a thread for a long-running GUI task.

Currently there is no ISO standard for a thread library, although there has been a proposal, and each Prolog system typically has rich libraries that provide thread synchronization, thread communication, thread debugging and foreign-code threads. A certain amount of progress in garbage collection in Prolog systems was necessary to allow threaded applications with potentially indefinitely long-running threads.

Some existing layers even allow high-level parallelization paradigms in a Prolog-system-independent fashion. For example, Logtalk has some constructs that map to various target Prolog systems.

Now let's turn to suspended goals. From older Prolog systems (since Prolog II, 1982, in fact) we know the freeze/2 command and blocking directives. These constructs force a goal not to be expanded by the existing clauses, but to be put on a sleeping list instead. The goal can then be woken up later. Since the goal is not executed immediately but only when it is woken up, suspended goals are sometimes seen as concurrent goals, but the better notion for this form of parallelism would be coroutines.
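
A small sketch of such a suspended goal, using freeze/2 as found in SWI-Prolog, SICStus Prolog and others (not part of the ISO core):

?- freeze(X, (Y is 2 * X, writeln(Y))),  % goal suspends: X is still unbound
   writeln(before),                      % runs first and prints 'before'
   X = 21.                               % binding X wakes the goal, which prints 42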

Suspended goals are useful for implementing constraint solving systems. In the simplest case the sleeping list is some variable attribute. But a newer approach to constraint solving systems is constraint handling rules. In constraint handling rules, the wake-up conditions can be patterns over pairs of suspended goals. The availability of constraint solving, either via suspended goals or via constraint handling rules, can be seen here:

Overview of Prolog Systems
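
For a flavour of constraint handling rules, here is the classic less-or-equal solver in the syntax of SWI-Prolog's library(chr); this fragment is a textbook illustration, not something from the original answer:

:- use_module(library(chr)).
:- chr_constraint leq/2.

reflexivity  @ leq(X, X) <=> true.
antisymmetry @ leq(X, Y), leq(Y, X) <=> X = Y.
transitivity @ leq(X, Y), leq(Y, Z) ==> leq(X, Z).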

Best Regards

草莓酥 2024-12-04 00:26:49

From a quick Google search it appears that the concurrent logic programming paradigm has only been the basis for a few research languages and is no longer actively developed. I have seen claims that concurrent logic is easy to do in the Mozart/Oz system.

辞旧 2024-12-04 00:26:49

There was great hope in the 80s/90s of baking parallelism into the language (thus making it "inherently" parallel), in particular in the context of the Fifth Generation Project. Even special hardware constructs were studied to implement a "Parallel Inference Machine" (PIM) (similar to the special hardware for LISP machines in the "functional programming" camp). Hardware efforts were abandoned due to the continual improvement of off-the-shelf CPUs, and software efforts were abandoned due to excessive compiler complexity, lack of demand for hard-to-implement high-level features, and a likely lack of payoff: parallelism that looks transparent and elegantly exploitable at the language level generally means costly inter-process communication and transactional locking "under the hood".

A good read about this is

"The Deevolution of Concurrent Logic Programming Languages"

by Evan Tick, March 1994. Appeared in "Journal of Logic Programming, Tenth Anniversary Special Issue, 1995". The Postscript file linked to is complete, unlike the PDF you get at Elsevier.

The author says:

There are two main views of concurrent logic programming and its
development over the past several years [i.e. 1990-94]. Most logic programming
literature views concurrent logic programming languages as a
derivative or variant of logic programs, i.e., the main difference
being the extensive use of "don't care" nondeterminism rather than
"don't know" (backtracking) nondeterminism. Hence the name committed
choice
or CC languages. A second view is that concurrent logic
programs are concurrent, reactive programs, not unlike other
"traditional" concurrent languages such as 'C' with explicit message
passing, in the sense that procedures are processes that communicate
over data streams to incrementally produce answers. A cynic might say
that the former view has more academic richness, whereas the latter
view has more practical public relations value.

This article is a survey of implementation techniques of concurrent
logic programming languages, and thus full disclosure of both of these
views is not particularly relevant. Instead, a quick overview of basic
language semantics, and how they relate to fundamental programming
paradigms in a variety of languages within the family, will suffice.
No attempt will be made to cover the many feasible programming
paradigms; nor semantical nuances, nor the family history. (...).

The main point I wish to make in this article is that concurrent logic
programming languages have been deevolving since their inception,
about ten years ago, because of the following tatonnement:

  • Systems designers and compiler writers could supply only certain limited features in robust, efficient implementations. This drove the
    market to accept these restricted languages as, in some informal
    sense, de facto standards.
  • Programmers became aware that certain, more expressive language features were not critically important to getting applications
    written, and did not demand their inclusion.

Thus my stance in this article will be a third view: how the initially
rich languages gradually lost their "teeth," and became weaker, but
more practically implementable, and achieved faster performance.

The deevolutionary history begins with Concurrent Prolog (deep guards,
atomic unification; read-only annotated variables for
synchronization), and after a series of reductions (for example: GHC
(input-matching synchronization), Parlog (safe), FCP (flat), Fleng (no
guards), Janus (restricted communication), Strand (assignment rather
than output unification)), and ends for now with PCN (flat guards,
non-atomic assignments input-matching synchronization, and
explicitly-defined mutable variables). This and other terminology will
be defined as the article proceeds.

This view may displease some
readers because it presupposes that performance is the main driving
force of the language market; and furthermore that the main "added
value" of concurrent logic programs over logic programs is the ability
to naturally exploit parallelism to gain speed. Certainly the reactive
nature of the languages also adds value; e.g., in building complex
object-oriented applications. Thus one can argue that the deevolution
witnessed is a bad thing when reactive capabilities are being traded
for speed.

尤怨 2024-12-04 00:26:49

ECLiPSe-CLP, a language "largely backward-compatible with Prolog", supports OR-parallelism, even though "this functionality is currently not actively maintained because of other priorities".

[1,2] document OR- (and AND-)parallelism in ECLiPSe-CLP.

However, I tried for some time to get it working using the code from the ECLiPSe-CLP repository, but I did not manage to.
