% print even numbers between 1 and 10
main :- count(1).
count(11).                                        % N has passed 10: stop
count(N) :- N =< 10 | even(N), N2 is N + 1, count(N2).
even(N) :- M is N \\ 2, even(M, N).               % M is the remainder of N / 2
even(0, N) :- writeln(N).                         % remainder 0: N is even, print it
even(_, _) :- otherwise | true.                   % remainder 1: do nothing
What isn't clear from this snippet, and what I think is incredibly cool, is that all of the expressions within a clause body (here, called processes) actually run in parallel.
In this snippet:
count(N) :- N =< 10 | even(N), N2 is N + 1, count(N2).
count(N) is defined as a process that takes an argument named N, and if N is less than or equal to 10, the process changes state to become one of the processes on the right side (chosen arbitrarily), and is also forked to become the other two processes.
Where it gets weird is that the resulting three processes have data dependencies between them, so if the third process, count(N2), were executed first, its dependency on N2 wouldn't be satisfied; that process would suspend, and a new process would be chosen for execution.
It's easy to look at this line of code and think that the three expressions will execute in the order they're written, but any interleaving of them is possible, with that sort of non-determinism being built into the language by design.
I'm currently reading the Strand book, and it more or less describes the language as being Prolog, but without unification and backtracking: instead treating dataflow as being the basis for the computational model.
It would be interesting if current computers could take advantage of all expressions running in parallel; back in the last century, that exposed more parallelism (and incurred more coordination costs) than a better, coarser-grained approach.
I guess that, this being a declarative language, it's the compiler's task to determine the cutoff between merely reordering tasks and spawning a thread, right?
Unfortunately not: it's a physical copy and was a very generous gift from a friend yesterday, who knew I had been trying to track down a used copy for over a year.
Not that I've seen yet, but the manual does include this passage:
> Strand provides pattern matching but no general unification, instead
> logic variables must be explicitly assigned, which simplifies the
> semantics considerably.
It can be useful to tackle unification in two steps, even in Prolog. By pattern-matching first (a la functional programming) we quickly get either a failure or a set of distinct variables with constraints, which the second 'proper unification' step can try to solve.
For example, unifying `plus(1, 2)` with `plus(X, X)` can start by pattern-matching, which turns the latter into `plus(X, Y)` with the constraint X = Y (to avoid variables occurring more than once). The match will succeed, producing the constraints X = 1 and Y = 2, along with the X = Y constraint from before. The second step will try to solve these constraints, and fail since they're inconsistent.
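To make that concrete, here's a small sketch of the two-step approach in Python. The term encoding (tuples for compound terms, capitalised strings for variables) and the helper names are illustrative assumptions, not from any particular implementation: step one only pattern-matches and collects constraints, step two solves them and reports inconsistency.

```python
def is_var(t):
    # Convention borrowed from Prolog: capitalised names are variables.
    return isinstance(t, str) and t[0].isupper()

def match(pattern, term, constraints):
    """Step 1: structural pattern match. Variables never cause failure
    here; they just record a constraint for the solver."""
    if is_var(pattern):
        constraints.append((pattern, term))
        return True
    if is_var(term):
        constraints.append((term, pattern))
        return True
    # Compound terms are tuples: (functor, arg1, arg2, ...)
    if isinstance(pattern, tuple) and isinstance(term, tuple):
        if pattern[0] != term[0] or len(pattern) != len(term):
            return False
        return all(match(p, t, constraints)
                   for p, t in zip(pattern[1:], term[1:]))
    return pattern == term

def solve(constraints):
    """Step 2: solve the collected constraints with a substitution map;
    return None on inconsistency."""
    bindings = {}

    def walk(t):
        while is_var(t) and t in bindings:
            t = bindings[t]
        return t

    work = list(constraints)
    while work:
        a, b = work.pop()
        a, b = walk(a), walk(b)
        if a == b:
            continue
        if is_var(a):
            bindings[a] = b
        elif is_var(b):
            bindings[b] = a
        elif (isinstance(a, tuple) and isinstance(b, tuple)
              and a[0] == b[0] and len(a) == len(b)):
            work.extend(zip(a[1:], b[1:]))   # decompose compound terms
        else:
            return None                      # e.g. X = 1 against X = 2
    return bindings

# The example from above: unify plus(1, 2) with plus(X, X).
# Linearising plus(X, X) gives plus(X, Y) plus the constraint X = Y.
constraints = [("X", "Y")]
assert match(("plus", "X", "Y"), ("plus", 1, 2), constraints)
print(solve(constraints))   # -> None: X = 1, Y = 2, X = Y are inconsistent
```

Note this sketch skips the occurs check, so it's closer to the pattern-matching-only discipline Strand describes than to full unification.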
I've not done much logic programming, but I've encountered this two-step approach as a way to implement co-inductive logic programming, which lets us handle infinite datastructures like streams.
Joe Armstrong and Robert Virding actually experimented with compiling Erlang to Strand. I'm not familiar with all of the details, but I believe they saw a factor-of-six speedup over the Prolog implementation [0], yet deemed the project a failure because of the complexity involved in restricting Strand's parallelism and its failure to meet their target of a 70x speedup [1].
I'm actually sharing this in the first place because I managed to acquire a copy of "Strand: New Concepts in Parallel Programming" [2] yesterday, and it includes a case study about the Erlang -> Strand compiler, so I've been having fun trying to piece together the lineage.
Yup! The original Prolog implementation is described in "Use of Prolog for developing a new programming language" [0], and took place in 1986. After performance eventually became an issue, they cross-compiled Erlang to a few other concurrent languages (Strand included), before opting to directly implement Erlang in C.