Revisiting the ReQ language

Draft of 2018.11.03 ☛ 2018.11.05

May include: GP&c.

Some time back—I am shocked to find it was at least three years before this writing—I started building a new interpreter for a rather unusual language that I called ReQ. I was working on it as a way of learning enough about the Clojure language (in which that first ReQ interpreter is written) to help me later build a better interpreter for the Push language. Over the last several years (shit, I mutter to myself) as I’ve been wrestling with my Klapaucius interpreter, talking with Lee Spector’s group about their Clojush interpreter, and developing this new dialect I’ve started calling “KPush”, I’ve been coming back over and over to a handful of key notions.

Maybe these are candidate “design principles” for languages intended for Genetic Programming. Or maybe they’re just code smells in the projects I’ve been working on. But in either case, I keep running across these frequent “clusters” of inconvenience: constraints that don’t feel so much like they’re provoking “creativity” in me, but rather feel more like they’re becoming obstacles around which I need constantly to maneuver.

This has been especially frustrating, personally, because I know Push has far more of the stuff we want in a GP representation scheme. As you’ve perhaps seen me discussing elsewhere, Push is far more capable of capturing complex abstractions and domain-specific ideas than traditional (tree-based) representation schemes, it enjoys a kind of weird redundant flexibility that suggests robust alternative approaches are feasible for many small benchmark problems, and it’s easy to imagine how one might extend its representational reach to address any particular domain-specific nouns and verbs.

But it turns out it’s not nearly so easy in practice to implement these extensions as it is to just imagine them.

Anyway, this particular essay is not intended to be about Push, nor about fixing it. What I’m here to confront is how often the ideas and sketches of features I think will help with Push’s problems look like what I did in the old ReQ project. That was an over-complicated and under-planned sketch, an exercise mainly aimed at learning the concepts and idioms of the first Lisp-derived language I’d learned—not a serious attempt at building a production GP interpreter. Indeed, in some cases I tried to intentionally “make it weird” just by doing things in an arbitrarily different way from what others had explored before. Just to get some exercise.

But it turns out that some of those features I’d thrown in just to be contrary are—in hindsight—apparent solutions to problems with Push. So maybe it’s time to look at ReQ again.

What was ReQ? What is ReQ?

The oldest (old enough to be called “traditional”) data structures used in genetic programming are probably trees. Specifically, the sort of S-expression trees that John Koza pioneered in his influential work. An S-expression interpreter cycle is something like, “Starting at a root [function] node, recursively evaluate its arguments until the root is fully evaluated.” There are inevitably syntactic and semantic constraints in play—for example, the arguments of a function must be of the right type—but they are familiar ones in general, nothing weird. S-expressions tend to (but don’t always) rely on side-effect-free value-returning root nodes to model data, with a root function’s return value being interpreted as “the answer” in an unambiguous and familiar way.

In this sort of GP project you are looking for a function that maps data to predictions, and so you want the root node of the tree to be a function producing type Prediction (whatever that should be).

There should be a picture here. But there’s not.

Designers of alternatives like LinearGP have mainly focused on imperative serially-executed code collections, something more like “assembly language” in structure: “Point an execution pointer at the first opcode, do that thing right now, and move the execution pointer to either the next position or wherever the opcode suggested instead.” Here, conventions from the low-level design of hardware are more often in play, with certain opcodes writing result values to certain “registers”. There is often an assumption (familiar to hardware people but not frequently seen in software development languages) that the “returned result” or “answer” will end up being written to some special place, by some special time limit.

But even relatively simple code containing GOTO or JMP loops might end up looping forever, or for a very long time indeed, in which case we’re left wondering what the answer “should have been”. And this is GP, so we must expect to be running arbitrary code, and our goal of producing an answer in a given location by some deadline might fail. In those cases we’ll be left wondering what to do when a prospective “function” produces no answer at all, at least by the deadline.

There should have been a picture here, as well. But also, in this case, there was not.

Others, like Lee Spector and our mutual colleagues, loaded a lot of the functionality of the push-down stack data structure into the dynamics of the interpreter itself—rather than the program’s intrinsic structure. The Push interpreter loop boils down to something like “pop the next token from a special stack where the program tokens are stored, and perform the task indicated by that token—one that generally ends up pushing other new tokens onto one or more stacks.” In Push we consume arguments, whereas most LinearGP systems overwrite values in designated registers.

Indeed, in Push there’s no “place” that isn’t a stack, and so we have to designate “the answer” by making an appointment: we will look at a specified item on a particular stack, when the program “finishes running”. But an arbitrary Push loop is just as prone as LinearGP was to reach out towards algorithmic infinity.

It’s entirely likely (and necessary) that some instructions consume arguments from stacks, and then push back more items than they consumed, increasing the net number of items on stacks. Under certain conditions of iteration and recursion, plus this growth, it’s possible that the “running program” stack will grow forever, and as a result the code won’t ever technically “halt”.

We tend to set an arbitrary time limit on Push program execution because of this tendency towards non-halting dynamics, and look in the “agreed upon place” for our possible answer at the appointed time. But also we will need to expect occasions when “no answer” has been returned at all, at least by the deadline.

Picture here? One wishes… but no.

stand in line

So now let me explain ReQ.

I decided that ReQ would be based on a FIFO queue, analogous to the way those other execution schemes use their own favorite data structures (trees, lists, stacks) for their core metaphors. In ReQ the “code” is stored in a linear, ordered data structure from which items can be removed (“dequeued”) at only one end, and added (“enqueued”) only at the other end.

Imagine waiting for service at a ticket booth. The ticket-seller tells you to go to the end of the line and wait again. That’s what a lot of ReQ programs do.

The items in the queue are “code”.

Every ReQ item is a function that consumes zero or more arguments, and returns zero or more results. Obviously the “instructions” you’d recognize—familiar mathematical functions like “+” or “set-union”—act in ways similar to what you probably already suspect.

But in ReQ we will also call literals (including collections) a sort of “function”. The literal “42”, for example, is a “function” that takes no arguments, and returns itself. The collection [1 2 +] is similarly a “function” of no arguments, which also returns itself.

The basic ReQ execution loop is simple enough on the surface:

  1. Dequeue the top item (“function”) from the queue.
  2. If it still wants one or more arguments before it is “completed”, then consider every remaining item on the queue, in order, until we find an item of the correct type to serve as an argument (in any position) for that function.
  3. If we find an argument, then the function consumes the argument item (deleting it from the queue), and the result of (possibly partially) applying the function to the matching argument is enqueued at the tail of the queue.
  4. If on the other hand the “function” takes no arguments, for example if it’s a literal, then it simply moves from the head to the end of the queue.
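For concreteness, here’s a toy sketch of that loop in Python (rather than the Clojure of the original interpreter). The Op class, the is_scalar predicate, and the shortcut of only checking the leftmost open argument slot are all inventions of this sketch, not ReQ internals:

```python
from collections import deque

def is_scalar(x):
    # a stand-in Scalar check; note that bool is deliberately not a Scalar
    return isinstance(x, (int, float)) and not isinstance(x, bool)

class Op:
    """A ReQ 'function' token: argument slots still wanted (as predicates),
    arguments bound so far, and a callable to run once every slot is full."""
    def __init__(self, name, wants, fn, bound=()):
        self.name, self.wants, self.fn, self.bound = name, list(wants), fn, list(bound)

    def absorb(self, item):
        # bind one argument; return the finished value if no slots remain
        nxt = Op(self.name, self.wants[1:], self.fn, self.bound + [item])
        return nxt.fn(*nxt.bound) if not nxt.wants else nxt

def step(q):
    """One interpreter step, following rules 1-4 above. (This sketch only
    checks the leftmost open slot; a real version would try any position.)"""
    head = q.popleft()
    if not isinstance(head, Op):         # a literal 'returns itself'...
        q.append(head)                   # ...so it just moves to the tail
        return
    for i, item in enumerate(q):         # scan the queue, in order
        if head.wants[0](item):
            q_item = item
            del q[i]                     # the argument is consumed...
            q.append(head.absorb(q_item))  # ...partial result enqueued at tail
            return
    q.append(head)                       # nothing matched: move to the tail

plus  = Op('+', [is_scalar, is_scalar], lambda a, b: a + b)
times = Op('*', [is_scalar, is_scalar], lambda a, b: a * b)
q = deque([6, 3.2, plus, False, times, 3])
for _ in range(9):
    step(q)                              # settles into [False, ~37.2], then cycles
```

Nine steps of this settle a queue like [6 3.2 + false * 3] into [false 37.2], matching the hand-drawn trace worked out below.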

show don’t tell

Before things get complicated—and they will, because I want this to be expressive and flexible and robust—let me draw some cartoons to show how this trivial sort of dynamics works. Especially since there are some tricks already lurking in this simple-sounding version’s dynamics.

Here’s a simple program composed of some literals. I’m intentionally eliding some extra attributes of these structures for the sake of typographic simplicity here. I’m putting square brackets around the queue, and using standard programming syntax for the literals; I’ve labeled the “head” and “tail” of the queue for clarity.

(head) [6 3.2 "foo" false 3] (tail)
       [3.2 "foo" false 3 6]
       ["foo" false 3 6 3.2]
       [false 3 6 3.2 "foo"]
       [3 6 3.2 "foo" false]
       [6 3.2 "foo" false 3]
       [3.2 "foo" false 3 6]
       ... (cycling forever)

In each step of execution, I dequeue one item. It’s a function, but it takes no arguments, and so I am simply moving it to the tail and enqueueing it there.

Notice that there’s no obvious way to say this “ends” without some kind of memory. For now, imagine that the program “cycles forever” through these five steps. Bear with me.

Constants are boring, right? Fine, be that way. Let’s do some simple arithmetic, with partial function application. Remember that I said that we are looking for only one argument—in any position—of the function we’ve dequeued from the head. Here I am showing what I meant by that, again using standard programmers’ syntactic conventions like + for numeric addition:

[6 3.2 + false * 3]
[3.2 + false * 3 6]
[+ false * 3 6 3.2]
[false * 6 3.2 +(3,Scalar)]
[* 6 3.2 +(3,Scalar) false]
[3.2 +(3,Scalar) false *(6,Scalar)]
[+(3,Scalar) false *(6,Scalar) 3.2]
[false *(6,Scalar) 6.2]
[*(6,Scalar) 6.2 false]
[false 37.2]
[37.2 false]

In this example, things start much the same way as before. Literals get shifted to the tail of the queue as we proceed, until we encounter the first function that takes an argument: “+”.

Implicitly, I am expecting the function “+” to have a signature that looks something like “+(Scalar,Scalar)->Scalar”. Similarly, arithmetic multiplication would have a signature something like “*(Scalar,Scalar)->Scalar”. That is, each takes two arguments of type Scalar, and produces a single Scalar result.

So in the cartoon above, when I handle the bare + token, I’m really thinking of it as something more like +(Scalar,Scalar), a “bare” function with no arguments filled in. As I proceed through the queue, holding +(Scalar,Scalar) in hand as it were, I examine each item and ask it, “Can you be an argument for this thing?”

The Boolean value false ain’t a Scalar, so not that. The function *(Scalar,Scalar)->Scalar also is not a Scalar (it’s a function), so not that either. But 3, by definition, is. As soon as we “find” a match (in any position), we fill the leftmost matching argument with the item, and put the partially evaluated intermediate result at the end of the queue. Not only does + disappear from the head, but 3 disappears from its position, and a new +(3,Scalar)->Scalar function is enqueued at the tail.

In this simple example, applying a function to an argument found in the queue—any matching argument at all—transforms the function into the result of partially applying it to that argument, and consumes the argument itself. They “merge”, in some sense, to form a new whole.

Let me emphasize that, as in the prior example with only literals, we also end up here in a “state” that actually loops forever. Even if we were to end up with only one value on the queue, we could think of the system as “looping” over a single value even when all the functions have been combined into that one value. Hell, even if the queue were empty, containing no functions at all, we could still think of it as “looping forever” over the empty set of its items.

Hold that weird thought in mind for a while.

I know it probably gives you the heebie-jeebies. Look away for a while if you have to, so it doesn’t know how bad you feel about it. Ponder, if you want to, some familiar scheme for “halting if nothing changes”—I promise we all do that at some point when faced with this weird trick. But don’t grab the idea of “halt when cycling” too tightly, because I’m going to eventually demand you go back to “looping forever”.

If it helps at all, maybe you can think about how pretty a cellular automata picture is. Or ponder the utility of an infinite series in higher mathematical applications. I dunno, find something fun and mathematically abstract like that. But let this coexist in your mind with “halt when cycling”, because that isn’t good enough alone.

Let’s see a little fun before we quit today. This scheme, like Push, is remarkably flexible with respect to program order. Here are some of the permutations of that prior little six-item math blob, and what each ends up doing at the point where it cycles:

[6 3.2 + false * 3]  ...becomes...  [false 37.2]
[3.2 6 + false * 3]  ...becomes...  [false 28.8]
[+ 3.2 6 false * 3]  ...becomes...  [false 27.6]
[* + 6 3.2 false 3]  ...becomes...  [false 21.2]

Basically, each different permutation is arithmetically manipulating the three constants with its two functions in a slightly different order. All of them skip over the intervening false along the way. This is a lot like[1] the way Push handles permuted programs.

slightly more interesting

Before we get to the odd stuff involving contextual violations of what I’ve already said, let me mention (1) functions that return multiple results, (2) combinators, and (3) higher-order functions.

Let me present a mathematical function first: one which takes a single Complex argument and returns a tuple of its real and imaginary parts. That is, it has a signature like this: reim(Complex)->(Scalar, Scalar). I’m using parentheses here, rather than the square brackets I was using for the queue itself, because I’m honestly not quite settled on notation. I mean by this sugar to say “this function returns this and this, in this order”, not that it’s a vector or a point or anything fraught like that. I mean that when I apply the function reim to a Complex value like {real:8,imaginary:3}, I will therefore get the values 8 and 3, in that order. Not an explicit structure that contains them, just those two things.[2]

Whenever a function returns multiple results—as opposed to returning a collection of results, which I will put off for now—we will append all of them in the given order to the queue. For example, here the reim does indeed find a Complex value as its argument, and produces both components as its result:

[reim false (8+3i)]
[false 8 3]
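A sketch of just this multiple-results rule, with reim hard-coded as the only argument-taking “function” (a simplification of mine, for the sake of the sketch):

```python
from collections import deque

def step(q):
    """One step where the head may return multiple results: here the only
    'function' is a hard-coded reim; everything else is a literal."""
    head = q.popleft()
    if head != 'reim':
        q.append(head)               # literals move to the tail as usual
        return
    # scan, in order, for the first Complex argument
    i = next((k for k, x in enumerate(q) if isinstance(x, complex)), None)
    if i is None:
        q.append(head)               # no argument found: move to the tail
        return
    z = q[i]
    del q[i]
    q.extend([z.real, z.imag])       # ALL results enqueued, in the given order

q = deque(['reim', False, 8 + 3j])
step(q)                              # -> deque([False, 8.0, 3.0])
```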

Simple enough? Maybe it seems like it. We’ll see over time. There’s some weird shit in here that will reach back and make you wonder.

Next, as in Push (and many other functional and pseudo-functional languages) there are in ReQ a number of combinators which manipulate the order and structure of things from the queue. For a relatively obvious example, we can consider a swap combinator, a function with a signature like swap(a:Any,b:Any)->(b,a), which takes two arbitrary items from the queue and enqueues them in the opposite order at the tail:

[swap 1 2 3 4]
[2 3 4 swap(1,Any)]
[3 4 swap(1,Any) 2]
[4 swap(1,Any) 2 3]
[swap(1,Any) 2 3 4]
[3 4 2 1]
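Here’s a toy model of that swap trace in Python—my own sketch, with Any modeled as “whatever happens to be next in the queue”, since every item matches Any anyway:

```python
from collections import deque

class Swap:
    """A sketch of swap(a:Any,b:Any)->(b,a); `bound` holds arguments so far."""
    def __init__(self, bound=()):
        self.bound = list(bound)

def step(q):
    head = q.popleft()
    if not isinstance(head, Swap):
        q.append(head)                     # literals move to the tail
    elif not q:
        q.append(head)                     # nothing left to grab: move along
    else:
        arg = q.popleft()                  # Any matches the very next item
        if head.bound:
            q.extend([arg, head.bound[0]]) # both results, in reversed order
        else:
            q.append(Swap([arg]))          # partial: swap(arg,Any)

q = deque([Swap(), 1, 2, 3, 4])
for _ in range(5):
    step(q)                                # -> deque([3, 4, 2, 1])
```

Five steps reproduce the trace above: the half-filled swap(1,Any) rides around the queue until 2 comes within reach.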

For a more complex argument-handling dynamic, let me present a reverse! function, which takes as its argument an unusual thing we’ll call Context, and which returns a new Context with the elements in reverse order[3]. That is, its signature is something like reverse!(Context)->Context.

The thing about Context, which sets it apart from Collection, is that it refers to the enclosing collection of a function being executed. In essence, the reverse! function “reaches up” to the level of the queue which originally contained it—the scope in which we’d normally be looking for a function’s arguments—and then acts on that to reverse its order. The example isn’t very exciting or surprising, despite the weird type convention:

[reverse! 1 2 3 4]
[4 3 2 1]

Reasonable enough for now.

Finally, since I’ve slipped the notion of “collections” in already, let’s look at the possibility of higher-order functions in ReQ. Let me start with one that’s probably the most familiar, map. The map function takes a Collection and a function as its arguments, and it applies that function to every item in the collection in parallel:

[ map +(7,Scalar) [3 6 9] ]
[ [3 6 9] map(+(7,Scalar),Collection) ]
[ map(+(7,Scalar),Collection) [3 6 9] ]
[ [10 13 16] ]

Well, actually… I have in a real sense lied about how map works in ReQ. I don’t think the implementation really acts “in parallel”, at least not in the sense you’re imagining. And it’s not necessary for it to be a function of one argument, as you might have expected from this slightly misleading example that gives familiar results.

For example, what happens when there are items in the collection that are not valid arguments for the bound function? What would my map +(7,Scalar) example do if there were a false or a String in there?

Let’s look, and this time I’ll add some steps showing the interior workings of map:

[ map +(7,Scalar) [3 6 false 9 "foo"] ]
[ [3 6 false 9 "foo"] map(+(7,Scalar),Collection) ]
[ map(+(7,Scalar),Collection) [3 6 false 9 "foo"] ]
[                             [6 false 9 "foo" ]  ] ; (10)
[                             [false 9 "foo" ]    ] ; (10 13)
[                             [false "foo" ]      ] ; (10 13 16)
[ [false "foo" 10 13 16] ]

I’ve drawn a little semicolon and a “temporary buffer” off to the right of the queue. Does that make sense?

In each intermediate step of the “transaction” that implements this mapped function (which wants one Scalar argument), the first remaining Scalar in the collection is consumed, and the result of applying the function to it is enqueued in a temporary buffer of results. I keep doing that, one fresh function at a time, until no argument suitable for the function can be found in the collection. Then I terminate the cycle of applications, and the buffered results are merged at the tail of the now-reduced collection in which they appeared.

In other words, this Collection is treated as if it were a sort of queue in its own right. Also, notice that in the prior case where all the items in the collection were Scalar values, the result turned out to be as if we had mapped a function in the traditional “parallel” way. But in this second example, where there are extra non-argument items present, what we’ve done has left some “leftovers”. We deal with them in a manner consistent with the queues we’ve been using at the top program level, though with slightly changed dynamics—we’re setting aside the results as we produce them.

In other other words, I am—probably arbitrarily—defining this ReQ version of map here to have internally weird behavior so that I can match the sort of behavior we’re used to in other languages. I am implicitly saying its applied function only consumes one argument, and that the temporary results obtained when it has consumed one argument are set aside and taken out of circulation in a sort of buffer. Once my mapped function cannot produce another result, I’m declaring that the cycle is done, and at that point I merge the ordered collection of intermediate results back in at the tail of the argument queue.

This means that map—as I’ve shown it here, since there are many other variants I could have imagined—applies its argument function over a collection with arbitrary items and will selectively transform the salient arguments without fucking the other ones up, or raising a runtime error.

As long as I am firm about saying map consumes only one argument in each application, and terminates after the first trial where the function finds no arguments, some strange outcomes arise when the mapped function isn’t as simple as +(7,Scalar). For instance here, where I am calling map with a function that takes two arguments, applying the same dynamics I spelled out before (consume one and no more arguments, buffer intermediate results, merge when no argument is found) to +(Scalar,Scalar):

[ map + [3 6 false 9 "foo"] ]
[ [3 6 false 9 "foo"] map(+(Scalar,Scalar),Collection) ]
[ map(+(Scalar,Scalar),Collection) [3 6 false 9 "foo"] ]
[ [false "foo" +(3,Scalar) +(6,Scalar) +(9,Scalar)] ]
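Both map behaviors—full application and one-argument partial application—fall out of the same buffering rule. Here is a sketch, where is_scalar and the helper name req_map are my own inventions, and the string "+(3,Scalar)" merely stands in for a real partial application:

```python
def is_scalar(x):
    # a stand-in Scalar check; bool is deliberately excluded
    return isinstance(x, (int, float)) and not isinstance(x, bool)

def req_map(fresh_fn, coll):
    """One 'map transaction': each fresh copy of the function consumes exactly
    one matching item; results buffer up, then merge at the tail at the end."""
    coll, buffer = list(coll), []
    while True:
        i = next((k for k, x in enumerate(coll) if is_scalar(x)), None)
        if i is None:
            return coll + buffer   # leftovers keep their order, results follow
        buffer.append(fresh_fn(coll.pop(i)))

# fully-applied case: map +(7,Scalar) over [3 6 false 9 "foo"]
out1 = req_map(lambda x: x + 7, [3, 6, False, 9, "foo"])
# bare +: each fresh copy binds just one argument, leaving partials behind
out2 = req_map(lambda x: "+({0},Scalar)".format(x), [3, 6, False, 9, "foo"])
```

out1 matches the earlier trace, [false "foo" 10 13 16], and out2 the one just above, with three hungry partial applications at the tail.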

Other familiar-seeming functional workhorses like reduce, under these quirky metaphors, can do some pretty weird shit very easily.

Say reduce works this way, when we give it familiar arguments at least. Let’s give it an argument that is a function wanting two identically-typed arguments, one that produces a result of that same type. That is, reduce((A,A)->A,[Any]). Here, unlike map, we will consume one argument to produce an intermediate result, and then replace that intermediate result at the head of the collection and cycle again, even if it’s a “final” result.

Well, +(Scalar,Scalar) is a function that matches that signature, right?

[ reduce + [3 5 false 9 "foo"] ]
[ [3 5 false 9 "foo"] reduce(+(Scalar,Scalar),Collection) ]
[ reduce(+(Scalar,Scalar),Collection) [3 5 false 9 "foo"] ]
[                                     [+ 3 5 false 9 "foo"] ] ; ()
[                                     [5 false 9 "foo"] ] ; (+(3,Scalar))
[                                     [+(3,Scalar) 5 false 9 "foo"] ] ; ()
[                                     [ false 9 "foo"] ] ; (8)
[                                     [8 false 9 "foo"] ] ; ()
[                                     [false 9 "foo" 8] ] ; ()
[                                     [+ false 9 "foo" 8] ] ; ()
[                                     [false "foo" 8] ] ; (+(9,Scalar))
[                                     [+(9,Scalar) false "foo" 8] ] ; ()
[                                     [false "foo"] ] ; (17)
[                                     [17 false "foo"] ] ; ()
[                                     [false "foo" 17] ] ; ()
[                                     [+ false "foo" 17] ] ; ()
[                                     [false "foo"] ] ; (+(17,Scalar))
[                                     [+(17,Scalar) false "foo"] ] ; ()
[                                     [false "foo" +(17,Scalar)] ] ; ()
[                                     [+ false "foo" +(17,Scalar)] ] ; ()
[                                     [false "foo" +(17,Scalar) +] ] ; ()
[                                     [false "foo" +(17,Scalar)] ] ; ()
[ [false "foo" 17] ]

In order to make this reduce “act something like” the versions found in other languages, I am adding some odd internal structures.

As in map, the idea here is to drop the applied function at the head of the collection, and then find an argument and produce an intermediate result in the buffer. Unlike map, we recycle the partial result until it finally falls through the collection without gathering any arguments. You can see that in the first few steps, where 8 (remember, it’s a function too!) gets placed back at the head, and ends up at the tail because it can gather no arguments.

Then we start over again, with a new version of the applied function.

This continues until the applied function itself slides through the entire collection without finding any arguments. When that happens, we know the result is complete. However, what we do then is intended to bring the result back in line with other traditional reduce functions—and is ad hoc as fuck, if you ask me, but that’s how programming languages develop, isn’t it? We look at the “final thing”, notice that it still takes arguments, and instead of returning it we go back to the last result that took no arguments and place that at the tail of the collection. In this case we “really” end up with +(17,Scalar), but the last intermediate result that took no arguments—the thing we actually return—is 17.

Why not return +(17,Scalar) and be done with it? I don’t have a good argument for that, either way. Maybe one should. Maybe the traditions of functional programming are using “default values” and arbitrary termination conditions themselves when they define and implement reduce. After all, we’ve done all the work in either case, and to back away from the final final result +(17,Scalar) seems to be a kind of backing away from an interesting formalism. Somebody may throw a Scalar in there later; maybe reduce really “wants” to produce a function that looks for a default value.
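The whole reduce transaction, back-off rule included, compresses into a short sketch (hypothetical names again; the fn2 argument stands in for +(Scalar,Scalar)):

```python
def is_scalar(x):
    # a stand-in Scalar check; bool is deliberately excluded
    return isinstance(x, (int, float)) and not isinstance(x, bool)

def req_reduce(fn2, coll):
    """One 'reduce transaction' for a two-argument function like
    +(Scalar,Scalar), including the back-off rule described above."""
    coll = list(coll)
    while True:
        i = next((k for k, x in enumerate(coll) if is_scalar(x)), None)
        if i is None:
            return coll                  # a fresh copy slid through empty-handed
        a = coll.pop(i)                  # bound: the partial +(a,Scalar)
        j = next((k for k, x in enumerate(coll) if is_scalar(x)), None)
        if j is None:
            return coll + [a]            # back off to the last argument-free value
        coll.append(fn2(a, coll.pop(j))) # complete results rejoin at the tail

out = req_reduce(lambda a, b: a + b, [3, 5, False, 9, "foo"])
```

Run on [3 5 false 9 "foo"], this produces [false "foo" 17], just as in the long trace above—the hungry +(17,Scalar) is discarded in favor of its bound 17.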

But either way, whether we “back off” or not, the internal dynamics I’ve sketched here have the sort of ReQish flavor I am seeking. It’s using the same metaphors, and only slightly-modified dynamics, to achieve a result close enough to the familiar definition of reduce to be recognizable. At least in simple cases.

And it skipped over weird non-argument things in the collection, which is the feature of interest in all ReQ (and Push and other GP) representation schemes. It works more often than not.

Note also that there does not seem to be an obligation—except as a social one, and insofar as it fits convention—that reduce in this scheme needs to only accept as its argument a function with a signature like (<T>,<T>) -> <T>. One might be tempted to wonder what this reduce would do under other circumstances.

Loads of things, I bet. What if the function were copies(c:Scalar,<T>) -> [<T>], which produces a collection containing c copies of any item? What if the function were itself another reduce, which in turn grabbed other functions to apply from inside the collection in which it’s found?

Are there situations where it could blow up into infinite non-halting recursion?

Sure! That’s how we get interesting behavior. More interesting than “cycling forever”, even….

First taste is free

OK. It’s been a long day here already, and I haven’t even gotten to the interesting things I’ve been eliding. Let me say it quickly now, and show it next time.

You will recall that I said something like “I might be lying about this” when I described the execution loop of the interpreter. Specifically, I stated that when a function is applied to an argument, both the function and its argument are consumed, and the result of applying the function to the argument is enqueued.

Yeah, well. Not technically always the case.

In the new ReQ, each function has an associated energy. The energy of an item is a Scalar value, indicating how many times it can act as a function. Two different people, hearing this, have independently said “cats have nine lives” about here. So yeah, that.

Functions have N lives. Here’s the slightly more complicated execution loop, including details about energy:

  1. the first item is dequeued
  2. if its energy is (strictly) positive, it is able to look for an argument and produce the result of applying itself to that argument; if its energy is zero or less, it is discarded immediately without producing any results
  3. if the item takes no arguments, then it always works (as a function), and the results of calling it are moved to the tail of the queue; for constants, this means energy will appear to drop to 1.0 whenever they are moved from the head to tail, because they are “really” making a copy of themselves, not moving
  4. if no suitable argument is found for the function in the queue, then the function is simply moved, unchanged, to the tail
  5. if one argument is found, and the function’s energy is 1.0 or less, then the results of applying it to that argument are enqueued at the tail; every result is given an energy of 1.0; in cases where functions are only partially applied, their arguments retain their full energy until the final value is obtained
  6. if an argument is found, and the function’s energy is greater than 1.0, then first a copy of the function with a decremented energy is enqueued, and then the results of applying it are enqueued (still with an energy of 1.0, and with arguments of partial functions retaining their original energy)

Here’s how this plays out in some of the examples above.

I’ll first show that in the case where everything has an energy 1.0, we get the same behavior I described in all the prior examples: Constants move to the tail, but with energy changed to 1.0, because while they take no arguments they do produce themselves as their results, and results by default have energy 1.0. The reim function—which has energy 1.0 but also finds an argument—is consumed (because its energy drops to 0.0), and produces new results with the default energy 1.0. I’ll use little subscripts to indicate the energy of each thing:

[reim₁ false₁ (8+3i)₁]
[false₁ 8₁ 3₁]
[3₁ false₁ 8₁]

But what if everybody in the queue has 2 energy, instead of 1?

[reim₂ false₂ (8+3i)₂]
[false₂ reim₁ 8₁ 3₁]
[reim₁ 8₁ 3₁ false₁]
[8₁ 3₁ false₁ reim₁]

The reim₂ we start with produces a “child” with lower energy in addition to its results. The decremented function is still here, and it will stay there until—possibly because of something happening “elsewhere”, which we have not yet discussed—it finds an argument to act on. In the meantime, the false₂ “produces” a false₁, because it “always works” (it has no arguments). The (8+3i)₂, on the other hand, is consumed completely by the reim₂, and produces two new values with energy 1.
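The energy rules can be sketched for this reim example specifically (hard-coding reim is a simplification of mine, as is representing each item as a [value, energy] pair):

```python
from collections import deque

def step(q):
    """One energy-aware step (rules 1-6 above), with the single hypothetical
    function 'reim'; every queue item is a [value, energy] pair."""
    head, energy = q.popleft()
    if energy <= 0:
        return                               # rule 2: spent items just vanish
    if head != 'reim':
        q.append([head, 1.0])                # rule 3: constants copy themselves
        return
    i = next((k for k, (v, _) in enumerate(q) if isinstance(v, complex)), None)
    if i is None:
        q.append([head, energy])             # rule 4: moved unchanged, energy kept
        return
    z, _ = q[i]
    del q[i]
    if energy > 1.0:
        q.append([head, energy - 1.0])       # rule 6: decremented copy first
    q.extend([[z.real, 1.0], [z.imag, 1.0]]) # results always get energy 1.0

q = deque([['reim', 2.0], [False, 2.0], [8 + 3j, 2.0]])
for _ in range(3):
    step(q)
```

Three steps of this reproduce the trace just shown: [8₁ 3₁ false₁ reim₁], with the decremented reim₁ still circulating, waiting for a Complex value that may never arrive.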

Make sense?

How about another? Here I’ve given the swap (and all the constants) an energy of 2. Because even I got confused just writing this out by hand, I have also been explicit about arguments:

[swap₂(Any,Any) 1₂ 2₂ 3₂ 4₂]
[2₂ 3₂ 4₂ swap₁(Any,Any) swap₁(1₂,Any)]
[3₂ 4₂ swap₁(Any,Any) swap₁(1₂,Any) 2₁]
[4₂ swap₁(Any,Any) swap₁(1₂,Any) 2₁ 3₁]
[swap₁(Any,Any) swap₁(1₂,Any) 2₁ 3₁ 4₁]
[2₁ 3₁ 4₁ swap₁(swap₁(1₂,Any),Any)]
[3₁ 4₁ swap₁(swap₁(1₂,Any),Any) 2₁]
[4₁ swap₁(swap₁(1₂,Any),Any) 2₁ 3₁]
[3₁ 4₁ 2₁ swap₁(1₂,Any)]
[4₁ 2₁ swap₁(1₂,Any) 3₁]
[2₁ swap₁(1₂,Any) 3₁ 4₁]
[swap₁(1₂,Any) 3₁ 4₁ 2₁]
[4₁ 2₁ 3₁ 1₂]
[2₁ 3₁ 1₂ 4₁]
[3₁ 1₂ 4₁ 2₁]
[4₁ 2₁ 3₁ 1₁]

I bet before you watched that unfold, you said, “Hey, the swap₂ ends up contradicting what the first swap₁ does, right? It will all end up in the same position as originally ordered?” That’s what I thought… but no. Then I worked it out, and what it ends up doing is rather different.

What would swap₆ do to a list of sufficient length by the time things settled down? Say we started with [swap₆(Any,Any) 1₂ 2₂ 3₂ 4₂ ... 49₂ 50₂], what would we end up with when cycling starts?

Hey, here’s a fun one: What would you expect a queue like [reduce swap 1 2 3 ...] to end up being? (A: I have no fucking idea, but it tires me out thinking of doing it by hand.)

What do you think a reduce₂ would do, compared to a reduce₁? Maybe something different? (A: you bet it is.)

In any case, I am left with the sense that it will all be interesting.

More about that bit later. Suffice to say, the dynamics resulting from interactions between even simple combinators, functions and literals seem to be remarkably flexible, computationally.

Lively, even.

And that is in fact one of the criteria I’ve been saying is valuable in GP representation schemes. Lots of stuff happens, and much of it “productive”.

  1. This turns out to be different from the way Push handles permuted programs. In Push we can leave partially-completed result fragments on the stack and forget about them forever. For example if we compare the end product of executing Push programs (3 4 +) and (+ 3 4), in the first case we’ll end up with a single 7 value on the numerical stack, and in the second case we’ll have the two unchanged arguments. In ReQ, by contrast, if there are enough functions wanting enough arguments of the right type, we’ll eventually cycle around long enough to consume everything, barring some “competition” that might happen in cases where there are two partially-completed functions that have grabbed one another’s arguments along the way…. 

  2. I suppose if I wanted to return a Collection containing the two values, I would specify a function more like reim(Complex)->[Scalar Scalar], where it’s literally another queue/vector thing like the program itself. 

  3. A different reverse function—one which takes a Collection as an argument and flips it on end—might work in the more reasonable way: reverse([Any])->[Any], meaning that [reverse 1 [2 3] 4] will produce [1 4 [3 2]], having consumed [2 3] and enqueued its reversed form at the tail. Make sense? The reverse! acts on the container originally holding the function itself, while the reverse looks within that container for its argument.