Notes on higher-order functions in ReQ
Draft of 2019.02.11
I want to work through and clarify some notes on how ReQ evaluation will (or should) work, especially for higher-order functions. This may be so esoteric—and subject to change in the near future—that these notes are only of use to me. But I feel as though I might as well drop them here in public, on the off chance they spark some insights as the language specification and interpreter move towards release.
Part names
First, a quick review. In ReQ, every evaluation process is associated with a collection. I’ve been drawing these with square brackets, and saying they are “like queues”, but that’s a bit of an oversimplification.
I tend to draw a ReQ queue (or “program”) like this. It’s an ordered collection of items, some of which may be more collections. Each item has a type (generally not shown explicitly), and each (as I’ll remind us all) is a function. Every item may also have a lifetime associated with it, which I’ll also try to explain below.
[+ 9 false and [3 9 77 *] <]
The execution cycle is simple to describe, in many cases:
- the item at the head of the queue is dequeued; call it the “acting function”
- the remainder of the queue is searched for the first item which can act as any unassigned argument for the acting function
- as soon as any item is found, it is removed from its place in the queue, and the result of applying the function to that single argument (which may be a partially-applied closure) is enqueued at the tail of the collection; if there are multiple results, they are all enqueued in the order produced (details below)
- if no item is found, the “result” of the function is itself, with no change
Starting with the example I showed above, let me walk through some steps of execution:
[+ 9 false and [3 9 77 *] <] ;; + finds 9 and consumes it, producing a partial result
[false and [3 9 77 *] < +(9,«Scalar»)] ;; false is a literal
[and [3 9 77 *] < +(9,«Scalar») false] ;; and finds false, producing a partial result
[[3 9 77 *] < +(9,«Scalar») and(false,«Boolean»)] ;; [3 9 77 *] is a literal
[ < +(9,«Scalar») and(false,«Boolean») [3 9 77 *]] ;; < has no arguments
[ +(9,«Scalar») and(false,«Boolean») [3 9 77 *] <]
...
;; all compatible arguments are gone, so we cycle forever
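Just to make that cycle concrete, here is a minimal Python sketch of a single step, under my own simplifying assumptions (this is not ReQ syntax and not the real interpreter): an “op” is a (name, argument-predicates, bound-arguments) tuple, anything else is a literal, and lifetimes are ignored.

from collections import deque

def is_scalar(v):
    return isinstance(v, (int, float)) and not isinstance(v, bool)

def step(queue):
    q = deque(queue)
    acting = q.popleft()                          # dequeue the acting function
    if isinstance(acting, tuple) and acting[1]:   # it still wants arguments
        name, needs, got = acting
        for i, item in enumerate(q):
            if needs[0](item):                    # first compatible item...
                del q[i]                          # ...is removed from its place
                acting = (name, needs[1:], got + [item])
                break
    q.append(acting)                              # the result goes to the tail
    return list(q)

plus = ("+", [is_scalar, is_scalar], [])
print(step([plus, 9, False, "and"]))
# -> [False, 'and', ('+', [...], [9])]   i.e. +(9,«Scalar») is enqueued at the tail

(Fully-applied ops aren’t evaluated here; the sketch only shows the dequeue-search-enqueue bookkeeping.)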
Notice also that I will denote unassigned arguments of functions, where I need to make them explicit, like “«Item»” to indicate “something of type Item”.
The terminology I want to standardize here is “result” and “outcome”.
The result of applying a function to a queue is either (1) the function itself, if no suitable argument exists, or (2) the result of partially applying the function to the first matching argument found.
The outcome of applying a function to a queue is the new queue one gets. That is, it’s the entire updated queue.
So for a simple example, when I process the queue below, and apply +:
[+ true 7 false]
[true false +(7,«Scalar»)]
The + function does find a scalar, consumes it, and produces the result +(7,«Scalar»), a new function of one argument.
Contrast with this:
[+ true false]
[true false +]
Here, the + function finds no argument, and so the result is the function itself, unchanged.
Functions in ReQ can produce multiple results. These are produced in a strict order defined in the function itself. When they are enqueued, they are enqueued in the order produced.
So for example, a reim function might consume a «Complex» argument and produce two «Scalar» items:
[reim «Complex»(3,9) 77]
[77 3 9]
I may sometimes refer to this implicit ordered collection of result items as the “result packet”.
Lifetime
However, recall that all functions in ReQ also have a lifetime. The default lifetime for all items is 1.0.
Each time a function is successfully invoked (that is, it either takes no arguments, or it finds a matching argument in the context), its lifetime is decremented by 1.0.
Any item with a lifetime of zero or less will disappear immediately upon “creation”. However, any function that has a strictly positive lifetime is retained when made, and can be invoked when it becomes an active item. Functions with a lifetime of 1.0 or more produce an extra copy of themselves, with a decremented lifetime, whenever invoked. (Of course, when the lifetime was exactly 1.0, that extra copy has a lifetime of zero and vanishes immediately, so in practice only items with lifetimes above 1.0 leave a visible copy behind.)
So for example, like all items, the default lifetime of the function + is 1.0. It will disappear as soon as it consumes its first argument. The lifetime of the result of every successful function invocation is always the default value of 1.0.
Here, I’ll show you the (implied) default lifetimes as explicit subscripts:
[+₁ true₁ 7₁ false₁]
[true₁ false₁ +(7,«Scalar»)₁]
However, if I vary this example so that + has a lifetime larger than 1, a decremented copy will be retained as an “extra” result. For example, here is a +₄ with a lifetime of 4, which produces (in addition to its normal result) a +₃.
[+₄ true₁ 7₁ false₁]
[true₁ false₁ +(7,«Scalar»)₁ +₃]
Retained functions like these are kept at the end of the result “packet”.
The lifetime of arguments in partially-applied functions is retained. That is, the metadata associated with the argument is retained, until it is consumed.
Functions with no arguments
Literals are functions with no arguments. They produce a duplicate of themselves when invoked, without consuming any other items. When they do so, their lifetime is decremented like any other invoked function.
The result of a literal is the literal itself.
Literals with larger lifetimes act like other functions. That is, they produce a result which includes a copy of themselves (with the default lifetime), followed by a decremented copy of themselves.
[999₄ 1₁ 2₁ 3₁]
[1₁ 2₁ 3₁ 999₁ 999₃]
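A tiny Python sketch of just the lifetime bookkeeping may help; the representation (each item as a (payload, lifetime) pair) is my own, not anything the interpreter promises.

def invoke(payload, lifetime, results):
    packet = [(r, 1.0) for r in results]       # every result gets the default lifetime 1.0
    remaining = lifetime - 1.0                  # an invocation always costs 1.0
    if remaining > 0:                           # anything at zero or below vanishes on creation
        packet.append((payload, remaining))     # the retained copy sits at the end of the packet
    return packet

print(invoke("+", 1.0, ["+(7,«Scalar»)"]))      # [('+(7,«Scalar»)', 1.0)]  -- no retained copy
print(invoke("+", 4.0, ["+(7,«Scalar»)"]))      # ...plus ('+', 3.0) at the end
print(invoke("999", 4.0, ["999"]))              # [('999', 1.0), ('999', 3.0)]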
Summary
All items in ReQ are functions—including «Collection» items, which are typically treated as literals. Every function has a scalar lifetime value associated as metadata; by default the lifetime is 1.0.
When a function is invoked, its lifetime is decremented by 1.0.
Literals take no arguments. They are immediately invoked when they are the active item, consuming no arguments, and producing a copy of themselves.
Active functions that take arguments are not invoked unless there is a valid argument found in the queue. If an active function is not invoked, it produces a copy of itself, without decrementing its lifetime. If it is invoked, it produces its defined results, its lifetime is decremented, and if it retains positive lifetime it will produce a copy of itself.
Application functions
apply
The apply function takes two items of «Any» type (which, of course, will both be functions). Its result will be the result packet as though the first argument were invoked on a collection containing only the second item. The obvious simple case is:
[apply(+,7)]
[+(7,«Scalar»)]
However, consider this one, and notice we’re only retaining the result packet; the incompatible argument 7 is simply lost:
[apply(99,7)]
[99]
Items with a larger lifetime produce a retained copy in their result packet:
[apply(99₄,7)]
[99 99₃]
And consider this one:
[apply(+,false)]
[+]
The apply function acts as though we were using + as the active item with a collection containing only the second argument: [+ false]. The result packet of + is that function itself. Also, recall that if a function is not invoked, its lifetime is retained without decrementing. So we would have:
[apply(+₄,false)]
[+₄]
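That “one-item collection” reading is easy to sketch in Python. This is my own toy model (an op is a (name, predicate) pair wanting a single argument; literals copy themselves; lifetimes are ignored), not ReQ itself.

def is_scalar(v):
    return isinstance(v, (int, float)) and not isinstance(v, bool)

def result_packet(f, context):
    if isinstance(f, tuple):                        # an op wanting one argument
        name, pred = f
        for y in context:
            if pred(y):
                return [f"{name}({y!r},«Scalar»)"]  # partial application
        return [f]                                  # nothing usable: the op itself
    return [f]                                      # a literal copies itself

def apply_(f, y):
    return result_packet(f, [y])                    # apply = the packet over [y]

plus = ("+", is_scalar)
print(apply_(plus, 7))      # ['+(7,«Scalar»)']
print(apply_(99, 7))        # [99]           -- the 7 is lost
print(apply_(plus, False))  # [('+', ...)]   -- the uninvoked +, as above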
do
Like apply, the do function takes two items of «Any» type. Its result will be the entire outcome as though the first argument were active on a collection containing only the second item. This leads to a different result from apply, which only retains the results, not the outcomes.
If the items are compatible, the result will be like that for apply:
[do(+,7)]
[+(7,«Scalar»)]
The lifetime of the applied function is decremented, as with apply. However, unlike apply, incompatible arguments are not consumed.
[do(99,7)]
[7 99]
For literals, the lifetime is decremented, but the unconsumed second argument remains part of the outcome:
[do(99₄,7)]
[7 99 99₃]
And finally:
[do(+₄,false)]
[false +₄]
try
Like apply and do, the try function takes two items of «Any» type. If the first argument can be invoked (with or without consuming the second), that produces the result; if it cannot, the result is the second argument, unchanged.
If the items are compatible, the result will be like that for apply:
[try(+,7)]
[+(7,«Scalar»)]
For literal functions, which can always be invoked, the result is:
[try(99,7)]
[99]
For argument-consuming functions that cannot be invoked, the result is the second argument:
[try(+,false)]
[false]
Positive lifetimes have the expected effects on successful argument-consuming functions:
[try(+₄,7)]
[+(7,«Scalar») +₃]
Scalars with a positive lifetime are always invoked, and therefore produce the standard result:
[try(99₄,7)]
[99 99₃]
Functions that are not invoked are discarded regardless of their lifetime:
[try(+₄,false)]
[false]
demand
Like the earlier application modes, the demand function takes two items of «Any» type. If the first argument can be invoked (with or without consuming the second), that produces the result; if it cannot, no result at all is produced.
If the items are compatible, the result will be like that for apply:
[demand(+,7)]
[+(7,«Scalar»)]
For literal functions, which can always be invoked, the result is as expected:
[demand(99,7)]
[99]
In all other cases where the function cannot be invoked with that argument, no result at all is returned:
[demand(+,false)]
[]
Positive lifetimes have the expected effects on successful argument-consuming functions:
[demand(+₄,7)]
[+(7,«Scalar») +₃]
Scalars with a positive lifetime are always invoked, and therefore produce the standard result:
[demand(99₄,7)]
[99 99₃]
Functions that are not invoked are discarded regardless of their lifetime:
[demand(+₄,false)]
[]
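Pulling the four modes together, here is a small Python sketch contrasting them under the same toy model (op = (name, predicate); literals copy themselves; lifetimes ignored). The names are mine, and the point is only the bookkeeping, not the real semantics.

def is_scalar(v): return isinstance(v, (int, float)) and not isinstance(v, bool)
def is_op(f):     return isinstance(f, tuple)
def matches(f, y): return is_op(f) and f[1](y)
def packet(f, y): return [f"{f[0]}({y!r},«Scalar»)"] if matches(f, y) else [f]

def apply_(f, y):  # keep the result packet; an unmatched op survives, y is dropped
    return packet(f, y)

def do(f, y):      # keep the whole outcome; an unconsumed y stays in front
    return packet(f, y) if matches(f, y) else [y] + packet(f, y)

def try_(f, y):    # the packet if f can be invoked at all, otherwise y alone
    return packet(f, y) if matches(f, y) or not is_op(f) else [y]

def demand(f, y):  # the packet if f can be invoked, otherwise nothing at all
    return packet(f, y) if matches(f, y) or not is_op(f) else []

plus = ("+", is_scalar)
for mode in (apply_, do, try_, demand):
    print(mode.__name__, mode(plus, 7), mode(plus, False), mode(99, 7))
# apply_  ['+(7,«Scalar»)']  [('+', ...)]         [99]
# do      ['+(7,«Scalar»)']  [False, ('+', ...)]  [7, 99]
# try_    ['+(7,«Scalar»)']  [False]              [99]
# demand  ['+(7,«Scalar»)']  []                   [99]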
greedy and greedy(x)
The greedy function takes one item x of «Any» type, and produces the ephemeral intermediate function greedy(x).
When greedy(x) is the active item, the wrapped function x is “run to completion” over the current context. That is, it searches through the entire environment for all possible arguments, consuming everything it can to produce a final form. Arguments are pulled from their locations in the context, and all results are enqueued at the end as though a single function application had occurred.
[greedy(+) 8 5 1]
[1 13]
The behavior when some arguments are unavailable is straightforward; what’s happening is just the underlying queue dynamics and partial application at work:
[greedy(and) 8 5 false 1]
[8 5 1 and(false,«Boolean»)]
When the function x finds no usable arguments at all, it is simply enqueued as normal (without losing lifetime):
[greedy(and) 1 2 3]
[1 2 3 and]
When the function x is a literal, it acts as you might expect:
[greedy(99) 1 2 3]
[1 2 3 99]
When the function x has a larger lifetime, a replica is produced in each iteration. New items produced as results stay on a separate “results stack” rather than in the context, so they cannot be acted upon by the other, earlier results. The items in the results stack are finished greedily, first to last, and when no arguments are available for any of them in the context, they are all enqueued in order:
[greedy(+₄) 8 5 false 1] ;; scratch space ("::" indicates function vs context)
[+₄ :: 8 5 false 1]
[+(8,«Scalar») +₃ :: 5 false 1]
[13 +₃ :: false 1]
[13 +(1,«Scalar») +₂ :: false] ;; no arguments available
[false 13 +(1,«Scalar») +₂] ;; notice that results are enqueued at the end
If the function x has a lifetime but takes no arguments, notice that (according to these rules) it will actually cycle through its whole lifetime “in place”:
[greedy(99₄) 1 2 3] ;; scratch space
[99₄ :: 1 2 3]
[99 99₃ :: 1 2 3]
[99 99 99₂ :: 1 2 3]
[99 99 99 99 :: 1 2 3]
[1 2 3 99 99 99 99]
Note that there may be unexpected (but hopefully well-defined) effects when context-switching and other complications are involved.
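Here is how I might sketch that run-to-completion behavior for + in Python, with my usual toy assumptions: scalars only, a single shared lifetime counter standing in for the chain of decremented copies, and results kept on a separate stack that is never offered back as arguments.

def is_scalar(v):
    return isinstance(v, (int, float)) and not isinstance(v, bool)

def greedy_plus(context, lifetime=1):
    ctx, results = list(context), []
    while lifetime > 0:
        args = [v for v in ctx if is_scalar(v)][:2]   # the next two usable scalars
        if not args:
            results.append(("+", lifetime))           # nothing to eat: enqueued, lifetime intact
            break
        for a in args:
            ctx.remove(a)                             # arguments are pulled from their places
        lifetime -= 1                                 # one invocation spent
        results.append(args[0] + args[1] if len(args) == 2
                       else f"+({args[0]},«Scalar»)") # stuck partial application
    return ctx + results                              # leftovers first, results at the tail

print(greedy_plus([8, 5, 1]))               # [1, 13]
print(greedy_plus([8, 5, False, 1], 4))     # [False, 13, '+(1,«Scalar»)', ('+', 2)]   cf. +₂ above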
The map family
map-apply
The map-apply function takes any item, and a «Collection». The first argument is treated as though apply were called on it and each item in the «Collection», and the accumulated result packets are returned as a new «Collection».
[map-apply(+,[1 2 3]) ]
[ [+(1,«Scalar») +(2,«Scalar») +(3,«Scalar»)] ]
Note that we’re talking about apply here, not do. Items that are incompatible are lost, and the uninvoked function is returned instead:
[map-apply(+,[1 false "foo"]) ]
[ [+(1,«Scalar») + +] ]
As with apply, the entire result packet is retained, in order, for each item in the «Collection» argument. Here, the mapped function has a positive lifetime, so in each case it returns a decremented copy of itself as well as the partial result.
[map-apply(+₄,[1 2 3]) ]
[ [+(1,«Scalar») +₃ +(2,«Scalar») +₃ +(3,«Scalar») +₃] ]
This can therefore get complicated, when lifetimes are involved:
[map-apply(+₄,[1 false 99]) ]
[ [+(1,«Scalar») +₃ +₄ +(99,«Scalar») +₃] ]
map-do
The map-do function takes any item, and a «Collection». The first argument is treated as though do were called on it and each item in the «Collection», and the accumulated outcomes are returned as a new «Collection».
It works like map-apply when the mapped function is compatible with items and consumes them:
[map-do(+,[1 2 3]) ]
[ [+(1,«Scalar») +(2,«Scalar») +(3,«Scalar»)] ]
However, recall that do collects the outcome rather than just the result packet. Incompatible arguments remain in place, and uninvoked functions are retained in place:
[map-do(+,[1 false "foo"]) ]
[ [+(1,«Scalar») false + "foo" +] ]
Here, the mapped function has a positive lifetime, and the outcome is just like that of map-apply:
[map-do(+₄,[1 2 3]) ]
[ [+(1,«Scalar») +₃ +(2,«Scalar») +₃ +(3,«Scalar») +₃] ]
However, it’s even more complicated when lifetimes are involved and incompatible arguments are present:
[map-do(+₄,[1 false "foo"]) ]
[ [+(1,«Scalar») +₃ false +₄ "foo" +₄] ]
map-try
The map-try function takes any item, and a «Collection». The first argument is treated as though try were called on it and each item in the «Collection», and the accumulated result packets are returned as a new «Collection».
It works like map-apply and map-do when the mapped function is compatible with items and consumes them:
[map-try(+,[1 2 3]) ]
[ [+(1,«Scalar») +(2,«Scalar») +(3,«Scalar»)] ]
However, recall that we’re using try, not apply or do, and so we discard functions that fail to match the arguments:
[map-try(+,[1 false "foo"]) ]
[ [+(1,«Scalar») false "foo"] ]
Here, the mapped function has a positive lifetime:
[map-try(+₄,[1 2 3]) ]
[ [+(1,«Scalar») +₃ +(2,«Scalar») +₃ +(3,«Scalar») +₃] ]
But again we are using try, and so it’s more complicated for mixed-type collection arguments:
[map-try(+₄,[1 false "foo"]) ]
[ [+(1,«Scalar») +₃ false "foo"] ]
map-demand
This variant may provide the behavior closest to what one “expects” from traditional functional settings. All compatible arguments in the collection are consumed, and all incompatible ones disappear.
Compatible function and argument:
[map-demand(+,[1 2 3]) ]
[ [+(1,«Scalar») +(2,«Scalar») +(3,«Scalar»)] ]
Mixed case:
[map-demand(+,[1 false "foo" 2 ]) ]
[ [+(1,«Scalar») +(2,«Scalar»)] ]
Here, the mapped function has a positive lifetime:
[map-demand(+₄,[1 2 3]) ]
[ [+(1,«Scalar») +₃ +(2,«Scalar») +₃ +(3,«Scalar») +₃] ]
But we are using demand, and so the incompatible item simply disappears, even when lifetimes are involved:
[map-demand(+₄,[1 false 2]) ]
[ [+(1,«Scalar») +₃ +(2,«Scalar») +₃] ]
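The whole map-* family is then just “concatenate the per-item packets into a new collection”. A Python sketch of that pattern, with only the apply-style and demand-style modes inlined for brevity (toy model as before; the do- and try-style modes drop in the same way; all names are mine):

def is_scalar(v):
    return isinstance(v, (int, float)) and not isinstance(v, bool)

def apply_mode(f, y):                     # keep the packet; an unmatched op survives, y is lost
    name, pred = f
    return [f"{name}({y!r},«Scalar»)"] if pred(y) else [name]

def demand_mode(f, y):                    # keep the packet only on success
    name, pred = f
    return [f"{name}({y!r},«Scalar»)"] if pred(y) else []

def map_mode(mode, f, collection):
    packets = []
    for item in collection:
        packets.extend(mode(f, item))     # accumulated packets, in collection order
    return [packets]                      # wrapped back up as a new «Collection»

plus = ("+", is_scalar)
print(map_mode(apply_mode,  plus, [1, False, "foo"]))   # [['+(1,«Scalar»)', '+', '+']]
print(map_mode(demand_mode, plus, [1, False, "foo"]))   # [['+(1,«Scalar»)']]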
Notes on map variants and type safety
I imagine that of the four map variations outlined above, the map-demand one is the most “like” a traditional functional programming language’s version. But of course most functional languages are real sticklers for type consistency, and so they tend to complain loudly when you try something like (map-apply + ["foo"]), don’t they?
The design core of ReQ is robustness for mismatched functions and arguments. The fundamental function-application process is more of an “asking” than it is a forced “telling”. One can imagine the active item (a function) “asking” the items in the collection if they’re compatible with its argument signature types. This helps us avoid a huge class of runtime errors, but at a “cost” of more complicated dynamics when mixed-type collections arise.
Notice also that I’ve done a lot of “as if” writing in those explanations. The try function acts as if the first argument were the active item, and the second were the only element in a collection.
reduce
We discussed greedy and greedy(x) above because reduce uses them in its execution.
The reduce function takes any item, and a «Collection». As with the map variants, we will be acting as though we were applying the first argument to the items in the second argument, but here we will be carrying the intermediate results forward in each step. As a result, the ReQ function called reduce will behave rather differently in many cases from the similar functions in human-writable languages. I’ll try to work out all the edge cases, but realize that this is definitely the territory in which we will often have to trust the interpreter (and of course its tests) to do the right thing in all cases.
Let me work through a basic example first, showing the intermediate (scratch) process in detail. The function x (the first argument of reduce) is applied as greedy(x) exactly as many times as there are items in the «Collection».
[reduce(+,[1 11 111 1111])] ;; scratch space
;; iteration #1
[greedy(+) 1 11 111 1111]
[+ :: 1 11 111 1111]
[+(1,«Scalar») :: 11 111 1111]
[12 :: 111 1111]
[111 1111 12]
;; iteration #2
[greedy(+) 111 1111 12]
[+ :: 111 1111 12]
[+(111,«Scalar») :: 1111 12]
[1222 :: 12]
[12 1222]
;; iteration #3
[greedy(+) 12 1222]
[+ :: 12 1222]
[+(12,«Scalar») :: 1222]
[1234 ::]
[1234]
;; iteration #4
[greedy(+) 1234]
[+ :: 1234]
[+(1234,«Scalar») ::]
[+(1234,«Scalar»)]
[+(1234,«Scalar»)]
This works out much the way you would expect a traditional reduce operation to do so, though you will probably be frowning a bit about the intermediate greedy(x) calculations along the way. Rightly so. It can get complicated:
[reduce(+₄,[1 11 111 1111])] ;; scratch space
;; iteration #1
[greedy(+₄) 1 11 111 1111]
[+₄ :: 1 11 111 1111]
[+(1,«Scalar») +₃ :: 11 111 1111]
[12 +₃ :: 111 1111]
[12 +(111,«Scalar») +₂ :: 1111]
[12 1222 +₂ ::]
[12 1222 +₂]
;; iteration #2
[greedy(+₄) 12 1222 +₂]
[+₄ :: 12 1222 +₂]
[+(12,«Scalar») +₃ :: 1222 +₂]
[1234 +₃ :: +₂]
[+₂ 1234 +₃]
;; iteration #3
[greedy(+₄) +₂ 1234 +₃]
[+₄ :: +₂ 1234 +₃]
[+(1234,«Scalar») +₃ :: +₂ +₃]
[+₂ +₃ +(1234,«Scalar») +₃]
;; iteration #4
[greedy(+₄) +₂ +₃ +(1234,«Scalar») +₃]
[+₄ :: +₂ +₃ +(1234,«Scalar») +₃] ;; no arguments!
[+₂ +₃ +(1234,«Scalar») +₃ +₄]
[+₂ +₃ +(1234,«Scalar») +₃ +₄]
Because reduce can take any function, regardless of the arity, some odd things can happen. For instance, in the case of a function of one input and output, there may not be any discernible difference from map:
[reduce(wrap,[1 11 111 1111])] ;; scratch space
;; iteration #1
[greedy(wrap) 1 11 111 1111]
[wrap :: 1 11 111 1111]
[[1] :: 11 111 1111]
[11 111 1111 [1]]
;; iteration #2
[greedy(wrap) 11 111 1111 [1]]
[wrap :: 11 111 1111 [1]]
[[11] :: 111 1111 [1]]
[111 1111 [1] [11]]
;; iteration #3
[greedy(wrap) 111 1111 [1] [11]]
[wrap :: 111 1111 [1] [11]]
[[111] :: 1111 [1] [11]]
[1111 [1] [11] [111]]
;; iteration #4
[greedy(wrap) 1111 [1] [11] [111]]
[wrap :: 1111 [1] [11] [111]]
[[1111] :: [1] [11] [111]]
[[1] [11] [111] [1111]]
[[1] [11] [111] [1111]]
For a function of no arguments, like 99, we get something more like “make copies”, and it doesn’t feel like there’s any “reduction” happening at all:
[reduce(99,[1 2 3])] ;; scratch space
;; iteration #1
[greedy(99) 1 2 3]
[99 :: 1 2 3]
[1 2 3 99]
;; iteration #2
[greedy(99) 1 2 3 99]
[99 :: 1 2 3 99]
[1 2 3 99 99]
;; iteration #3
[greedy(99) 1 2 3 99 99]
[99 :: 1 2 3 99 99]
[1 2 3 99 99 99]
[1 2 3 99 99 99]
Finally, for functions that produce multiple results, a reduce can be remarkable, because a lot can happen just in each iteration of greedy(x):
[reduce(dup₃,[1 2 3])] ;; scratch space
;; iteration #1
[greedy(dup₃) 1 2 3]
[dup₃ :: 1 2 3]
[1 1 dup₂ :: 2 3]
[1 1 2 2 dup :: 3]
[1 1 2 2 3 3 ::]
[1 1 2 2 3 3]
;; iteration #2
[greedy(dup₃) 1 1 2 2 3 3]
[dup₃ :: 1 1 2 2 3 3]
[1 1 dup₂ :: 1 2 2 3 3]
[1 1 1 1 dup :: 2 2 3 3]
[1 1 1 1 2 2 :: 2 3 3]
[2 3 3 1 1 1 1 2 2]
;; iteration #3
[greedy(dup₃) 2 3 3 1 1 1 1 2 2]
[dup₃ :: 2 3 3 1 1 1 1 2 2]
[2 2 dup₂ :: 3 3 1 1 1 1 2 2]
[2 2 3 3 dup :: 3 1 1 1 1 2 2]
[2 2 3 3 3 3 :: 1 1 1 1 2 2]
[1 1 1 1 2 2 2 2 3 3 3 3]
[1 1 1 1 2 2 2 2 3 3 3 3]
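Stripped of the lifetime bookkeeping, the basic + case above is easy to sketch in Python: one greedy pass per item of the original collection, carrying the working queue forward. This is my own simplification (a lifetime-1 +, scalars only), not a general reduce.

def is_scalar(v):
    return isinstance(v, (int, float)) and not isinstance(v, bool)

def greedy_plus_once(queue):
    ctx = list(queue)                             # one pass of a lifetime-1 +
    args = [v for v in ctx if is_scalar(v)][:2]   # grab up to two usable scalars
    for a in args:
        ctx.remove(a)
    if len(args) == 2:
        return ctx + [args[0] + args[1]]
    if len(args) == 1:
        return ctx + [f"+({args[0]},«Scalar»)"]   # stuck partial application
    return ctx + ["+"]                            # nothing to consume

def reduce_plus(collection):
    queue = list(collection)
    for _ in collection:                          # one greedy pass per original item
        queue = greedy_plus_once(queue)
    return queue

print(reduce_plus([1, 11, 111, 1111]))            # ['+(1234,«Scalar»)']  -- as in the first trace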
The juxt family
The juxt functions are like a “reverse map”. Each takes a «Collection» as its first argument, and a single item as its second. Each variant invokes either apply, do, try or demand, using the items in the collection as the actors, and the single second argument as the prospective target (as though it were alone in a context).
Recall that with apply, [apply(X,Y)] produces [X(Y)] if Y is an argument of X, or [X] otherwise.
[ juxt-apply([+ false and neg], 99) ]
[ [+(99,«Scalar») false and -99] ]
With do, [do(X,Y)] produces [X(Y)] if Y is an argument of X, or the outcome [Y X] otherwise.
[ juxt-do([+ false and neg], 99) ]
[ [+(99,«Scalar») 99 false 99 and -99] ]
With try, [try(X,Y)] produces [X(Y)] if Y is an argument of X, or [Y] otherwise.
[ juxt-try([+ false and neg], 99) ]
[ [+(99,«Scalar») 99 99 -99] ]
The demand variant culls all mismatches.
[ juxt-demand([+ false and neg], 99) ]
[ [+(99,«Scalar») -99] ]
The xmap family
The xmap functions are the logical (?) extension of map and juxt, taking two «Collection» arguments and producing the full collection of (ordered) pairwise results, applying every entry in the first to every entry in the second.
Note that they do not produce a nested collection, simply a flat collection in lexicographic order.
[ xmap-apply([+ false and], [* not true 7]) ]
[ [+ + + +(7,«Scalar») false false false false and and and(true,«Boolean») and] ]
[ xmap-do([+ false and], [* not true 7]) ]
[ [* + not + true + +(7,«Scalar») * false not false true false 7 false * and not and and(true,«Boolean») 7 and] ]
[ xmap-try([+ false and], [* not true 7]) ]
[ [* not true +(7,«Scalar») * not true 7 * not and(true,«Boolean») 7] ]
[ xmap-demand([+ false and], [* not true 7]) ]
[ [+(7,«Scalar») and(true,«Boolean»)] ]
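Both juxt and xmap fall out as one-liners in the same toy Python model: juxt flips the roles (many actors, one target), and xmap crosses two collections. Again, the names and the schematic packet strings are mine, not ReQ’s.

def is_scalar(v):
    return isinstance(v, (int, float)) and not isinstance(v, bool)

def apply_mode(f, y):
    if isinstance(f, tuple):                       # an op wanting one argument
        name, pred = f
        return [f"{name}({y!r},…)"] if pred(y) else [name]
    return [f]                                     # a literal just copies itself

def juxt_mode(mode, actors, target):               # each actor tries the single target
    return [r for f in actors for r in mode(f, target)]

def xmap_mode(mode, actors, targets):              # flat, lexicographic cross product
    return [r for f in actors for y in targets for r in mode(f, y)]

plus, land = ("+", is_scalar), ("and", lambda v: isinstance(v, bool))
print(juxt_mode(apply_mode, [plus, False, land], 99))
# ['+(99,…)', False, 'and']               cf. juxt-apply above (neg omitted here)
print(xmap_mode(apply_mode, [plus, land], [True, 7]))
# ['+', '+(7,…)', 'and(True,…)', 'and']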
More later
This section of notes seems to be done. I think the set of higher-order functions is internally consistent, though they’re obviously a bit more diverse than the basic apply, map and reduce from many other functional languages. But that’s intended to deal with the vicissitudes of robust type-insensitive situations in arbitrary code.
There are some other language concepts I’ve mentioned along the way here, which probably interfere with the simple-seeming sketches I’ve provided. For example, context-switching modifiers (which “send” results “down” into collections, or “up” into the surrounding collection above the current one). And multiple execution threads which may overlap one another in a single running program. And the unmentioned multi-faceted nature of «Collection» as a polymorphic list, set, key-value map, and queue in various contexts. And the behavior and structure of Channel “variables”, and the runtime definition of “objects” and definition and use of structural types.
Not to mention the behavior of ReQ code under the probabilistic execution model…. Did I mention that the underlying execution model is subject to arbitrary “errors”? Yeah, well, not today.
Hopefully I will be able to follow up with those aspects of the language shortly. I’m hopeful that those ideas, not quite as fleshed out as the ones here, won’t undermine the consistency I’m looking for, and that they’ll avoid adding very much [visible] complexity while fostering a good deal of representational flexibility.