Design a 2-dimensional cellular automaton that solves a system of linear equations
Draft of 2005.09.15 ☛ 2015.06.14 ☛ 2016.07.26
Suppose you’re given an m×n matrix A of coefficients for a system of linear equations in n unknowns. And, sure, what the heck, you also have a vector b containing the right-hand-side values. We’ll interpret these values as a linear programming task: find x such that Ax = b and all x_i ≥ 0. Simple, yes?
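To pin down what counts as a “feasible solution” in this setup, here’s a minimal check, as a sketch (the function name and the tolerance are illustrative, not part of the task):

```python
def is_feasible(A, b, x, tol=1e-9):
    """Return True if x solves A x = b with every x_i >= 0."""
    m, n = len(A), len(A[0])
    if any(xi < -tol for xi in x):      # nonnegativity constraint
        return False
    for i in range(m):                  # each equation must hold
        lhs = sum(A[i][j] * x[j] for j in range(n))
        if abs(lhs - b[i]) > tol:
            return False
    return True

A = [[1.0, 1.0], [1.0, -1.0]]
b = [3.0, 1.0]
print(is_feasible(A, b, [2.0, 1.0]))   # True: x = (2, 1) satisfies both
print(is_feasible(A, b, [4.0, -1.0]))  # False: negative component
```

The CA is supposed to *produce* such an x; checking one, as here, is the easy part.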
Embed this matrix and this vector in a finite rectangular array of any size you like (up to some stated limit), in any position or orientation you like (but without rearranging the rows or columns). If the array you paste the matrix and vector into is larger than needed to hold them, you may assign any “background” values of your choice to the remaining cells; you don’t have to leave them zero, and they don’t all have to be the same. Your array can be made up of objects more complicated than scalars, but each object must contain at least one scalar attribute called topLayer, into which the numbers are placed and from which the answers will be read.
Now, construct a set of rules for a cellular automaton, such that one or more of the following is true:
1. At some finite time t (of your choice), a contiguous range of values in some row (also of your choice) of the array will contain a feasible solution of the system of equations, if one exists. As should be obvious, you need to denote the time and row of your “answer registers” before you start cranking the handle of the CA.
2. At some time limited as above, a block of entries in some row (of your choice) will contain a unique value (that you pre-specify) if there are no feasible solutions to the system of equations.
3. Like (1) above, but the block of numbers in your row of choice should be a solution to the minimization problem (minimize the stated objective, subject to the constraints) that lies anywhere in the best-rated quartile of all feasible solutions, if any feasible solutions exist. If the problem is unbounded… well, I dunno. That implies the numbers should be Really Really big; let’s say they have to be above some threshold M in any dimension i where the optimum value is unbounded. That seems reasonable.
4. Like (3) above, but the values in the block of “registers” should be an optimum solution, if one exists (and again we will have to cope with the unboundedness thing).
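A sketch of how a grader might read off an answer for tasks (1) and (2), under assumed conventions: `grid_at_T` is the topLayer array after the promised number of steps, `row`, `col0`, and `n` locate the answer registers, and `FAIL_FLAG` is a hypothetical stand-in for the unique pre-specified “infeasible” value:

```python
FAIL_FLAG = -999.0  # hypothetical; the solver pre-specifies this value

def read_answer(grid_at_T, row, col0, n):
    """Extract the n answer registers from the promised row."""
    return grid_at_T[row][col0:col0 + n]

def grade_task_1_or_2(grid_at_T, row, col0, n, A, b, tol=1e-9):
    x = read_answer(grid_at_T, row, col0, n)
    if all(abs(v - FAIL_FLAG) < tol for v in x):
        return "claimed infeasible"
    # otherwise the registers must hold a feasible solution
    residual = max(
        abs(sum(A[i][j] * x[j] for j in range(len(x))) - b[i])
        for i in range(len(A))
    )
    ok = residual <= tol and all(v >= -tol for v in x)
    return "feasible answer" if ok else "wrong"
```

Grading tasks (3) and (4) would additionally need the objective value and the quartile bookkeeping, which I leave out here.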
Any one of these tasks is a nice one to try.
I pass you 139 different arrays of coefficients, none larger than the stated bound, and your algorithm will place them in the CA array (or rather in the attribute of the array called topLayer), will state the time t and position where the answer can be read [in topLayer], and then we’ll initialize the array according to whatever specification you prefer, run the rule to time t, and check your answer. Your rule needs to succeed at the chosen task 50% of the time. For a High Pass, you need to succeed 80% of the time. For a fucking Nobel Prize in Mathematics with a side of Computer Science and a Pony, obtain a High Pass on all four tasks with one algorithm.
A “cellular automaton” of what sort?
I don’t care. You may use traditional cell-wise update rules, or Margolus-type swapping rules, or any rule you like. In reality it can be any function mapping the previous states of the array to the next value of the array, as long as it’s applied simultaneously to every cell on every time step.
Here: Your rule may access history of any cell in the array as far back as time 0. Your rule may refer to the values of cells locally or remotely. Your rule may read from and write to complicated objects (scalars, vectors, stacks) in the other attributes of the cells. Your rule may use conditionals, or iteration, or random numbers (assume there’s a free pseudorandom number generator on hand). Because (as I said) the array may contain complicated objects, any attributes beyond “topLayer” can be used as scratch space or memory or stacks, and can contain arrays or stacks or single values that are continuous or binary or categorical or logical or even (if you want to get really fancy) expressions, and your rule can use conditionals or transcendental functions, bitwise or general logical operators, exponentiation, composition (of the suggested expressions)—whatever.
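The update discipline being permitted above can be sketched as follows (all names here are assumed, and the toy rule solves nothing): one explicit rule, applied to every cell simultaneously, which may read the full history of any cell back to time 0, and which has a free pseudorandom generator on hand:

```python
import random

def step(history, rule, rng):
    """Compute generation t+1 from the whole history [grid_0 .. grid_t]."""
    current = history[-1]
    rows, cols = len(current), len(current[0])
    nxt = [[rule(history, r, c, rng) for c in range(cols)]
           for r in range(rows)]       # every cell updated synchronously
    history.append(nxt)

def example_rule(history, r, c, rng):
    # A toy rule: average of my value now and my value at time 0.
    # It may read ANY cell at ANY past time, per the rules above.
    return 0.5 * (history[-1][r][c] + history[0][r][c])

history = [[[float(r + c) for c in range(4)] for r in range(3)]]
rng = random.Random(0)
for _ in range(5):
    step(history, example_rule, rng)
print(len(history))  # 6 generations: time 0 through time 5
```

Scratch attributes, stacks, and the rest would hang off each cell as extra fields; only the equivalent of topLayer is shown here.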
Except it shouldn’t be quantum. We don’t want quantums. That’s for later.
The one important firm rule here is that there must be a single explicit functional rule applied to every cell of the array at every time step, synchronously, and that even if there are additional attributes of the objects in the array, the answers must be in the same attribute you stuck the original array into.
Oh, by the way: notice that I didn’t say the final answers had to be in a particular set of cells, just in a specified row. That’s a freebie. The answers (to whichever task) need only appear in attribute topLayer in a contiguous subset of adjacent cells in a specified row, in the same order as the original variables in the array.
While every cell must have an identical neighborhood structure, you should realize that a Margolus neighborhood only appears to change behavior on alternate time steps; you can kludge this with a single rule and a layer of flipping bits. Never assume I will be amused by algorithms whose rules don’t halt, or which use indefinite memory resources.
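That “flipping bit” kludge can be sketched like so (names and details assumed): a phase bit toggles every step, and the single rule uses it to decide which 2×2 block partition each cell belongs to, emulating the alternating Margolus partitions without an alternating rule:

```python
def margolus_step(grid, phase):
    """One synchronous step: rotate the contents of each 2x2 block clockwise.

    `phase` is the flipping bit: 0 means blocks are anchored at even
    coordinates; 1 shifts the partition by one cell each way (toroidal).
    """
    n = len(grid)  # assume a square, even-sized, toroidal array
    nxt = [[None] * n for _ in range(n)]
    for r in range(n):
        for c in range(n):
            # my position inside my current 2x2 block
            br, bc = (r - phase) % 2, (c - phase) % 2
            # take the value from the block-neighbor that rotates into my slot
            if (br, bc) == (0, 0):   sr, sc = r + 1, c   # from below
            elif (br, bc) == (0, 1): sr, sc = r, c - 1   # from the left
            elif (br, bc) == (1, 1): sr, sc = r - 1, c   # from above
            else:                    sr, sc = r, c + 1   # from the right
            nxt[r][c] = grid[sr % n][sc % n]
    return nxt, 1 - phase  # the bit flips; same rule applies next step
```

The rule itself never changes; only the phase layer it reads does, which keeps it inside the “one explicit functional rule” requirement.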
OK, here’s one that won’t work. Say you define the objects in the array like this:
```
cellObject:
    topLayer: real;                 // the numbers I give you, and the ones I read
    rowAbove: real;                 // we'll use this for row operations
    rowBelow: real;                 // this one too
    referenceCells[12,2]: integer;  // we'll use this to dynamically change
                                    // the cell indices that each cell refers to
    divisible?: boolean;            // a flag
```
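For the record, that cellObject could be transliterated into Python roughly like this (field names taken from the pseudocode; types and defaults are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class CellObject:
    topLayer: float = 0.0     # the numbers given, and the ones read back
    rowAbove: float = 0.0     # scratch for row operations
    rowBelow: float = 0.0     # likewise
    # "referenceCells[12,2]": twelve pairs of integer cell offsets
    referenceCells: list = field(
        default_factory=lambda: [[0, 0] for _ in range(12)])
    divisible: bool = False   # the "divisible?" flag ('?' isn't legal here)

cell = CellObject()
print(cell.topLayer, cell.divisible)  # 0.0 False
```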
You say that I should put the array I’m going to hand you into a 300-row, 301-column array, with the upper left corner located at cell (31,17) and the RHS vector running down from cell (81,2). You say that the remaining values of the array should be set to a background value of the default initialization constants (zeroes, FALSEs, &c). You say that I can read the answers in row #17 at timestep 3100, and that it will be the Grand Prize Winner of an optimum solution to the problem, or a row of your pre-specified flag values if the problem is infeasible.
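The embedding step just described, as a sketch (dimensions and offsets taken from the text above; everything else, including the function name, is assumed):

```python
def embed(A, b, rows=300, cols=301):
    """Paste A with its upper-left corner at (31, 17) and the RHS vector
    b running down from (81, 2); background cells default to 0.0."""
    grid = [[0.0] * cols for _ in range(rows)]
    for i, row in enumerate(A):          # coefficient matrix at (31, 17)
        for j, value in enumerate(row):
            grid[31 + i][17 + j] = value
    for i, value in enumerate(b):        # RHS vector down from (81, 2)
        grid[81 + i][2] = value
    return grid

grid = embed([[1.0, 2.0], [3.0, 4.0]], [5.0, 6.0])
print(grid[31][17], grid[32][18], grid[82][2])  # 1.0 4.0 6.0
```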
And you define an algorithm like this (in gross pseudocode):
- put the 1000 present values of the topLayer attribute in the row above me and to the left into my rowAbove (if there are not 1000 cells to my left, or I’m in the topmost row, wrap around the array toroidally)
- put the largest 1000 values, at some earlier timestep, of the row below me into my rowBelow attribute (if there are no values in the history that long ago, use the initialization values)
- for each of the columns in my referenceCells array attribute, replace the values in both rows with the values of the equivalent row and column in the referenceCells array of the cell referred to by me on the last timestep. So if my old column #6 is (−2,12), replace these values with the contents of column #6 of the referenceCells attribute of the cell located 2 to my left and 12 below me.
- if my topLayer value is 0, set my divisible? value to FALSE; else set it to TRUE
- set my topLayer value to a random number in [0,1]
- if my topLayer value is below 0.1, revert all my attributes to the values they had at the end of the previous timestep; otherwise don’t
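The fatal flaw is visible in the last two steps alone, which can be isolated as a sketch (names assumed): whatever the earlier steps computed, each topLayer cell ends up holding nothing but its most recent surviving uniform draw.

```python
import random

def final_steps(top_layer, rng):
    """The last two steps of the doomed rule, for a single cell."""
    previous = top_layer
    top_layer = rng.random()   # overwrite topLayer with U[0,1]
    if top_layer < 0.1:        # revert ~10% of the time...
        top_layer = previous   # ...but the revert target is itself noise
    return top_layer

rng = random.Random(42)        # any seed; the conclusion is the same
value = 7.0                    # pretend something meaningful was computed
for _ in range(3100):
    value = final_steps(value, rng)
print(0.0 <= value <= 1.0)     # True: the "answer" register is just noise
```

By timestep 3100 the chance that any cell still holds a pre-noise value is about 0.1^3100, which is to say: zero.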
We check the code, run the automaton for 3100 steps, and check your answer. And you would, in this case, be wrong¹ 139 times. For both tasks you claimed. You lying fool.
Why don’t you want a pony?
¹ You’d be really really wrong, by the way.