Draft

More on inferring lattice position from Manhattan distances

Draft of 2017.06.19

May include: puzzle, mathematics, information theory, &c.

Ron Jeffries pointed out (in a back channel) that things get complicated when, in my étude “How Much Do You Need to Know?”, I move from a compact square arrangement of numbers, to a rectangular arrangement of arbitrary dimensions, and finally to an arbitrary polyomino of numbers.

True enough.

You’ll recall I was talking about how, if I have a (square) arrangement of numbers like this:

 7  8  2  1
 5 11 16  3
 4 14 15 12
 9 10  6 13

and then also tell you the Manhattan distances between every pair of numbers (walking along the lattice), like this (abbreviated) list:

from  to  up/down/left/right steps
 1     2              1
 1     3              1
 1     4              5
 1     5              4
 1     6              4
 1     7              3
 1     8              2
 1     9              6
 ...
14    15              1
14    16              2
15    16              1

then it seems to me that you can always infer the exact relative position of all the numbers in the square arrangement, with (possibly) some redundant information as well.
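
For concreteness, here’s a minimal Python sketch (the code and names are mine, not part of the étude) that rebuilds the complete pairwise distance list from the grid above:

# Build the complete pairwise Manhattan distance list for the 4x4 arrangement.
from itertools import combinations

grid = [[ 7,  8,  2,  1],
        [ 5, 11, 16,  3],
        [ 4, 14, 15, 12],
        [ 9, 10,  6, 13]]

# map each value to its (row, column) position on the lattice
position = {value: (r, c)
            for r, row in enumerate(grid)
            for c, value in enumerate(row)}

# one entry per unordered pair: N(N-1)/2 = 120 distances for N = 16
distances = {}
for a, b in combinations(sorted(position), 2):
    (r1, c1), (r2, c2) = position[a], position[b]
    distances[(a, b)] = abs(r1 - r2) + abs(c1 - c2)

print(len(distances))       # 120
print(distances[(1, 9)])    # 6, matching the table above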

As I wrote about it, I started calling these distances “constraints”, mainly because in my mind I had already started thinking of one way to model and solve this sort of problem: as a Constraint Satisfaction Problem of the sort we see often in computer science and mathematical recreations. The same sort of thing you’d see in Sudoku puzzle solving, or constructing a magic square, or even one of those “logic grid puzzles” you see in popular magazines.

The shared structure of these puzzles and problems is that we have a discrete set of “positions” or “slots” into which we are asked to distribute a finite set of objects. In the case of Sudoku, there are empty squares on the grid, and we’re asked to assign the numerals from 1 to 9 to those empty squares, obeying certain constraints imposed by the already-present “clue” numbers. In the case of magic squares of order \(N\), we’re asked to position the numbers from 1 to \(N^2\) in a square arrangement on a lattice, in order to satisfy certain formal constraints on the sums of each row, column and diagonal of that square. In the case of logic grid puzzles, we’re asked to assign relationships between (usually) people and the colors of their hats and who they’re married to and what they ate for dinner so forth, satisfying certain constraints we are given in the form of factual clues, like “Martin had the fish for dinner, and is sitting next to his husband who is wearing blue,” and so forth.
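
To make that analogy concrete, here’s one way a tiny instance of the distance puzzle might be posed as a CSP, using the third-party python-constraint package (my choice, purely for illustration; any CSP library, or a hand-rolled backtracker, would do). The values become variables, the lattice cells become their shared domain, and each surviving distance clue becomes a binary constraint:

# Pose a toy 2x2 instance of the distance puzzle as a CSP.
from constraint import Problem, AllDifferentConstraint

cells = [(r, c) for r in range(2) for c in range(2)]   # the 2x2 board
clues = {(1, 2): 1, (1, 3): 2, (2, 4): 2, (3, 4): 1}   # some surviving distances

problem = Problem()
for value in (1, 2, 3, 4):
    problem.addVariable(value, cells)
problem.addConstraint(AllDifferentConstraint())        # one value per cell

def manhattan_equals(d):
    return lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1]) == d

for (a, b), d in clues.items():
    problem.addConstraint(manhattan_equals(d), (a, b))

print(len(problem.getSolutions()))   # 8: pinned down, up to rotation and reflection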

The pleasure (in a puzzle-solving sense of the word) in this class of problem is that there usually are enough constraint clues provided that we can be certain of a single correct solution, while also being certain that there are no hidden contradictions in the clues provided. A good part of the skill involved in constructing these puzzles boils down to those guarantees of uniqueness and solvability.

It seems to me that it’s interesting to note how many clues (and of what sort) we’re given, in order to provide that Goldilocks “just right” amount of constraint between open-endedness and contradiction. Down at the “free end” of the constraint axis, you might imagine a Sudoku grid in which there are no clues whatsoever. The combinatorics of this “minimal Sudoku” case have been worked out completely: there are 6,670,903,752,021,072,936,960 valid filled grids, and that’s before you even account for trivial rotations and reflections of the grid.

Way out near the other end of this constraint axis are the “almost done” Sudoku puzzles, in which there is only one empty cell, and we’re asked to fill that in. Not especially fun. But notice one thing: in Sudoku as such, only a tiny fraction of all the constraints are actually in play for any single cell like this. That is, there are the constraints that each numeral appear only once in each 3x3 block, once in each 9-cell row, and once in each 9-cell column. Some of those, even in the case of a single unfilled cell, are redundant, right?

The interesting place for puzzles to live is somewhere in between. Consider the famous case of the world’s largest Sudoku puzzle, carved into a chalk hillside near Bristol, England in 2005 as a PR stunt, which ended up not having a single unique solution, but 1,905 different ones. That was a bit too close to the unconstrained end of the axis.

That’s what I’m asking about in this series of études: Given a particular “solution”, what’s the largest subset of assignments you can “undo” and still have a fully-determined “puzzle”, one with just enough information to return to the original solution (or some simple transformation of it)? In the analogous case of “backtracking” to find a minimal Sudoku puzzle, imagine I’ve given you a specific, completely filled-out puzzle solution, and I ask you to erase numbers until you reach the tipping point: erase one more number, anywhere, and you won’t be able to return to the exact same solution with certainty.

That is, by the way, also a tricky problem in combinatorics and constraint-handling. I leave it to you.

Not about Sudoku

In the études I’ve proposed here, we’re looking at a somewhat different “puzzle”, though you can probably see the analogy. In the original case, I’ve listed all \(N(N-1)/2\) inter-numeric distances for a completely “solved” square arrangement of unique values. And as in the case of “backtracking” from a solved Sudoku puzzle, I’m asking you to selectively “drop” distances from the list, until you reach the “tipping point” and can’t with confidence discover a single arrangement of numbers in a square.
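
Here’s a sketch of what checking for that “tipping point” might look like computationally: a brute-force count (my own framing, and only practical for very small boards) of how many labelings of an r-by-c board remain consistent with whatever clues survive:

# Count the labelings of a small board consistent with a surviving clue set.
from itertools import permutations

def consistent_labelings(rows, cols, clues):
    """clues: a dict mapping pairs of values (a, b) to Manhattan distances."""
    cells = [(r, c) for r in range(rows) for c in range(cols)]
    n = rows * cols
    survivors = []
    for labeling in permutations(range(1, n + 1)):
        position = dict(zip(labeling, cells))           # value -> (row, col)
        if all(abs(position[a][0] - position[b][0]) +
               abs(position[a][1] - position[b][1]) == d
               for (a, b), d in clues.items()):
            survivors.append(position)
    return survivors

# A square arrangement that is still fully determined by its surviving clues
# will show eight survivors (its rotations and reflections), not one; that is
# the "simple transformation" caveat. More than eight means we dropped too much.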

In the rectangular generalization, I’m saying that instead of being certain that the numbers fall in a particular arrangement, there’s a possibility that they are in some other (compact) rectangular arrangement. For example, if there are 16 numbers, they might be in a 4x4 square still, or an 8x2 oblong, or even in a 1x16 line. Thinking about that for even a moment, you will probably notice that in the case of a square, the largest Manhattan distance (between values in opposite corners) is 6, in the case of the oblong it would be 8, and in the case of the line it would be 15. Glancing at a list of Manhattan distances and seeing a value of 11 (bigger than 8, so bigger than anything the square or the oblong could produce) suffices to tell you which of these three rectangles we have.
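
That observation about the largest possible distance is easy to check: for an r-by-c rectangle the extreme case is a pair in opposite corners, at distance (r - 1) + (c - 1). A quick sketch:

# Largest possible Manhattan distance in each 16-cell rectangle.
for r, c in [(4, 4), (8, 2), (16, 1)]:
    print(f"{r}x{c}: {(r - 1) + (c - 1)}")
# 4x4: 6
# 8x2: 8
# 16x1: 15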

Of course, that only works as long as you’ve also got a representative example of each numeric value in your remaining “clues”! There’s a tricky Goldilocks problem here too, like the one that tripped up the Bristol PR firm: what if, for example, we accidentally delete all the distance clues that refer to the number 16? If, in the course of hiding distance clues, you eliminate an entire number from the remaining visible clues, will there be enough information to infer its presence?

That’s an open question, and seems salient.

Polyominoes are not uniquely determined by distances

Finally, when I’d quickly touched on the possibility of letting the rigid square problem relax into a possible rectangle problem, I opened the can of worms all the way and proposed the analogous question for polyominoes.

What Ron pointed out the other day was that for certain polyominoes, we can’t be sure of the shape from just the Manhattan distance information.

Here’s an example comparing the two free trominoes, which may be the simplest case of what Ron was worried about:

1 2 3

1:2 => 1
1:3 => 2
2:3 => 1

1 2
. 3       (I've labeled an "empty" cell with ".")

1:2 => 1
1:3 => 2
2:3 => 1

There is not enough information even in a complete list of Manhattan distances to differentiate between these two basic three-item arrangements. So the brief answer to my question of the other day, which was something like, “Is there even enough information to identify every polyomino uniquely, up to reflection and rotation, from just the Manhattan distance list?” would seem to be “No.”
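
Here’s a small sketch (the helper name distance_list is mine) confirming that the two labeled trominoes really do produce identical Manhattan distance lists:

# Compare the labeled distance lists of the straight and bent trominoes.
from itertools import combinations

def distance_list(position):
    return {(a, b): abs(position[a][0] - position[b][0]) +
                    abs(position[a][1] - position[b][1])
            for a, b in combinations(sorted(position), 2)}

straight = {1: (0, 0), 2: (0, 1), 3: (0, 2)}
bent     = {1: (0, 0), 2: (0, 1), 3: (1, 1)}

print(distance_list(straight) == distance_list(bent))   # True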

Interestingly, the overall topology of the polyomino in question seems to matter. As long as we count distance across the lattice (and not along the “backbone” of filled-in values), we can detect chains of numbers that “wrap around” and form concave polyomino arrangements, because the shortcut across the gap is shorter than the walk along the chain. Consider these three hexominoes:

1 2 3
    4
  6 5

vs

1 2 3
    4
    5 6

vs

1 2 3 4 5 6

I can determine, from distances alone, the difference between the first one and either of the other two. The crucial clue seems to be the distance between 1 and 6, which is 3 steps in the first one, but 5 steps in the latter two. But I can’t—from Manhattan distances alone—tell the difference between either of the latter two.
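
The same kind of check, applied to the three hexominoes above (the names hook, ell, and straight are mine, just for the sketch):

# Compare the labeled distance lists of the three hexominoes.
from itertools import combinations

def distance_list(position):                            # same helper as before
    return {(a, b): abs(position[a][0] - position[b][0]) +
                    abs(position[a][1] - position[b][1])
            for a, b in combinations(sorted(position), 2)}

hook     = {1: (0, 0), 2: (0, 1), 3: (0, 2), 4: (1, 2), 5: (2, 2), 6: (2, 1)}
ell      = {1: (0, 0), 2: (0, 1), 3: (0, 2), 4: (1, 2), 5: (2, 2), 6: (2, 3)}
straight = {n: (0, n - 1) for n in range(1, 7)}

print(distance_list(hook)[(1, 6)], distance_list(ell)[(1, 6)])   # 3 5
print(distance_list(ell) == distance_list(straight))             # True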

Opening things up

There are a couple of ways one can react to this insight.

To be clear, let me point out that in the original formulation of the étude I asked a specific question, which was something like, “For a given arrangement of numbers in the lattice, and given assurance that the numbers are all constrained to lie in a square/rectangular/polyomino-shaped envelope, what’s the maximum number of clues I can delete and still be certain of getting the right arrangement?”

The correct answer seems to be “zero” (or, strictly, that there is no such number at all) for at least the long, stringy polyominoes. That is, in the case of the string-of-six-numbers I’ve just shown you, above, even if I give you all the Manhattan distances between labeled squares, you can’t tell the difference between the bent and straight variations.

One way to respond to this might be, “Well, for which polyominoes is the answer zero?” That in itself is an interesting question, and I expect some deep and convoluted combinatorial geometry is down that path. As I pointed out above, one facet of this seems to involve “concavity”, which is a deep topic all its own.

Another way to respond to this might be, “Well, if for all rectangles I can always determine the layout with all the clues (still a conjecture in its own right), but for some polyominoes of the same size I can’t, then what information might I need to add to the set of clues to be able to differentiate correctly for all polyominoes?”

One possibility that comes to my mind, after poking around a bit and staring off into space a while, is the Akritean distance, which is a (possibly weighted) average of the Euclidean distance and the Manhattan distance between two points in a plane. Here I’ll use the plain, unweighted average.

Let’s see if using an Akritean distance instead of Manhattan distance changes things. Here are the two polyominoes I drew above, for which the Manhattan distance alone was insufficient:

1 2 3

1:2 => (1 + 1)/2
1:3 => (2 + 2)/2
2:3 => (1 + 1)/2

1 2
. 3      

1:2 => (1 + 1)/2
1:3 => (2 + sqrt(2))/2
2:3 => (1 + 1)/2

Well, that’s encouraging. The distances are now different! The Akritean distance from 1 to 3 in the bent labeled tromino is \((2 + \sqrt{2})/2\), which is measurably different from \(2\). The use of Akritean distance suffices to differentiate these two trominoes from one another. I don’t know if it will suffice to differentiate all polyominoes from one another, though.
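
And here’s the corresponding sketch with an Akritean helper, taking the distance to be the plain unweighted average of Euclidean and Manhattan (my assumption, as noted above):

# Akritean distances (unweighted average of Euclidean and Manhattan) for the trominoes.
from itertools import combinations
from math import hypot

def akritean_list(position):
    out = {}
    for a, b in combinations(sorted(position), 2):
        (r1, c1), (r2, c2) = position[a], position[b]
        manhattan = abs(r1 - r2) + abs(c1 - c2)
        euclidean = hypot(r1 - r2, c1 - c2)
        out[(a, b)] = (manhattan + euclidean) / 2
    return out

straight = {1: (0, 0), 2: (0, 1), 3: (0, 2)}
bent     = {1: (0, 0), 2: (0, 1), 3: (1, 1)}

print(akritean_list(straight)[(1, 3)])                  # 2.0
print(akritean_list(bent)[(1, 3)])                      # (2 + sqrt(2))/2, about 1.707
print(akritean_list(straight) == akritean_list(bent))   # False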

Now I feel the need to point it out again: this isn’t a homework problem, and I’m not trying to prove anything in a rigorous mathematical way. When I jumped from Manhattan to Akritean distance (which you may never have heard of), it was because of an Artificial Intelligence class I took, many years ago, from Mike Wellman. The quirks of my own background led me to make that particular jump; you may have thought about something like Euclidean distances, if anything.

It doesn’t matter to me what “next idea” might have cropped up in your head. The question that your idea (or my idea, or any new idea for a different distance measure or extra constraint clues) wants to provoke, it seems to me, is more like this: If I had given you the Euclidean distances between the (centers of) numbers on a square lattice, instead of the Manhattan distances, would your answers to the earlier parts of this puzzle have changed?

In other words, if you make a change in the nature of the clues in the original problem, does the answer change as well?

I don’t know. I have the sense that it “feels” different, and it’s interesting enough already in the case of the polyominoes that Akritean distance (and therefore Euclidean distance) is providing information that Manhattan distance doesn’t by itself.

And there’s this: Since there are many polyominoes “tucked in” to every square arrangement of numbers, and since as you eliminate distances from the complete list you’re making a “square” more “polyomino-shaped” by erosion of a sort… it certainly feels like this change will have an impact on your answer to the original “simple” problem.

I’d like to hear, one way or another.