
Some “simple” number theory benchmark problems for GP

Draft of 2019.02.03

May include: GP benchmarks &c.

Just yesterday I was able to get a physical copy of a lovely volume called Algorithmic Number Theory: Lattices, Number Fields, Curves and Cryptography from our statewide interlibrary loan system, on the hunch that there might be inspirations in it for developing hard-but-simple benchmark problems and “puzzles” for GP to solve.

Indeed there are. But even better, as I was just now looking it up online to link to the volume, I find that the publisher MSRI provides the full text of at least some chapters in PDF form online, even for the likes of me, a non-academic.

Thank you.

So now I have ways to show you the original (very interesting) papers, and extensions as they arise for GP applications.

First, a direct inspiration involving Pell’s Equation, based on the paper in that volume [PDF link] “Solving the Pell equation” by Hendrik W. Lenstra, Jr.

Then, a random thing that struck me as I was making notes on the first. That second one seems silly, but also feasible, and the weird and unexpected thing (to me, who isn’t a professional) is that I can’t imagine it’s original with me, but I also can’t find references to it. Maybe by writing it out (and its close relatives) my memory will be jostled and the correct search terms will appear from the fog.

Find an algorithm to solve Pell’s Equation

What I’ll call “Pell’s Equation” is apparently far older than the 17C fellow John Pell, who either had a student who worked on it, or maybe… anyway, the history is convoluted. It’s a relatively simple Diophantine equation that looks like this (I will be following Lenstra’s notation here):

\[x^2 = d y^2 + 1\]

Given some constant integer \(d\), we’re asked to find a pair \((x,y)\) of integers such that this equation is satisfied.

For example, if \(d=7\), there is an unbounded family of solutions of increasing size (ignoring the trivial one, \((1,0)\)):

\[(x,y) = \{(8,3); (127,48); (2024,765); \dots \}.\]
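If you’d like to watch those appear, here’s the obvious brute-force sketch in Ruby (my own throwaway, not anything from Lenstra’s paper): walk upward through values of \(y\) and check whether \(d y^2 + 1\) is a perfect square.

```ruby
# Brute force: for each y, check whether d*y^2 + 1 is a perfect square.
# Fine for small cases like d = 7; hopeless for hard ones like d = 313.
def pell_brute_force(d, limit = 1_000_000)
  (1..limit).each do |y|
    t = d * y * y + 1
    x = Integer.sqrt(t)          # Ruby 2.5+; exact integer square root
    return [x, y] if x * x == t
  end
  nil                            # no solution found below the limit
end

pell_brute_force(7)  #=> [8, 3]
```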

As the Wikipedia article goes on to point out, “The smallest [non-trivial] solution can be very large. For example, the smallest solution to \(x^2 - 313 y^2 = 1\) is

\[(32188120829134849, 1819380158564160)\dots."\]

Anyway, the history of the problem is extensive and remarkable, but I will leave it to our sources to fill in the blanks, link it to the famous names, and discuss the variety of approaches that Fermat and [maybe not] Pell and Lagrange and Euler and Lord Brouncker and all those other folks used to explore it.


Here is my GP-facing challenge:

Given an integer \(d\), produce a pair of integers that solve the Pell Equation for that constant.

No, seriously, that’s it. I don’t care whether the solution given is the smallest pair of numbers. Just use GP (any generative algorithm-producing system will do) to accept an integer argument, and produce a pair of scalars that solve the equation.

As you might imagine, there is the usual cloud of questions that this challenge should provoke.

Am I asking you to produce values of “type integer”, or can you produce a point in \(\mathbb{R}^2\) and try to get as close as possible, measured by Euclidean distance?

Do you imagine that an algorithm that solves the problem will need to use continued fractions, and if so how will you pass that information along in the toolkit you provide your system? (The classical continued-fraction method is sketched just after these questions.)

Do I care about efficiency of the algorithm, and should you? Are there traps along the way in which reasonable approximations can be found relatively easily, but which have little to do with some later refinement?

Which is better, an algorithm that provides two integers that are farther away from the correct solution, or an algorithm that provides two floating-point values that are closer?

Are there solutions that wander off into complex number algebra, which are simpler (and perhaps therefore easier to find) than those that stick strictly to rational or integer values? If so, should you (do you need to) provide “access” to complex numbers for your search?

Probably all salient.
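Since continued fractions came up above: for reference, the classical human-designed solution expands \(\sqrt{d}\) as a continued fraction and tests successive convergents. Here is a minimal Ruby sketch of that textbook recurrence, offered as a baseline to compare against, not as the thing GP is expected to discover:

```ruby
# Classical method: expand sqrt(d) as a continued fraction and test each
# convergent h/k until h^2 - d*k^2 == 1. Textbook recurrences throughout.
def pell(d)
  a0 = Integer.sqrt(d)
  return nil if a0 * a0 == d          # no solution when d is a perfect square

  m, den, a = 0, 1, a0
  h_prev, h = 1, a0                   # convergent numerators
  k_prev, k = 0, 1                    # convergent denominators

  loop do
    return [h, k] if h * h - d * k * k == 1
    m   = den * a - m                 # standard sqrt continued-fraction step
    den = (d - m * m) / den
    a   = (a0 + m) / den
    h, h_prev = a * h + h_prev, h
    k, k_prev = a * k + k_prev, k
  end
end

pell(7)    #=> [8, 3]
pell(313)  #=> [32188120829134849, 1819380158564160]
```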

I will try to do this one myself as well. It seems hard, but as I said: easy to define.

Tell me what you find.

Solve some digit-sorting functions

As I said, this next one seems to have come out of the blue to me. I can’t imagine it’s original, but I also can’t seem to find the search terms that disclose earlier versions online. Maybe this has something to do with it being trivial (in mathematical recreation terms) or ridiculously hard (in algorithmic terms). So let me just state it, in the terms I’ve got on hand.

Given two integers \(x\) and \(y\), sort the (decimal) digits of each argument in descending order to produce new values \(x'\) and \(y'\), and return \(x' \times y'\).

So for example if \(x = 8164823\) and \(y = 710094\), we produce \(x' = 8864321\) and \(y' = 974100\), and their product is

\[8864321 \times 974100 = 8634735086100\]
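To pin down the target, here is the whole function as a direct Ruby sketch (the name is my own):

```ruby
# Target behavior: sort each argument's decimal digits in descending
# order, then return the product of the two rearranged numbers.
def sorted_digit_product(x, y)
  sort_desc = ->(n) { n.to_s.chars.sort.reverse.join.to_i }
  sort_desc.call(x) * sort_desc.call(y)
end

sorted_digit_product(8164823, 710094)  #=> 8634735086100
```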

Other than that, it doesn’t feel as if there is a lot to say about this one. The same sorts of penumbral questions arise for this challenge as they do for the Pell Equation one:

Can an algorithm arise which produces a very good floating-point value, one closer to the correct target than a strictly-integer algorithm manages? If so, which “wins”?

Are there necessary tools (like digit-production functions for integer arguments) that you’ll want to throw into the mix, or is that expected to arise as a matter of course during the search for algorithms?

Are there trivial (or extraordinary) algorithms already known in the literature for this sort of thing?

And so on.

Three freebies, which may help

As I’ve said many times, the production of new problems proceeds quickly whenever you’re spelling out other problems. While I typed that last one in, a “simpler”-feeling family of problems arose as a consequence.

Of course. You have to wonder if this is part of the way humans do this work….

This one feels “small” if we are only talking about human-invented mathematical approaches, but it may be a useful stepping-stone for the prior sorted-digits problem:

Given an (integer) argument \(x\), produce the largest digit in the decimal representation of \(x\).

That is, given \(x=8164823\), produce \(8\); given \(x = 551631\), produce \(6\), and so on. This function destroys a lot of information, of course. I mean it’s always a good guess to pick a digit greater than five, just on the assumption that digits are uniformly distributed, and so a long number will almost surely contain a big one.
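For reference, the behavior is a near one-liner in Ruby, since Integer#digits hands you the decimal digits directly:

```ruby
# Integer#digits (Ruby 2.4+) returns the decimal digits, least
# significant first; the order doesn't matter when taking the max.
def largest_digit(x)
  x.digits.max
end

largest_digit(8164823)  #=> 8
largest_digit(551631)   #=> 6
```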


If you can do that one, you might want to generalize a teeny, tiny bit and explore base representations:

Given an (integer) argument \(x\) and integer base \(b\) in \(\{2,3,\dots,16\}\), produce the largest digit in the base-\(b\) representation of \(x\).

For example, if \(x = 8164823\) and \(b = 13\), the base-13 representation is "18cb474", and the largest digit there is c. Which was a good bet, I suppose, if we were guessing, since it’s one of the biggest digits in base-13 notation. If \(b = 11\), Ruby informs me the same \(x\) is "4677397", so the answer for that pair would be 9.
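Again, a trivial reference sketch in Ruby; Integer#digits accepts a base argument, which does all the work:

```ruby
# Integer#digits(b) returns the base-b digits as integers, so the
# largest digit falls out directly (12 plays the role of "c" below).
def largest_digit_in_base(x, b)
  x.digits(b).max
end

largest_digit_in_base(8164823, 13)  #=> 12  ("c")
largest_digit_in_base(8164823, 11)  #=> 9
```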

As you can discover with a little sketching, the biggest digit of every base-2 number (except zero) will be 1, so maybe don’t focus too much on that sub-problem, OK?


And of course, like me, you also noticed this slightly different adjacent problem. This one feels as if it will exercise your GP system’s data structure and iteration behavior nicely, and it also has a bit more “resolution” than the biggest-digit one I describe above:

Given an (integer) argument \(x\), produce the largest two-digit value in the decimal representation of \(x\).

For example, if \(x = 8164823\), the two-digit numbers appearing in that decimal are \(\{81, 16, 64, 48, 82, 23\}\), of which \(82\) is biggest.
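A reference sketch in Ruby, with a window-width parameter thrown in so it also covers the generalization I’m about to mention:

```ruby
# Slide a width-w window across the decimal digits and keep the
# largest w-digit value seen. w = 2 is the problem as stated.
def largest_window(x, w = 2)
  x.to_s.chars.each_cons(w).map { |win| win.join.to_i }.max
end

largest_window(8164823)     #=> 82
largest_window(8164823, 3)  #=> 823
```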

Of course you are already thinking of the more general variant, where I give you an \(x\) and a strictly positive integer \(d\), which is the number of consecutive digits to look for, and you’re supposed to give me the largest-valued number expressed in \(d\) consecutive digits of \(x\).

But hey. We all have to draw the line somewhere, right?