
Some examples of generalized continued fractions

Draft of 2017.12.22

May include: GP benchmarks &c.

Continuing my exploration of continued fractions from the other day, I’ve been writing a little Ruby library for evaluating generalized continued fraction values. It works well enough—after all, to evaluate a continued fraction’s value based on the constants, it’s just a matter of plugging them into a simple (though recursive) formula—but like all the benchmarks I pose for GP, I’d like to make it work myself.

So I’ve spent a couple of hours this morning playing with ways of hill-climbing to find a generalized continued fraction representation of a given target value. That is, given some constant \(c\), I want to find values \([b_0,a_1,b_1,\dots,a_n,b_n]\) for this formula:

\[c = b_0 + \cfrac{a_1}{b_1 + \cfrac{a_2}{b_2 + \cfrac{a_3}{b_3 + \dots}}}\]

Now in Ruby (as in many other handy scripting languages), there’s a lovely little Rational class built in, which handles arbitrary-precision fractions. So I’ve been working so far with integer-valued constants, and looking for assignments that provide an exact match for \(c\). This has helped me sidestep some of the problems with floating-point calculations that one might worry about, and it’s been interesting as well.
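To pin down what I mean by “evaluating” one of these: the whole thing folds up from the innermost term outward. Here’s a bare-bones sketch of that calculation (not my library code, and the name evaluate_gcf is just something I’m using for this post), leaning on Rational for the exact arithmetic:

```ruby
# Evaluate a generalized continued fraction given as [b0, a1, b1, a2, b2, ...],
# folding from the innermost term outward with exact Rational arithmetic.
def evaluate_gcf(constants)
  b0, *rest = constants
  pairs = rest.each_slice(2).to_a            # [[a1, b1], [a2, b2], ...]
  tail = pairs.reverse.reduce(Rational(0)) do |acc, (a, b)|
    Rational(a) / (Rational(b) + acc)        # a_i / (b_i + everything deeper)
  end
  Rational(b0) + tail
end

evaluate_gcf([3, 1, 4, 1, 5])   #=> (68/21), which is 3 + 1/(4 + 1/5)
```

Because everything stays a Rational, the values come out exact, which is what lets me demand exact matches against the target.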

So basically my code (which I won’t show yet) does hill-climbing on the set of constant values: it picks some arbitrary starting vector of constants, and then tries changing each value a little bit up or down to make a number of small-scale variant vectors. Then it sorts these variants by how far their exactly-evaluated \(c\) values land from the target, and if the closest-scoring variant is better than the best we’ve found so far, we throw away the old one and keep the better new one.

“Try changing something, see if it’s better, if it is keep it, if it isn’t, try again.”
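Stripped way down, one pass of that loop might look something like this. (Again, this is a sketch of the shape, not my actual code; it leans on the evaluate_gcf sketch above, the ±1 nudges are just a guess at a reasonable step size, and any variant that can’t be evaluated gets scored as hopeless—more on that wrinkle below.)

```ruby
# Score a candidate vector by how far its exact value lands from the target.
# Variants that blow up with a division by zero (see below) count as hopeless.
def score(constants, target)
  (evaluate_gcf(constants) - target).abs
rescue ZeroDivisionError
  Float::INFINITY
end

# One hill-climbing step: nudge each constant up and down by 1, and keep the
# best-scoring variant only if it actually beats the incumbent.
def climb_step(best, target)
  variants = best.each_index.flat_map do |i|
    [-1, 1].map { |delta| best.dup.tap { |v| v[i] += delta } }
  end
  candidate = variants.min_by { |v| score(v, target) }
  score(candidate, target) < score(best, target) ? candidate : best
end
```

Run that in a loop from some random starting vector until the score hits zero (or you lose patience), and you get results like the ones further down.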

Interestingly, the “landscape” of the continued fraction function seems to have some local traps. At least that’s what it feels like: when I’m running this hill-climbing process it can occasionally fail (within the time I’ve given it) to converge to a correct value. So that’s something to remember. Maybe this isn’t a monotonic and smooth function, huh?1

Also, it turns out that there are several places in the expanded form above where division by zero becomes an issue, especially if one’s working with exact values. Sure, if any \(b_i\) is zero, and \(a_{i+1}\) is also zero, then the denominator of the convergent at level \(i\) will be zero, and as we say in the discipline “that shit gets all womper-jawed”. But there are also zeroes in denominators when the sum in a denominator works out to zero, not just when both values are zero. So that’s more bad.
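To make that concrete (with made-up numbers, just to show the shape of the failure, using the evaluate_gcf sketch from above): here no single constant is zero, but the first-level denominator \(5 + \cfrac{-15}{3}\) collapses to zero anyway, and exact Rational arithmetic raises instead of quietly returning garbage:

```ruby
# b1 + a2/b2 = 5 + (-15)/3 = 0, so the a1/(b1 + ...) division blows up.
evaluate_gcf([2, 7, 5, -15, 3])   # raises ZeroDivisionError: divided by 0
```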

In any case, if you try this for yourself, you may have to decide whether to do the rigorous arithmetic you’d need to know which values are permitted in any given variation of the vector… or probably (like me) just handle exceptions in your code. Depends what you’re looking for.

Me, I was looking for some evidence of my suspicion that a process like this could easily find numerous variations on the vectors of constants, and that those vectors wouldn’t be obviously related to one another. So here are some interesting—to me—values I’ve uncovered while writing and debugging and playing with hill-climbing algorithms.

\[1000 = [43, 40, 1, -9, 10, -20, 33, -6, 81, 19, 37, 9, 95, -29, 58]\] \[1000 = 43 + \cfrac{40}{1 - \cfrac{9}{10 - \cfrac{20}{33 - \cfrac{6}{81 + \cfrac{19}{37 + \cfrac{9}{95 - \cfrac{29}{58}}}}}}}\]

and that’s definitely a bit different from

\[1000 = [21, 13, 1, -5, 6, -11, 12, -6, 29, 4, 62]\] \[1000 = 21 + \cfrac{13}{1 + \cfrac{-5}{6 + \cfrac{-11}{12 + \cfrac{-6}{29 + \cfrac{4}{62}}}}}\]
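For what it’s worth, the little evaluator sketched above agrees exactly with both of these:

```ruby
evaluate_gcf([43, 40, 1, -9, 10, -20, 33, -6, 81, 19, 37, 9, 95, -29, 58])  #=> (1000/1)
evaluate_gcf([21, 13, 1, -5, 6, -11, 12, -6, 29, 4, 62])                    #=> (1000/1)
```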

I thought maybe I would try some fractions as well, just to be complete (and because floating-point targets, at least using this approach, are just fractions whose denominators are powers of ten). So:

\[1/1000000 = [0, 1, 1000, -4500, 3, -23, 7, -19, -29]\] \[\frac{1}{1000000} = 0 + \cfrac{1}{1000 + \cfrac{-4500}{3 + \cfrac{-23}{7 + \cfrac{-19}{-29}}}}\]

What’s that mean? Well, let me work it out by hand, since it’s not very big:

\[\cfrac{-19}{-29} = \frac{19}{29}\] \[\cfrac{-23}{7 + \cfrac{19}{29}} = \frac{-667}{222}\] \[\cfrac{-4500}{3 + \cfrac{-667}{222}} = 999000\] \[0 + \cfrac{1}{1000 + 999000} = \frac{1}{1000000}\]

Which, as they say in the business, is “obvious once you see it”.2 Here’s another version of \(\frac{1}{1000000}\), which gets even longer:

\[1/1000000 = [1, -1, 1, 1, -1, 10, 1, -1, 1, 1,\\-2, 11, 1, -1, 1, 1, 0, 9, 0, 1, 2, 9, 0, 1, 4, 9, 0, 1, 8, 2, 0, 1, 2]\] \[\scriptsize{}\frac{1}{1000000} = 1 - \cfrac{1}{1 + \cfrac{1}{-1 + \cfrac{10}{1 - \cfrac{1}{1 + \cfrac{1}{-2 + \cfrac{11}{1 - \cfrac{1}{1 + \cfrac{1}{0 + \cfrac{9}{0 + \cfrac{1}{2 + \cfrac{9}{0 + \cfrac{1}{4 + \cfrac{9}{0 + \cfrac{1}{8 + \cfrac{2}{0 + \cfrac{1}{2}}}}}}}}}}}}}}}}\]

You can see a few things in this. First, that surely this isn’t anything like “standard form” for these continued fractions. But I suspect you can also tell what sorts of initialization I used in my “random guesses” in the various hill-climbing passes, and maybe even notice how opaque the numbers end up looking.

Imagine we’ve asked a software-synthesis system (aka “genetic programming”) to write an algorithm to “find constants for some \(c\)”, and it started pooping out weird-looking numbers like those. But those numbers are right (as long as I’ve typed them in correctly here), so you can’t blame the poor thing for doing what you asked. When I say that “making changes in the user’s head” is one of the core use cases of these artificial systems, this is what I mean. If those representations aren’t what you want, then what should you ask for in addition?

I mean, think about it: Could it be that what you really want is for \(1000\) to be \([1000]\), not some smaller number plus a weird back-asswards way of representing “some other smaller number”? I think maybe. But is it the case that you want the “standardized” form? In all situations? I think not.

Or suppose you decide you don’t want to see different answers when you ask for \(1000\) twice in a row. Again, which of the infinitely many representations do you then want? Why? Do you want one which permits only integer constants, or one that also permits rational (non-integer) constants? Or one that might include negative floating-point constants? Or complex numbers? Why any of those? Why not?

Could it be that you’re interested (as many folks seem to be) in the relative rate of convergence of the representation to the target value? I mean, when I write \([1000]\) vs. \([21, 13, 1, -5, 6, -11, 12, -6, 29, 4, 62]\), I can see that the latter converges noticeably more slowly to the target (there’s a quick check of that below). But what if I don’t actually know a good representation for \(c\) in this format? For instance, what if \(c = c_1 + c_2\), and I just want any value that satisfies that relation for two known summands?
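Here’s the quick check I mean, using the same sketchy evaluator: chop the \([21, 13, \dots]\) vector off after each \((a_i, b_i)\) pair and evaluate the truncations. The partial values wander around for quite a while before landing on \(1000\), whereas \([1000]\) is there in one step:

```ruby
vector = [21, 13, 1, -5, 6, -11, 12, -6, 29, 4, 62]
(0..(vector.length - 1) / 2).map { |k| evaluate_gcf(vector[0, 1 + 2 * k]) }
#=> [(21/1), (34/1), (99/1), (814/1), (23012/23), (1000/1)]
```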

That last question is particularly interesting, you may notice. Because arithmetic on continued fractions—without converting them to “normal” numbers—is a notoriously hard problem. But it would sure be great to have, because floating-point numbers are crap in comparison….

More (including my code) in a bit. I just wanted to show and tell. And also play with MathJax’s support for the \cfrac{}{} function….

  1. Spoiler: it ain’t

  2. No it isn’t.