Draft

Can I blame Herb Simon?

Draft of 2009.02.01 ☛ 2015.03.17 ☛ 2016.07.22

May include: GP, philosophy, advice, &c.

I’m working on a presentation and a chapter for the forthcoming GPTP Workshop, and trying to capture something that’s bothered me for… well, as long as I’ve been writing computer simulations and doing algorithmic search and optimization, which is (jesus) like 3/4 of my life. And more so recently, when I went back to graduate school in Industrial & Operations Engineering, and was exposed to a suite of cultural norms I had only experienced indirectly when I was a biologist.

And I’m not sure how best to put my finger on it or sum it up, so let me just dump a little pile here to fester while I try to think more: A core myth of “modern” computer science and applied mathematics—a foundational one it seems—is that algorithms are autonomous and atomic.

And yes, this probably seems like a “yeah, so?” realization. But I sit here working on the Nudge system and designing it to be used interactively in exploratory settings (unlike, as far as I know, any other GP system). And I found myself rolling my eyes (again) at the senseless folderol a computer science graduate was saying about software development the other day at lunch, about how anything that “answers the question as fast as possible” is the best programming solution, QED. And so on.

I can’t think of a single example of a search, optimization, machine learning, neural net training, agent-based simulation, AMPL optimization or other programming project and its “runs”, over a 25-year span, where I didn’t watch what was happening, see a problem, stop the “run”, make changes, and re-start it. Not one. I’ve fiddled with training/test data breakdowns, seen symptoms of bugs and model deficiencies and statistical anomalies that led me to intervene, seen slowness (or over-eagerness) to converge that led me to improve my code, and seen transient patterns that were more useful or interesting than the “real” program paid attention to.
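What I have in mind looks something like the sketch below: a minimal, hypothetical search loop, in plain Python with nothing from Nudge or any real library, every name invented for illustration. The point is structural: it assumes a person is watching, expects to be interrupted, and hands its state back so the run can be fiddled with and resumed.

```python
import random

def score(x):
    """Toy objective, standing in for whatever expensive fitness
    function you actually care about."""
    return -abs(x - 42.0)

def search(state=None, step_size=1.0, steps=10_000, report_every=1000):
    """Hill-climb from `state`, narrating progress so a human can watch.
    Returns the current state on completion *or* interruption, so the
    caller can inspect it, change step_size, and call search() again."""
    x = random.uniform(-100.0, 100.0) if state is None else state
    try:
        for i in range(steps):
            candidate = x + random.gauss(0.0, step_size)
            if score(candidate) > score(x):
                x = candidate
            if i % report_every == 0:
                print(f"step {i}: x = {x:.3f}, score = {score(x):.3f}")
    except KeyboardInterrupt:
        print("stopped by hand; fiddle, then resume with search(state=...)")
    return x

if __name__ == "__main__":
    best = search()                           # watch; interrupt when it bores you
    best = search(state=best, step_size=0.1)  # …then restart it, changed
```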

Well, OK: Maybe I’m not a very good programmer. This is a thing I would agree with.

I note that I haven’t written a paper or even an email without revision. I haven’t had an earnest conversation on a technical topic without some minor argument and restatement and analysis. I haven’t willingly programmed in maybe a decade without unit tests and a dynamic notion of “requirements” and “goals”. And I haven’t been in a seminar without questioning the direction of the research, asking about tangents and parallel tracks and the roads untraveled.

It’s what people do.

Yet in AI research, and not just in the little byway that’s genetic programming but also in the broader world of computer science and operations research and machine learning and data mining and so on, people still act as if analysis, modeling, design and programming were something utterly, distantly separate from the execution of code. As if there were a “right” algorithm in the general case, as if faster were always better, as if it were not the job of an engineer to know anything about the domain, or to adapt in any way to “externalities”.

As if you could specify a problem up front, spell out everything in a nice three-ring binder, and “hand” this specification to some plug-compatible mechanistic “solver” or “programmer” that was optimally fast and provably convergent and correct in the limit, and the lights would flash and the bell would go “ding” and a little punch card would poke out at you like the pert tongue of Athena herself with the answer.

This is a problem for me.

Quite literally, since I gladly walked away from my last Ph.D. program (which was an excellent one in its field) for essentially this core difference. There’s something wrong, and I increasingly believe dangerous, about… well, something I can’t quite name. Call it “hubris” or “cowboy culture” or “objectivism” if you really want to get nasty. That suite of traits that includes financial engineering’s unquestioning reliance on stupid “simplifying” assumptions; and computer science’s interest in algorithmic complexity at the expense of finding answers to questions; and almost all of operations research, where “be wise, linearize” is a mantra; and my own technical specialty of metaheuristics, where even today people hand me charts labeled “average performance vs. time” no matter how many times I reject their papers and yell at them in print because I have never cared about average performance.
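(To make that last complaint concrete, here is a toy illustration in Python, with invented numbers and no real solver: two stochastic “solvers” whose averages agree to within a hair, and whose value to someone who runs each a few times and keeps the best result could not differ more.)

```python
import random
import statistics

random.seed(1)  # reproducible toy runs

def solver_steady():
    # always lands near 50: respectable, boring
    return random.gauss(50.0, 2.0)

def solver_lumpy():
    # usually poor, occasionally brilliant; nearly the same mean on paper
    if random.random() < 0.1:
        return random.gauss(100.0, 2.0)
    return random.gauss(44.4, 2.0)

steady = [solver_steady() for _ in range(1000)]
lumpy = [solver_lumpy() for _ in range(1000)]

print("mean        steady: %5.1f   lumpy: %5.1f" %
      (statistics.mean(steady), statistics.mean(lumpy)))
print("best of 100 steady: %5.1f   lumpy: %5.1f" %
      (max(steady[:100]), max(lumpy[:100])))
```

An “average performance vs. time” chart treats those two as twins. Anyone who actually uses a stochastic search tool, running it a few times and keeping the best, knows better.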

There’s a stink of mind:body duality in there. A kind of biased religio-mathematization that imagines there is a best, an ideal, a way of delimiting an idealized set of problems that is better and more tractable and more elegant than any single instance.

Than the real world, for example.

And increasingly, I think Herb Simon is to blame for it all.

Well, OK. I am tempering this a bit. Maybe I don’t blame Herb Simon, who is Blessed in Name and Deed. But I definitely blame the folkloric Herb Simon, as remembered and imagined and glibly glossed by people who ignore what he actually said. False Simonists, as it were.

When I’m designing a genetic programming system, or a multiagent simulation, or a software development (not computer science) project, or a meeting or a story for that matter, I’m not looking for autonomy.

The basis of my interest in genetic programming (and machine learning and statistics more generally) is how it aids people. The C programming language, as far as I’m concerned, is not automatically “faster” than Python, because I count the time it takes to think and write and debug and understand a C program and a Python program. If the same algorithm will take ten times longer to code in C than in Python, may hide secret bugs behind stupid pointer errors or strange type handling, and blocks my ability to use test-driven development and emergent software design… that’s worse, not better.
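A back-of-envelope version of that accounting, with every number below invented purely for illustration, not measured from anything:

```python
HOUR = 3600.0  # seconds

# Hypothetical project: the same algorithm, written twice.
c_dev  = 10 * HOUR  # assumed: pointer-chasing, debugging, no easy TDD
c_run  = 60.0       # assumed: the compiled program itself is quick
py_dev = 1 * HOUR   # assumed: tests-first, fast iteration
py_run = 600.0      # assumed: ten times slower at runtime

print("C  total: %6.0f seconds" % (c_dev + c_run))    # 36060
print("Py total: %6.0f seconds" % (py_dev + py_run))  #  4200
```

By the only clock I own, the “slow” language finished hours earlier.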

And that same shortcoming is true, I realize, about the way academics approach nonlinear programming and bioinformatics and swarm-based computing and stuff, too. Papers are written, projects undertaken, grant monies spent, and graduates pooped out into the workplace as if people who haven’t even met me could determine what I wanted in a given situation.

They piss me off like the worst marketers do, in other words. [Ironically, the most beloved of my academic friends never watch TV, and the most beloved of my marketing friends never pay attention to the math….]

Here: No matter what your professor tells you, people still have to analyze and model a problem; spend time typing C or Python or AMPL code somewhere; debug semicolons or memory management or matrix definitions or recursion stacks; spend hours staring at results trying to concoct rules from their intuitions for acceptability (or risk re-running their experiments tenfold with different parameters in an attempt to “get better results”).

I count the conversations, the lab meetings, the code review and unit test writing, the peer review and the conferences and the late nights spent working, waiting to see—like Kekulé—the devils dance in a circle before we understand benzene’s structure. I count how hard it is to talk about something, how long it takes to see a way of solving a problem, how hard it is to understand what you have in the end, to tell whether you’re “done” or not. And how hard it is to do it again, to re-use what you’ve learned. I count that as wall-clock time, as my own measure of “net computational complexity”.

I suppose my mental model is much more a kind of heuristic conversation, a partnership between mathematics, man and machine. Where software and mathematics are simply ways of framing special parts of a conversation.

Value does not automatically come with speed, or even with rigor. I do not value rigor in my conversations; I find it cloying. I prefer exploration (of ideas and errors) and exploitation (of good ideas and cliches) in balance, not just one or the other.

Why do you think I blame Herb? Hint: pragmatism. And if not Herb, who should I blame?

update: Part of why I want to blame Herb Simon comes from conversations with Michael Cohen, some years ago. See, for example, his “Reading Dewey: Reflections on the Study of Routine”, Organization Studies, vol. 28 (2007), p. 773.