Draft

Humane scale

Draft of 2016.07.26

May include: futurism, security, engineering, &c.

Several sources send me to Bruce Schneier’s latest essay on security and the Internet of Things.

Security engineers are working on technologies that can mitigate much of this risk, but many solutions won’t be deployed without government involvement. This is not something that the market can solve. Like data privacy, the risks and solutions are too technical for most people and organizations to understand; companies are motivated to hide the insecurity of their own systems from their customers, their users, and the public; the interconnections can make it impossible to connect data breaches with resultant harms; and the interests of the companies often don’t match the interests of the people.

This arrives in the same week that several people sent me this excellent piece from Samuel Arbesman admonishing technologists to “think more like Biologists”.

It would be glib and unhelpful to say something about being hoisted by one’s own petard, but there are days when I need to be glib and unhelpful. I have no solutions, but I do have an observation or two.

The technical work I’ve been doing for a long time now revolves around human-engineered systems that are both functional and surprising. There was a wave of management consultant BS in the late 1990s admonishing firms to become “chaordic” or “balance on the Edge of Chaos”1. Their motivation was one part Dot Com Bubble enthusiasm and the beginnings of what’s since become Fetishized Conscienceless Disruption, one part first-hand awe at the realization that the real world does things we can only call emergent at least as often as it’s predictable, and one heavy helping of trying to make a living.

But a lot of that Edge of Chaos rhetoric, if you really read and think about it and look back at how it was received, was heard as a sort of causal narrative like: Complexity is inevitable. Big Things are complex. You want to become Big, right? The Goldilocks Point between Confusing Chaos and Stultifying Order is where a lot of complex systems seem to be. Emergence happens there. Emergence is Innovation, and leads to world-changing transformations sometimes. Therefore: Edge of Chaos for you! Only rarely was this backed up with any particular practical advice on what to do to one’s organization to make it “more chaotic”—because the assumption was always that old-school traditional hierarchical plan-driven work was always unhealthily “orderly”—but nobody ever seems to have mentioned that in nature those “world-changing transformations” are things like avalanches, and hurricanes, and extinction events, and you know mostly super bad shit.

So that advice plays out as: You should poise your organization so that it can cause damage on all scales.

I remember one day around the turn of the century that I apologized to John Holland about my peripheral involvement in the whole craze. I recall telling him I thought it would end badly, that all this rhetoric of “inevitable corrective feedback” would just give explicit permission to Bad Actors (executives, institutional heads, financial folks) to be assholes and claim “the system we designed was self-correcting, so it’s not my fault this happened” afterwards. He was such a nice guy, he let me off the hook immediately. But I don’t think he realized back then how bad it could all get… and so I still feel a sense of partial responsibility.

The only helpful and effective advisors to arise from the residue of that Santa Fe Worldview are, as far as I’m aware, Agile software folks. I don’t mean coaches as such, or the “Scaled Agile” Industry, I mean the principles and stance outlined in the Agile Manifesto. The Scaled Agile Industry seems to be adopting the same approach as the Edge of Chaos folks 20 years ago, telling things “to companies” in order to “help them” better use their people.

Emphasis on “use”.

I’m just going to state that “Scaled Agile” is not in the spirit of the Agile Manifesto, and assume that anybody who wants to argue with me will remain unconvinced.2 If you read the Manifesto, and you think carefully about how complex social systems work at all their various scales, you should see quickly that an agile philosophical stance is incompatible with large-scale institutions. That’s how it works—that’s how human beings work—and I think both the surprising success of agile approaches and the miserable complaints I see about companies forcing their employees to “become agile” are evidence that supports this.

Self-organization isn’t a fix. As old Per was willing to point out in his less boisterous moments, self-organization leads to change on all scales unless you’re careful to tune parameters. “Change at a very large scale” is always a catastrophe, especially for the ones being changed. When you’re engineering something, managing something, trying to do work and solve pragmatic problems, the capacity for emergent change to come along on a large scale at any moment and sweep all your work away is really really bad.
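Per Bak’s point is easy to see in his own toy model, the sandpile. What follows is a minimal sketch of the standard Bak–Tang–Wiesenfeld cellular automaton—my illustration, not anybody’s production code, and the grid size and drop count are arbitrary choices: drop grains one at a time, and once the pile reaches its critical state the resulting avalanches come in every size, from nothing at all to ones that rearrange a large fraction of the pile.

```python
import random

def topple(grid, size):
    """Relax the grid: any cell holding 4+ grains sheds one grain to each
    neighbor (grains falling off the edge are lost). Returns the number of
    topplings, i.e. the avalanche size triggered by the last grain."""
    avalanche = 0
    unstable = True
    while unstable:
        unstable = False
        for i in range(size):
            for j in range(size):
                if grid[i][j] >= 4:
                    grid[i][j] -= 4
                    avalanche += 1
                    unstable = True
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < size and 0 <= nj < size:
                            grid[ni][nj] += 1
    return avalanche

def drop_grains(size=20, drops=5000, seed=1):
    """Drop grains on random cells, recording each avalanche size."""
    random.seed(seed)
    grid = [[0] * size for _ in range(size)]
    sizes = []
    for _ in range(drops):
        grid[random.randrange(size)][random.randrange(size)] += 1
        sizes.append(topple(grid, size))
    return sizes

sizes = drop_grains()
# Once the pile is critical, avalanche sizes span many scales:
# mostly tiny, occasionally enormous.
print(min(sizes), max(sizes))
```

The distribution of avalanche sizes follows a power law: there is no “typical” avalanche, which is exactly why “change at a very large scale” is always available to a self-organized system unless you tune it down.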

One mechanism for coping with these inevitable waves of emergent change is to tune down the self-organization. Impose strict discipline from above. Centralize authority. Restrict the frequency and kinds of interactions among your components. Monitor it all with nice, linear, mathematically-precise metrics. Analyze and then plan and then design and then implement and then test. Never let your different kinds of foods touch.

Another mechanism for coping is to limit the reach of change. Stop being a 5000-employee company. Stop having 193 other companies upstream from you in the supply chain. Stop getting Venture Capital money to hire fifty new employees, just build your business on sales. Stop trying to “scale agile”. Don’t stop letting waves of transformation wash away the structure you’ve built, but rather be mindful and use simple heuristics to limit the damage it causes when it does arrive.

The last mechanism for coping with these waves of emergent change is the worst, and it’s hard to say whether the people who adopt it are stupid, evil, or just both. This is the Breaker’s Way: Expect to grow forever, and promote a culture of competition and mutual disruption internally—but also open boundaries to the outside world, to your surroundings, your context, your “network”. When the inevitable damage begins, and an internal avalanche rips out a part of your institution, that’s when you open the floodgates. You offload the worst damage to the world: your consumers, the government, the polity at large.

I can’t pick between stupid and evil because I’m never quite sure whether the systems that adopt this last stance—Silicon Valley and most of its growth-focused moving parts, Neoliberalism with its twisted and ahistorical sense of “markets”, and Capitalist Finance with its increasingly automated and interconnected self-referential network of debt—have anybody in control. I don’t think there are decision-makers who “decide” that VC will invest in sociopathic social networking, or politicians who “decide” that everybody should work at least 80 hours a week or starve. These are, in a real sense, systemic strategies, not mindful ones.

But on the other hand, there are smart people in Silicon Valley, and in politics, and in finance. I can only imagine that some small but non-vanishing fraction of them must, even occasionally, look around themselves, and notice that these are systems for shunting risk and damage generated from within—by intrinsic dynamics—into an arbitrarily delineated “environment”. Silicon Valley Culture (whether in San Francisco or London or China) shunts risks to the surrounding civil infrastructure; Neoliberalism shunts risks from the Meritocratic Leadership to the citizenry; Finance shunts risk from banks to individuals and “demographic components”. And they are large enough, and growing fast enough, and just by pure probability theory they must contain some smart people… who look around, notice where the destruction is sent, and who choose to do nothing.

That’s what I was apologizing to John Holland about, way back when. He had just written Emergence, and handed me a copy, and I’d read it. And I asked him whether his optimistic emphasis—he was always such a pleasant optimist, unlike me—on creativity and innovation and discovery wasn’t maybe at odds with the way the SOC stuff seemed to play out in practice. And I apologized to him because I had been working as a consultant with the Edge of Chaos crowd, and was telling people about the benefits of emergent phenomena, and (being a biologist) one of my examples was about the “innovation” of photosynthesis. And (as you can tell from reading) I tend to talk a bit when I’m enthused, and so I said something about the origin of Earth’s oxygen atmosphere.

Which was a catastrophe, if you were almost any inhabitant of the planet at the time. Because oxygen is poisonous to almost every living thing that existed up to that point. And they said—the audience guys (they were always guys): Well, we’ll just have to be the oxygen ones, then. And we all had a good laugh, and they paid me Loads of Money and thanked me for the Very Interesting Talk that Gave Them Lots to Think About.

[Insert requisite advertisement here for hiring me to give a very different but also quite Interesting Talk that Will Give You Lots to Think About]

And then a few months later it came back to me when I was at a workshop with John. I felt the need to apologize. Because Emergence is in fact a lovely optimistic book about using self-organization and complex emergent behaviors, and is full of great insights. But the people we had been consulting with just heard (from the Edge of Chaos consultants) what they had already decided to do: They wanted to become the biggest sandpiles of all.

They were smart enough to know that the big sandpiles overflow and cover up the little ones. They got that. They knew. That’s what I pointed out to John.

He was always such a pleasant optimist. He said he didn’t think they’d be that way, but said he forgave me anyway.

And so

I don’t know if it’s better for Technologists to think like Biologists. They are already—as Schneier’s piece on the coming catastrophes in the Internet of Things points out in excruciating detail—acting like biology. Like the inventors of photosynthesis, in fact. I wonder whether teaching them to think in such a way that they see these openings as strategic opportunities is perhaps not the solution.

Not being quite so stupid or evil—on a systemic level, if not an individual one—might be a better place to start.

  1. My one-time thesis advisor was one of the leading proponents of this… “approach”, but I did what I could to try to ameliorate it. 

  2. Not least because almost certainly they have no fucking idea who I am or why they should listen to me. Since my solution to “Scaled Agile” is inevitably “dissolve the company down immediately and irrevocably into many small companies, and help those become agile and productive in themselves”, they probably don’t want me hanging around their employees anyway. Which is fine.