Draft

How might one compare “agile” to “other” software development usefully?

Draft of 2012.03.06 ☛ 2016.12.12

May include: other&c.

Two tweets from a conversation on modeling software development:

This is an interesting proposition, but having looked over the work of Abdel-Hamid a bit, I’m wondering if we might consider a more agent-based approach.

The organizational behavior and systems-dynamics framework we see in that corpus is admirable and interesting, but one has to admit it’s focused on aggregate behaviors of “developers” and “management” observed from the standpoint of a corporate planner stakeholder. I don’t feel obliged to have that conversation any more.

What does one want to explore? The things that set apart agile projects from inagile ones, I suppose: Risks of the many modes of failure; adaptability; scaling; work load; business value delivery rates and probabilities; subjective experience; knowledge sharing; something about the many stakeholders (“management”, “team”, “customer”, “users”)?

All possible. I’ve had some incidental conversations with Ron and Chet about this, too, over the years. Maybe it’s time to pursue it a little ways….

But what does one want to compare?

Who are you, and what do you want?


In a comment, dated 2012-03-06 08:42:02 GMT, Laurent Bossavit wrote:

I never got very far, mind you. Back when I was exploring those ideas, my thinking was along the following lines:

I’d start trying to model as closely as possible the assumptions embodied in the “waterfall” approach, then question my model from an Agile POV, looking for reasons why this model couldn’t possibly be the whole story.

For instance, I’d model a software project as 100 “chunks” of value, and assume that these 100 chunks started out as being of the Specified type, and could be transformed without error into 100 chunks of the Designed type, then into 100 chunks of the Coded type, then into 100 of the Tested type, and so on.

There would be no error at any step along the way, and each transformation would be fully deterministic.
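
A minimal sketch of that pipeline, just to make the assumptions explicit (the stage names are Laurent’s; everything else here is my own framing):

```python
# Deterministic "waterfall": every chunk advances one stage at a time,
# every transformation succeeds, and all 100 chunks end up Tested.
STAGES = ["Specified", "Designed", "Coded", "Tested"]

def deterministic_waterfall(n_chunks=100):
    chunks = ["Specified"] * n_chunks
    for next_stage in STAGES[1:]:
        chunks = [next_stage for _ in chunks]   # no errors, fully deterministic
    return chunks

assert deterministic_waterfall().count("Tested") == 100   # it "always works"
```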

Given these assumptions it’s clear that Waterfall “always works”, and it’s an observational fact that Waterfall doesn’t in fact “always work”, so I’d go looking for which assumption didn’t reflect reality.

So, for instance, I’d set a probability p that a Coded chunk had an error, and the testing step would return it to the Coded state instead of advancing it to Tested. You’d need to iterate the testing step until you had 100% of the project value.
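
A sketch of that rework loop, assuming (my reading of Laurent’s description) that each testing pass either promotes a chunk or, with probability p, sends it back for another pass:

```python
import random

def waterfall_with_rework(n_chunks=100, p=0.3, seed=0):
    """Each testing pass either promotes a Coded chunk to Tested or,
    with probability p, leaves it Coded for another pass."""
    rng = random.Random(seed)
    tested = 0
    remaining = n_chunks            # chunks still in the Coded state
    passes = 0
    while remaining > 0:
        passes += 1
        failed = sum(1 for _ in range(remaining) if rng.random() < p)
        tested += remaining - failed
        remaining = failed
    return tested, passes

# Still delivers 100% of the project value; it just takes more testing passes.
print(waterfall_with_rework())
```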

What do you know, Waterfall still always works under these assumptions.

So I’d have to introduce still more realistic elements: for instance, distinguish the chunks and introduce a dependency ordering, so that an undetected error in one of the chunks could propagate to chunks that depended on it.

The problem is that by that point I felt like an explosion in model complexity was inevitable, and didn’t know how to deal with it. Even this small assumption of dependencies between chunks can be implemented in any number of different ways, with no good reason to pick one over the other. Similarly, modeling how errors cause further errors seemed like a huge can of worms to open.
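
To make that point about arbitrary choices concrete, here is just one of the many possible implementations (a linear chain of dependencies; every probability and name below is my own invention):

```python
import random

def waterfall_with_dependencies(n_chunks=100, p_error=0.2, p_detect=0.7,
                                p_propagate=0.5, seed=0):
    """Chunks form a linear chain; an error that testing misses in chunk i
    may corrupt chunk i+1 before it ships."""
    rng = random.Random(seed)
    has_error = [rng.random() < p_error for _ in range(n_chunks)]
    shipped_broken = 0
    for i in range(n_chunks):
        if has_error[i] and rng.random() < p_detect:
            has_error[i] = False                   # caught and fixed in testing
        if has_error[i]:
            shipped_broken += 1                    # ships with a latent error
            if i + 1 < n_chunks and rng.random() < p_propagate:
                has_error[i + 1] = True            # and corrupts its dependent
    return n_chunks - shipped_broken               # chunks that deliver value

print(waterfall_with_dependencies())
```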

It felt to me as if this one thing, the “problem of quality” or the “problem of errors” played a major role in “Agile vs what came before” (insofar as that comparison made any sense). But it was also a scary problem, and I think the truth is that I retreated from approaching it, at least with simulation as an instrument of exploration.


And my reply, dated 2012-03-06 15:33:58 GMT:

Well, one of the things that comes immediately to mind is an investigation and comparison of the various risks of failure: risk of misspecification, risk of change in business needs over the lifetime of the project, risk of poor implementation, and so forth.

One useful simplification with a lot of philosophical weight might be to represent the project as a model of some complex and dynamic externality. The “customer” (read: “business”) would want to minimize the errors in their complex model of the world at the point where the model is put into production, and would only get value based on how much of the business world’s environment it manages to capture.

So for example, suppose the world is “really” some function \(\vec{x}_t = \vec{F}(\vec{x}_{t-1})\), with \(\vec{x}_t \in \mathbb{R}^m\) and \(\vec{F}: \mathbb{R}^m \to \mathbb{R}^m\), so that the state of the world at time \(t\) depends on the variables \(x_i\) at time \(t-1\).
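
Just to have something concrete in mind (an arbitrary choice of mine, not anything from the conversation), the world function might be as simple as a linear map,

\[
\vec{x}_t = A\,\vec{x}_{t-1} + \vec{b}, \qquad A \in \mathbb{R}^{m \times m},\ \vec{b} \in \mathbb{R}^m,
\]

with the project’s job being to deliver some approximation \(\hat{F}\) that tracks it.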

A “waterfall” project would spend time collecting a whole bunch of data, and modeling the entirety of the “world function”, and then produce a requirements document which would be implemented. Suppose the development team takes a simulated week for every “quantum” of the model to be implemented, and the testing team takes a simulated week for every “quantum” of the model to be tested: when does business value get delivered to the customer?
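
Under those pacing assumptions, and assuming (as the waterfall caricature has it) that the phases run strictly in sequence, a model of \(q\) quanta delivers nothing at all until roughly

\[
t_{\text{first value}} \approx t_{\text{spec}} + \underbrace{q}_{\text{implement}} + \underbrace{q}_{\text{test}} = t_{\text{spec}} + 2q \ \text{weeks},
\]

at which point all of the value arrives at once.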

An “agile” project might spend less time collecting data, and would begin by delivering, as soon as possible, the single “quantum” of the model which returned the most value to the customer. For example, it might return the average of the last three \(x_t\) values seen, which would be better than nothing, and then work in subsequent weeks on greedily improving the approximation of the “world model”, without losing any accuracy in anything already explained.
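
A toy comparison of the two value-delivery curves under these assumptions (the specific numbers, the 1/k worth of the k-th quantum, and the pipelined one-tested-quantum-per-week pace for the incremental team are all my own inventions, there only to make the shapes visible):

```python
def value_curve(weeks, spec_weeks, quanta, incremental):
    """Cumulative value delivered by week: quantum k is worth 1/k, and the
    greedy (incremental) team builds the most valuable quanta first."""
    worth = [1.0 / k for k in range(1, quanta + 1)]
    curve = []
    for week in range(1, weeks + 1):
        if incremental:
            done = max(0, min(quanta, week - spec_weeks))   # one tested quantum per week
            curve.append(sum(worth[:done]))
        else:
            finished = week >= spec_weeks + 2 * quanta      # all value at the very end
            curve.append(sum(worth) if finished else 0.0)
    return curve

quanta = 20
waterfall = value_curve(60, spec_weeks=8, quanta=quanta, incremental=False)
agile = value_curve(60, spec_weeks=2, quanta=quanta, incremental=True)
# Area under each curve, i.e. value-weeks enjoyed by the customer:
print(sum(waterfall), sum(agile))
```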

Anyway, that’s my sketch. One would need to consider what a “quantum” of modeling is: maybe a certain amount of accuracy on one test case, or something? If the “world model” is so complicated that it would defy the toolkit available to either group (say it depends on unmeasured externalities, or noise, or changes over time, or is affected by the progress made by the “development process” itself, or any other realistic driver), then the comparison between early delivery of partial results vs. delayed delivery of rational results would be pretty straightforward.