Raising the Bar for Models in the Agile/Lean Community

We are just a few days after the amazing experience that was ALE 2013. I enjoyed spending three days in a large family of European Agile and Lean practitioners, and I learned a lot from the conference. I’ve seen many enthusiastic blog posts after the event, and I’m glad that so much learning happened.

But this blog post will not be another one praising the experience. Instead, it will be about this: we’ve done it, it was great, how can we make it awesome?

I haven’t yet gathered all my thoughts after the conference, but there is one that keeps coming back to me. A few speakers presented models to the audience: Jurgen talked about the learning model, Vasco talked about a model without estimates, and I talked about a model for incremental design. There were others too, which I don’t mention only because I can’t recall them right now.

Having models is excellent. It shows that agile and lean thinking is evolving and maturing. It’s the same thing that happened with our understanding of the Solar System and then of the Universe, or of the way matter is structured. But I couldn’t shake the feeling that all these models (yes, including my own) lacked something. And that something is boundaries.

I propose that from now on, when we define a model in the ALE community, we follow this structure:

  • What the model is
  • Why it is helpful
  • Examples of application
  • Assumptions
  • When it doesn’t apply (examples or description)

I’ve seen that models presented in the agile and lean community often fall short on the last three points, although all five are equally important.

Having real examples is important for showing that the model has been used before and that it worked. First-hand examples are better, but documented examples from serious books can also be used.

Documenting the assumptions will help everyone understand what works and what doesn’t. For example, “No Estimates” seems to assume that the stakeholders trust the team. But that is only my inference from what I’ve heard about No Estimates, and I might be wrong.

Giving examples of when a model doesn’t apply helps define its boundaries. For example, retrospectives don’t work in an environment where people don’t trust each other or aren’t truthful and transparent with one another. As for incremental design, it is my hypothesis (not yet proven or disproven) that applying incremental design to a problem with a well-known solution makes no economic sense.

Being a speaker myself, I realize this is a tough requirement for speakers. After all, we speak at conferences to spread ideas. It’s easy to use rhetorical devices such as jokes, stories, theatrics, and quotes to introduce an idea; it’s much less exciting to talk about the limits of your idea. I believe, though, that we should start trusting that the ALE community is mature enough to understand and appreciate a speaker who talks about the limitations of a model, not only about its good parts.

Readers who are interested in science might recognize that the principles I’m advocating fall far short of the standard for scientific models. This is on purpose. I think we still have a lot to learn about conducting experiments in software development, and that this job is best left to people who have experience with it. I also believe that collaborating with such people is something we need to develop in the future if we want to advance the state of software development. But we need to take it one step at a time, and I believe this is a step we can take over the next few years.

By the way, although it might sound like I took a shot at No Estimates, I have to mention that Vasco Duarte was the only speaker I saw at the conference who presented data gathered from real projects. That’s great, and I believe we should encourage other people to do the same.

In conclusion, I’m offering a model I believe we should use when we present models to the ALE community. It is up to the community to accept this model and to encourage, challenge, and help speakers rise to its requirements. It won’t be easy, but we’ve already seen that ALE is a forgiving environment, so I’m sure we can do it.

5 comments

  • This is very, very close to a pattern language (as taught to me by Linda Rising). The addition of examples isn’t explicitly called out in the Patterns Handbook, as far as I can see, but most of the patterns I’ve seen do have one or two.

    I fully agree that we should be using this or a similar structure, and I agree that we don’t share failures and limits enough. That is something we were bewailing at the last Lean Systems Society: we only ever hear about successes. Finding out where BDD fails was a massive leap forward for me. The only problem, of course, is that we don’t know what we don’t know.

    I do agree that we should be calling out the limits of models. I think it’s hard to do when you’re the person who created the model or communicated it widely. I feel lucky because some people did this for me, for models I was using, very respectfully and very helpfully.

    I really hope this post helps other people get the same benefit!

    • Thank you, Liz, for your comment. It’s very interesting to hear that you went through this experience; I’d be curious to find out what you discovered about BDD and where it fails.

      It’s true that it’s hard to find the limits of a model when you create it. The scientific community does it all the time, though. The researchers who create a theory find some of its limits because they ask the question, however hard it may be. Peer review is used to find the others. Of course, peer review has a downside: it’s very easy to fall into the trap of destroying a model with a review instead of improving it. My belief is that ALE is mature enough to work on improving models. I also believe that calling out limits should become the norm inside ALE and be recognized as an approach to improving our knowledge of software development.

      • BDD fails when things are very certain (third-party or well-tested libraries, really boring web CRUD forms) or very uncertain (spikes, prototypes, A/B testing). But it can also be used as a sense-making tool to spot when that’s happening, and that has completely transformed the way I teach it now.

        The scientific community would like to be very good at breaking its models. However, under conditions of uncertainty, even statistics researchers exhibit small-sample bias: http://psiexp.ss.uci.edu/research/teaching/Tversky_Kahneman_1974.pdf

        I was also reading “Surely You’re Joking, Mr. Feynman!”, and even in his day he was bemoaning the dearth of people prepared to repeat experiments, both to ensure that the experiments were repeatable and to verify that the context was valid for the previous research they were building on.

        I like to think that the science community does it most of the time. And I’d like to think that we do too.

  • Alex, I fully support your idea to add more to the model definition, and it is all the more to be appreciated because it comes from someone who has presented a model himself.

    The list of items to add to the model definition cannot be prioritized, because all of them are needed. But if I were to choose the ones that should be mandatory, I would pick the last three from your list, because most of the models and ideas I have seen presented at ALE or in any other context include a “model definition” and a reasoning on “why the model is helpful”. Most model presentations stop at those first two, I think because the people who define and pitch a model get very intense about the “why it is helpful” part and get stuck praising it. Your “blueprint of a model” idea raises the bar higher in a community with already pretty high standards, and this is a good thing, pushing all of us towards improvement. On reflection, it is not surprising that this comes from a person who practices and preaches testing principles, as what you have said can easily be compared with acceptance criteria for a model definition. Congratulations!

    • Thank you Lucian, I appreciate your words.

      Please know that I am driven by one desire: to better understand software development. As the history of science has shown us, that’s impossible as long as we describe only the model definition. We wouldn’t know as much about the Universe if Einstein hadn’t discovered that the classical laws of motion are just a good approximation at low speeds, for example. This discovery didn’t cancel the previously defined laws; it greatly enhanced them. I’m aiming at the same thing here: calling out the limits of models so that we can improve them.

      I like your comparison with testing. After all, the code we write is a theory of how an application should work; testing helps us find the limits of the theory and change it so that it better fits reality. In my experience, that’s also one of the hardest ideas for programmers to digest, since most of us are used to thinking there’s a solution for everything.

      Along the same line, automated tests are the precise documentation of the theory. Any change in the tests (including deleting tests, something I’m often asked about) should be celebrated as an advance in our knowledge about reality.
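
      To make this concrete, here is a minimal sketch; the domain, the names and the pricing rule are all invented for illustration. Each test is one precise, falsifiable statement of the theory, and changing or deleting a test records a change in our understanding:

      ```python
      # Hypothetical domain: a pricing rule, stated as a theory of the business.
      def apply_discount(price: float, customer_orders: int) -> float:
          """Current theory: loyal customers (10 or more orders) get 10% off."""
          discounted = price * 0.9 if customer_orders >= 10 else price
          return round(discounted, 2)

      # Each test documents one assumption of the theory, precisely.
      def test_loyal_customer_gets_discount():
          assert apply_discount(100.0, customer_orders=10) == 90.0

      def test_new_customer_pays_full_price():
          # If we later learn that 'loyalty' means something else, changing
          # or deleting this test is an advance in knowledge, not a failure.
          assert apply_discount(100.0, customer_orders=1) == 100.0
      ```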

      But I digress. We can talk some more at one of our meetups :).
