Is Agile still applicable?

I just read this interview article with Bob Martin and I have a few thoughts about it.

Uncle Bob is extremely code-focused and always has been in his career. This isn’t a criticism, far from it, but one must take that into account when reading his opinions on such matters. He is heavily involved in the Software Craftsmanship movement, which he sees as a follow-on to the Agile Manifesto itself.

Like any business practice, Agile has evolved and matured in directions the original 17 signatories never imagined. Practitioners beyond the original signatories have done a lot to further the principles into techniques like Scrum, Kanban, DevOps, etc. This is to be expected, imo, as something matures. Even Lean has evolved far beyond Deming’s original models.

Through the work of others in the field, we’ve learned that the original 4 values and 12 principles are applicable outside the context of software development. The primary direction of this evolution is toward frequent delivery paired with applied empiricism, using data to guide decisions. We are beginning to do this quite well as an industry, imo. At GoDaddy, we are even using Agile practices to manage the work of our training team, and that has been quite successful.

I would offer that his views are, of course, strongly held, and one would likely hear different opinions from the other 16 original signatories. I would even call his views on Agile slightly dated.

In summary, Agile principles can indeed be applied to business practices within the right contexts, and we’ve seen through evidence at other organizations (like Intuit, GE Medical, and others) that Agile practices can indeed scale. It’s not easy to do, of course, because scaling ANYTHING is difficult and more error-prone than small-scale efforts.

I respect and appreciate Bob Martin’s opinions, but in reading this particular interview I think they should be tempered with the evidence we have that shows Agile’s success beyond its original intent.

Memo on O-Ring and Software Erosion

One of the most fascinating documents I’ve read to date is the memo from Roger Boisjoly on O-Ring Erosion. He wrote it in 1985 for the management of Morton Thiokol, about six months before the Challenger disaster.

What I find so striking about this whole story is its resemblance to the field of software engineering. We software developers can relate to this story all too well. I’ve personally been in this situation more than once. Heck, some of us are in similar kinds of situations all the time.

Roger Boisjoly attempted to halt the launch of the Space Shuttle, unfortunately to no avail. In his particular case, a failure of the Space Shuttle would cost the lives of the astronauts on board. But what about the software that we are crafting? In most cases, a failure of the software would not result in the loss of human life. It would cost the company a particular amount of money. Worst case scenario, its reputation would be flushed down the drain alongside the money.

But what should we do when we encounter reliability issues in the software that we’re working on? Well, first of all, we make an assessment of the severity and the impact of these problems and fix them accordingly. But what if it takes us a couple of days, a week, two weeks or even a few months to fix things? Well, then we usually turn to the management staff, explaining in our best non-technical terms what’s going on, and ask them for the time (aka budget) we need. But here comes the tricky part. What if they don’t want to listen? What if they don’t take us seriously? What if they don’t trust that our findings are correct? What if they simply ignore us, wondering why we still have the energy to even come talk to them during or after this death march that we’re in? What if they do listen to our plea, wholeheartedly agree with everything we say, walk us back to the hallway and then simply don’t give us anything except more business requirements and feature requests?

But now on to the million-dollar question. What do we software developers do when we don’t get the time to do the right thing? Well, we could try to fix things in our spare time, sacrificing the precious time we have with our family and friends or perhaps even our sleep for the next couple of days, weeks, months, perhaps even years? I don’t think that this is very sustainable. Or we could simply ignore management, stop adding new features and business capabilities and start fixing things on our own behalf, like some kind of software development team mutiny? I guess this will end up with one or more developers getting fired. Or we could start a campaign on social media about how software developers are being wronged at this or that particular company. Would anyone even care? And this will probably end up in some layoffs as well.

So what should we as professional software developers do to make things right in these kinds of scenarios? Maybe the people of the software craftsmanship movement could draft a script with concrete actions that guide software developers when they find themselves in the same situation as Roger Boisjoly?

I can only wonder what would happen to our industry if only 10% of all software developers would follow the actions described in such a manifesto. Wouldn’t that be useful advice?

Definition of Ready

There is a practice in many agile conversations known as having a “Definition of Ready” for a Product Backlog item. That is, a given User Story (or other PBI) must meet the Team’s “Definition of Ready” to be considered actionable or even worthy to estimate.

This can be desirable from the team’s perspective when they are not getting enough information about the work they are being asked to do by the Product Owner.

While I understand the desire for this from many teams’ perspective, I see it as a potential struggle between precision and self-organization. Lest we forget, backlog items are invitations to conversations in which many of the attributes are detailed through collaborative discussion.

I know that having a Definition of Ready works for many teams. No denying that I’ve seen it be effective in helping clarify ambiguity. This is just a caution that it can go too far in some cases if we trade flexibility for a mandatory checklist.

The Definition of Ready is a practice that I believe makes sense to understand and explain to any team struggling with lack of clarity in requirements, but I also believe it is a practice that a team can elect into, just as we choose Planning Poker vs. some other estimation technique.

Agile 2016 Needs You

You know that new way of wielding JavaScript you’ve been using? Or that container deployment model your company is building? Or the way you’ve integrated UX design with the workflow of a Scrum team? Yeah, we want to hear about that.

It’s that time of year again. Time to submit proposals to the Agile Alliance’s annual conference. This year’s event will take place in July in Atlanta and promises to be a full week of jam-packed learning.

I’ve been going to the annual Agile conference for many years now, and been part of its planning for the last few. This year I am pleased to report that I am the program chair for the technical tracks at Agile 2016 and this means I am looking for you, oh developer who thinks of what they do as post-agile.

Although there are numerous learning opportunities at Agile 2016 including dozens of topic areas, I like to focus on what started the conversation in the first place: Delivering elegant software in a timely manner with high quality and satisfied customers. That’s why I am pleased to be soliciting sessions in the following areas:

  • Development Practices and Craftsmanship
  • DevOps
  • Testing and Quality Assurance
  • UX

These certainly aren’t the only things to learn about at Agile 2016. The Agile conference is the biggest of its kind in the world and attracts presenters and attendees from all over the globe. Although registration is ready and open, this isn’t a call for attendance. This is a call for participation.

Come share what you’ve learned that’s making your team more effective. That’s what it’s really all about. Whether it’s that new stack of JavaScript tools you are using or the container hosting model your company is piloting or the empirical processes you’ve used to build something cool, come share it. I guarantee you’ll be glad you decided to submit a session and you will definitely learn something from someone else, too.

I have never left an annual Agile conference without a new bag of tricks that I want to try for myself, my team, or my company. I invite you to the fray.


I’ve had some great opportunities working with teams applying lean and agile techniques to domains beyond software development. Along with other areas, I’m having lots of conversations around applying Scrum in HR teams.

This is a fairly unusual fit because the core usage model for Scrum is to apply it when the work is complex and largely unknown, and the team is working together to create a shared product. Many knowledge worker teams simply aren’t like this. HR tends to be one of them.

It helps to identify the different types of work happening in each team. I delineate three types of work: project work, ad-hoc work, and repeating work. Here’s what I mean by each.

Repeating Work

This is work that happens on a schedule and happens often. For example, new employees may start on any given Monday in our organization, and processing the paperwork of adding those new employees is something that happens each week, regardless of other plans.

Ad-Hoc Work

Ad-hoc work is what happens when something out of the ordinary occurs and must be dealt with immediately. In IT this would be akin to, “the server is down.” In HR it might be something like, “someone just anger-quit” or some other circumstance that must be dealt with immediately.

Project Work

This is work that fits into a project (has a beginning and end) and has a clear deliverable. This work is planned and is often amenable to being managed with lightweight Scrum. Project work is often the work that motivates people and offers them opportunities to go beyond their traditional responsibilities.

An example of project work might be creating a newsletter for a company’s new wellness program, planning an event, or implementing a new benefit program.

Teams vs. Teams

In agile software development teams, we try to break down silos between individuals and encourage shared accountability for work and output. While this is a worthy goal, it is more difficult in teams like those we find in an HR department. In these teams, we really do find teams of individuals rather than a single cohesive unit focused on one deliverable. While one can argue that it would be an improvement to grow the team into one with shared accountability, this is far easier said than done. Teams of individuals are the norm.

We All Want More Project Work

Project work is typically what moves a team, department, and ultimately a company, forward. Yes, the repeating work must be done to maintain current levels of service and excellence. Yes, the ad-hoc work is often very important and cannot be deferred.

Yet, project work typically is where we get to add new value. We get to create something that didn’t exist before, whether it is software, an education curriculum, a new HR program, or perhaps revamping of an old system.

Project work (sometimes ad-hoc) is where we get to exercise autonomy, mastery, and purpose. Predictably, it is the work that most people find fulfilling. It also happens to be the work Scrum is most adept at managing. Repeating and ad-hoc work are typically things that take away from a Scrum team’s capacity (whether we track it in the Sprint or not).

Lightweight Scrum for Teams of Individuals

Scrum perfectionists beware, I am about to run an Extract Interface on Scrum, leaving a few bits behind for software teams to implement.

We’ve used a lightweight Scrum-like process for managing our family’s work for many years and I’ve found that applying similar practices to non-software teams tends to work well as a starting point. Here is my base formula for an agile practice in environments like these.

  1. Plan work together no less often than every 3 weeks. Focus on project work first while doing so. Leave capacity for ad-hoc work.
  2. Show your work to each other and stakeholders at a Sprint Review. Solicit feedback.
  3. The team reflects on itself and how it does its work each Sprint.
  4. The team keeps a visual representation of the Sprint Plan current at all times. This is typically a wall of cards, Trello, or some other lightweight approach.

Note this is a starting point and not an ending point. Over time, we hope to see more cross-training and generalization in the team. We hope to see fewer incidents of ad-hoc work. We hope the team will actually learn to self-organize. All of these things can indeed happen and more.

There are lots of places to go once you have these basic elements in place.


Several teams I’ve worked with wonder why it is important to finish work within the Sprint timebox. “This is an artificial container for my work,” they explain. “Why can’t our work just take the time it takes? Why must it get all the way done in one Sprint?” In my experience, these frustrated teams are likely looking to move away from poorly implemented Scrum toward what will be disastrously implemented Kanban and just-in-time work processing.

Sprints offer three things that just-in-time work processing does not.

Focus

Software development teams that process work in a just-in-time manner often find a lack of focus. The Sprint container is a way to provide enough room for the creative work software developers do. In other words, time must be provided for developers to focus instead of worrying about the next piece of work they must pull in order to decrease cycle times.

Slack

Slack time is a fundamental ingredient of innovation. Healthy Scrum teams create slack by pulling a little less work into the Sprint than they have capacity to complete. When developers are working ticket to ticket in a continuous flow model, slack is considered waste and teams will strive to eliminate it, bringing innovation to a grinding halt.

With little slack time to grow the capabilities of the team through automation or introducing new practices, the team can stagnate and fail to improve over time.

Prescribed Inspection and Adaptation

When teams stop observing Sprint boundaries for the work they do, they often soon begin to drop the ceremonies of Scrum that made them successful in the first place: Sprint Planning, Sprint Review, and Sprint Retrospectives. Each of these events inspects something and adapts something and when we stop doing them, improvements we’ve made in the past begin to fade away.

These events from Scrum are the backbone of team improvement and ongoing excellence. Losing them will immediately begin chipping away at the improvements they introduced.

Wrapping Up

There are other ways in which Sprints help teams, but these are my top three. If you are tempted to eschew Sprints for just-in-time ticket processing, don’t forget the ingredients of innovation that you risk leaving behind: focus, slack, inspection, and adaptation.


Many software development teams, especially those creating or supporting a shared platform or infrastructure, find themselves overwhelmed by ad-hoc work requests. A frequent response to this situation is to move from an iterative, incremental practice to Kanban or some other flow-based model.

In many situations, this is the exact opposite of what should happen. When a team finds itself foundering under the weight of system support or ad-hoc work requests, it may be time to start batching that work instead of letting it flow more easily in and out of the team.

Learn to Say “Not Now”

“But, I could never do that because we have to work on the issues as soon as they come up,” I can hear you saying. “We could never tell our customers they’ll have to wait for a bug fix or system change.”

A wise person once said, “A lack of time is actually a lack of priorities.” Take a moment to ponder that statement and apply it to your context. Does your team feel responsible for closing all the issues that are brought to it? Probably. Do you (or your organization) measure your team’s success in volume of requests processed? If you are a software development team, probably not.

I get it. It is very hard to say no. So, don’t. Instead, learn to say, “Not now,” and put work on your backlog for a future Sprint. Make the backlog extremely transparent. Be prepared to explain (a lot) why a given request is lower in the backlog than work that genuinely moves the team forward. The urgency around the issue will often dissipate before your next Sprint Planning meeting.

Making over Maintaining

As per the Agile Manifesto, working software is the primary measure of progress. Success on a development team should never be measured in the volume of tickets serviced. That sort of measure is appropriate for an operational support team.

Moving from iterative and incremental delivery models to continuous flow models is often a sign that the team no longer sees its main responsibility as creating software, but in keeping a system up and running. The move from development to operational team is self-fulfilling. In this world, the underlying system is wagging the team instead of the other way around.

If you find yourself in the situation of being expected to create new value through new software features AND being held accountable for servicing a high volume of ad-hoc requests, two things may be true.

  1. Your underlying system is chock full of technical debt. This is often why the ad-hoc requests are coming in in the first place.
  2. You are on a path to a death march. Even a patient executive will eventually demand the team focus on delivering new features or functionality instead of firefighting the old system. Unfortunately, the time available to do this will have been cut short by your unwillingness to say, “Not now,” to all the ad-hoc work you believed was unavoidable.

In a high functioning development team, the majority of time is spent looking ahead and adding new capabilities to a given system. If your team is spending less than 75% of its time doing this, you may be on the verge of “going operational” which ultimately means even less time available for new capability development. “Going operational” often represents defeat for a development team. That isn’t to say that you shouldn’t embrace a DevOps mentality and support your system in production. It just means that you should have more time available for adding new capabilities than servicing ad-hoc requests and defect management.

Learning F# – The Thunderdome Principle for Functions

Back in 2008, Jeremy Miller introduced the Thunderdome Principle, a technique he used for building maintainable ASP.NET MVC applications which later led to the FubuMVC open-source project. The basic premise of the Thunderdome Principle is to have all controller methods take in one ViewModel object (or none in some cases), and also return a single ViewModel object. One object enters, one object leaves!

What I like about F# is that the language designers took a similar approach when they implemented functions. In F#, every function accepts exactly one input and returns exactly one output. This is a different approach compared to C# where there is a distinction between functions that return values and those that don’t return values. This choice made by the designers of the C# programming language even bleeds into the .NET framework itself by incorporating a Func delegate and an Action delegate.

But F# is having none of that! Let’s have a look at the following function and its corresponding type annotation:

let multiply x y = x * y
> val multiply : x:int -> y:int -> int

At first glance this looks like a function that takes two arguments and returns a result. But the type annotation tells us a different story. Here multiply is bound to a function that takes an integer argument “x” and returns a function. This second function takes another integer argument “y” and returns an integer.
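
To make that currying explicit, we can write the same function ourselves as nested lambdas (multiplyCurried is a name of my own invention, shown only for illustration); the compiler infers an equivalent type:

```fsharp
// A function that takes x and returns a function that takes y.
let multiplyCurried = fun x -> fun y -> x * y
// Inferred: val multiplyCurried : int -> int -> int

// Applying both arguments works exactly like the sugared form.
let twelve = multiplyCurried 3 4
```

The `let multiply x y = ...` syntax is simply shorthand for this nested form.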

This approach of “one input value, one output value” has several benefits. One example is that the F# compiler can assume that the last expression in a function is also the return value. Therefore we didn’t need to use the return keyword in our example.

But another major benefit is partial application. Let’s have a look at the following code:

let multiplyByTen = multiply 10
> val multiplyByTen : (int -> int)

let result = multiplyByTen 5
> val result : int = 50

Here the F# compiler evaluated the call to the multiply function as far as possible and bound multiplyByTen to the resulting function, which still awaits its remaining argument. Note that partial application only works on function arguments from left to right.
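
As a small sketch of that left-to-right rule (divide and halve are my own illustrative names): fixing the first argument is plain partial application, while fixing a later argument requires wrapping the call in a lambda:

```fsharp
let divide x y = x / y

// Fixing the leftmost argument is direct partial application.
let divideHundredBy = divide 100      // int -> int

// Fixing the second argument requires a lambda.
let halve = fun x -> divide x 2       // int -> int

let twenty = divideHundredBy 5        // 100 / 5 = 20
let fifty = halve 100                 // 100 / 2 = 50
```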

That’s all fine, but what about functions that don’t take any arguments or don’t have a value to return? This is where the unit type comes in.

let logSomethingUseful () = printfn "Hi there"
> val logSomethingUseful : unit -> unit

Here we have a function with no meaningful arguments and no return value. However, behind the scenes, unit is passed to the function and unit is also returned by the function.

In my humble opinion, this whole “one input value, one output value” approach is a far cleaner model that is easier to understand. On several occasions while developing C# code, I have wished that the .NET framework provided me with only Func delegates and a first-class void type. F# grants me this wish.

Until next time.