Definition of Ready

There is a practice in many agile conversations known as having a “Definition of Ready” for a Product Backlog item. That is, a given User Story (or other PBI) must meet the Team’s “Definition of Ready” to be considered actionable or even worth estimating.

This can be desirable from the team’s perspective when they are not getting enough information from the Product Owner about the work they are being asked to do.

While I understand the desire for this from many teams’ perspective, I see it as a potential struggle between precision and self-organization. Lest we forget, backlog items are invitations to conversations in which many of the attributes are detailed in collaborative discussion.

I know that having a Definition of Ready works for many teams. There’s no denying that I’ve seen it be effective in helping clarify ambiguity. This is just a caution that it can go too far in some cases if we trade flexibility for a mandatory checklist.

The Definition of Ready is a practice that I believe makes sense to understand and explain to any team struggling with a lack of clarity in requirements, but I also believe it is a practice that a team can opt into, just as we choose Planning Poker vs. some other estimation technique.

Agile 2016 Needs You

You know that new way of wielding JavaScript you’ve been using? Or that container deployment model your company is building? Or the way you’ve integrated UX design with the workflow of a Scrum team? Yeah, we want to hear about that.

It’s that time of year again. Time to submit proposals to the Agile Alliance’s annual conference. This year’s event will take place in July in Atlanta and promises to be a full week of jam-packed learning.

I’ve been going to the annual Agile conference for many years now, and have been part of its planning for the last few. This year I am pleased to report that I am the program chair for the technical tracks at Agile 2016, which means I am looking for you, oh developer who thinks of what they do as post-agile.

Although there are numerous learning opportunities at Agile 2016 including dozens of topic areas, I like to focus on what started the conversation in the first place: Delivering elegant software in a timely manner with high quality and satisfied customers. That’s why I am pleased to be soliciting sessions in the following areas:

  • Development Practices and Craftsmanship
  • DevOps
  • Testing and Quality Assurance
  • UX

These certainly aren’t the only things to learn about at Agile 2016. The Agile conference is the biggest of its kind in the world and attracts presenters and attendees from all over the globe. Although registration is already open, this isn’t a call for attendance. This is a call for participation.

Come share what you’ve learned that’s making your team more effective. That’s what it’s really all about. Whether it’s that new stack of JavaScript tools you are using or the container hosting model your company is piloting or the empirical processes you’ve used to build something cool, come share it. I guarantee you’ll be glad you decided to submit a session and you will definitely learn something from someone else, too.

I have never left an annual Agile conference without a new bag of tricks that I want to try for myself, my team, or my company. I invite you to the fray.

Agile in Non-Software Development Teams

I’ve had some great opportunities working with teams applying lean and agile techniques to domains beyond software development. Among other areas, I’m having lots of conversations about applying Scrum in HR teams.

This is a fairly unusual fit because the core usage model for Scrum is to apply it when the work is fairly unknown and complex, and the team is working together to create a shared product. Many knowledge worker teams simply aren’t like this. HR tends to be one of them.

It helps to identify the different types of work happening in each team. I delineate three types of work: project work, ad-hoc work, and repeating work. Here’s what I mean by each.

Repeating Work

This is work that happens on a schedule and happens often. For example, new employees may start on any given Monday in our organization, and processing the paperwork of adding those new employees is something that happens each week, regardless of other plans.

Ad-Hoc Work

Ad-hoc work is what happens when something out of the ordinary occurs and must be dealt with immediately. In IT this would be akin to, “the server is down.” In HR it might be something like, “someone just anger-quit” or some other circumstance that must be dealt with immediately.

Project Work

This is work that fits into a project (it has a beginning and an end) and has a clear deliverable. This work is planned and is often amenable to being managed with lightweight Scrum. Project work is often the work that motivates people and offers them opportunities to go beyond their traditional responsibilities.

An example of project work might be creating a newsletter for a company’s new wellness program, planning an event, or implementing a new benefit program.

Teams vs. Teams

In agile software development teams, we try to break down silos between individuals and encourage shared accountability for work and output. While this is a worthy goal, it is more difficult in teams like those we find in an HR department. In these teams, we really do find teams of individuals rather than a single cohesive unit focused on one deliverable. While one can argue that it would be an improvement to grow the team into one with shared accountability, this is far easier said than done. Teams of individuals are the norm.

We All Want More Project Work

Project work is typically what moves a team, department, and ultimately a company, forward. Yes, the repeating work must be done to maintain current levels of service and excellence. Yes, the ad-hoc work is often very important and cannot be deferred.

Yet project work is typically where we get to add new value. We get to create something that didn’t exist before, whether it is software, an education curriculum, a new HR program, or perhaps the revamping of an old system.

Project work (sometimes ad-hoc) is where we get to exercise autonomy, mastery, and purpose. Predictably, it is the work that most people find fulfilling. It also happens to be the work Scrum is most adept at managing. Repeating and ad-hoc work are typically things that take away from a Scrum team’s capacity (whether we track it in the Sprint or not).

Lightweight Scrum for Teams of Individuals

Scrum perfectionists, beware: I am about to run an Extract Interface on Scrum, leaving a few bits behind for software teams to implement.

We’ve used a lightweight Scrum-like process for managing our family’s work for many years and I’ve found that applying similar practices to non-software teams tends to work well as a starting point. Here is my base formula for an agile practice in environments like these.

  1. Plan work together no less often than every 3 weeks. Focus on project work first while doing so. Leave capacity for ad-hoc work.
  2. Show your work to each other and stakeholders at a Sprint Review. Solicit feedback.
  3. The team reflects on itself and how it does its work each Sprint.
  4. The team keeps a visual representation of the Sprint Plan current at all times. This is typically a wall of cards, Trello, or some other lightweight approach.

Note this is a starting point and not an ending point. Over time, we hope to see more cross-training and generalization in the team. We hope to see fewer incidents of ad-hoc work. We hope the team will actually learn to self-organize. All of these things can indeed happen and more.

There are lots of places to go once you have these basic elements in place.

3 Reasons to Sprint

Several teams I’ve worked with wonder why it is important to finish work within the Sprint timebox. “This is an artificial container for my work,” they explain. “Why can’t our work just take the time it takes? Why must it get all the way done in one Sprint?” In my experience, these frustrated teams are likely looking to move away from poorly implemented Scrum toward what will be disastrously implemented Kanban and just-in-time work processing.

Sprints offer three things that just-in-time work processing does not.

Focus

Software development teams that process work in a just-in-time manner often find a lack of focus. The Sprint container is a way to provide enough room for the creative work software developers do. In other words, time must be provided for developers to focus instead of worrying about the next piece of work they must pull in order to decrease cycle times.

Slack

Slack time is a fundamental ingredient of innovation. Healthy Scrum teams create slack time by pulling a little less work into the Sprint than they have the capacity to complete. When developers are working ticket to ticket in a continuous flow model, slack is considered waste and teams will strive to eliminate it, bringing innovation to a grinding halt.

With little slack time to grow the capabilities of the team through automation or introducing new practices, the team can stagnate and fail to improve over time.

Prescribed Inspection and Adaptation

When teams stop observing Sprint boundaries for the work they do, they often soon begin to drop the ceremonies of Scrum that made them successful in the first place: Sprint Planning, Sprint Review, and Sprint Retrospectives. Each of these events inspects something and adapts something and when we stop doing them, improvements we’ve made in the past begin to fade away.

These events from Scrum are the backbone of team improvement and ongoing excellence. Losing them will immediately begin chipping away at the improvements they introduced.

Wrapping Up

There are other ways in which Sprints help teams, but these are my top three. If you are tempted to eschew Sprints for just-in-time ticket processing, don’t forget the ingredients of innovation that you risk leaving behind: focus, slack, inspection, and adaptation.

Development Teams and Operations

Many software development teams, especially those creating or supporting a shared platform or infrastructure, find themselves overwhelmed by ad-hoc work requests. A frequent response to this situation is to move from an iterative, incremental practice to Kanban or some other flow-based model.

In many situations, this is the exact opposite of what should happen. When a team finds itself foundering under the weight of system support or ad-hoc work requests, it may be time to start batching that work instead of letting it flow more easily in and out of the team.

Learn to Say “Not Now”

“But, I could never do that because we have to work on the issues as soon as they come up,” I can hear you saying. “We could never tell our customers they’ll have to wait for a bug fix or system change.”

A wise person once said, “A lack of time is actually a lack of priorities.” Take a moment to ponder that statement and apply it to your context. Does your team feel responsible for closing all the issues that are brought to it? Probably. Do you (or your organization) measure your team’s success in volume of requests processed? If you are a software development team, probably not.

I get it. It is very hard to say no. So, don’t. Instead, learn to say, “Not now,” and put the work on your backlog for a future Sprint. Make the backlog extremely transparent. Be prepared to explain (a lot) why a given request is lower in the backlog than work that genuinely moves the team forward. The urgency around the issue will often dissipate before your next Sprint Planning meeting.

Making over Maintaining

As per the agile manifesto, the only real measure of success is working software. Success on a development team should never be measured in volume of tickets serviced. That sort of measure is appropriate for an operational support team.

Moving from iterative and incremental delivery models to continuous flow models is often a sign that the team no longer sees its main responsibility as creating software, but in keeping a system up and running. The move from development to operational team is self-fulfilling. In this world, the underlying system is wagging the team instead of the other way around.

If you find yourself in the situation of being expected to create new value through new software features AND being held accountable for servicing a high volume of ad-hoc requests, two things may be true.

  1. Your underlying system is chock full of technical debt. This is often why the ad-hoc requests are coming in in the first place.
  2. You are on a path to a death march. Even a patient executive will eventually demand the team focus on delivering new features or functionality instead of firefighting the old system. Unfortunately, the time available to do this will have been cut short by your unwillingness to say, “Not now,” to all the ad-hoc work you believed was unavoidable.

In a high functioning development team, the majority of time is spent looking ahead and adding new capabilities to a given system. If your team is spending less than 75% of its time doing this, you may be on the verge of “going operational” which ultimately means even less time available for new capability development. “Going operational” often represents defeat for a development team. That isn’t to say that you shouldn’t embrace a DevOps mentality and support your system in production. It just means that you should have more time available for adding new capabilities than servicing ad-hoc requests and defect management.

Learning F# – The Thunderdome Principle for Functions

Back in 2008, Jeremy Miller introduced the Thunderdome Principle, a technique he used for building maintainable ASP.NET MVC applications which later led to the FubuMVC open-source project. The basic premise of the Thunderdome Principle is to have all controller methods take in one ViewModel object (or none in some cases), and also return a single ViewModel object. One object enters, one object leaves!

What I like about F# is that the language designers took a similar approach when they implemented functions. In F#, every function accepts exactly one input and returns exactly one output. This is a different approach compared to C#, where there is a distinction between functions that return values and those that don’t. This choice by the designers of the C# programming language even bleeds into the .NET framework itself through the Func and Action delegates.

But F# is having none of that! Let’s have a look at the following function and its corresponding type annotation:

let multiply x y = x * y
> val multiply : x:int -> y:int -> int

At first glance this looks like a function that takes two arguments and returns a result. But the type annotation tells us a different story. Here multiply is bound to a function that takes an integer argument “x” and returns a function. This second function takes another integer argument “y” and returns an integer.

This approach of “one input value, one output value” has several benefits. One example is that the F# compiler can assume that the last expression in a function is also the return value. Therefore we didn’t need to use the return keyword in our example.
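
To make this more tangible, here’s a small example of my own (not part of the original multiply snippet) with a function body spanning multiple expressions. The last expression is what the caller gets back, no return keyword required:

let multiplyAndAdd x y z =
    // an intermediate binding; not the return value
    let product = x * y
    // the last expression in the function body, so this is what gets returned
    product + z
> val multiplyAndAdd : x:int -> y:int -> z:int -> int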

But another major benefit is partial application. Let’s have a look at the following code:

let multiplyByTen = multiply 10
> val multiplyByTen : (int -> int)

let result = multiplyByTen 5
> val result : int = 50

Here the F# compiler evaluated the call to the multiply function as far as possible, and simply bound multiplyByTen to the function for which it didn’t receive the parameter value. So partial application only works for function arguments from left to right.
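
If you need to fix a later argument instead, a simple option is to write a small wrapper function. Here’s a sketch of my own (the divide function below is just an illustration, it isn’t part of the original example):

// a plain two-argument function
let divide x y = x / y
> val divide : x:int -> y:int -> int

// we can't partially apply the second argument directly,
// but a wrapper function fixes it for us
let divideByTwo x = divide x 2
> val divideByTwo : x:int -> int

divideByTwo 10
> val it : int = 5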

That’s all fine, but what about functions that don’t take any arguments or don’t have a value to return? This is where the unit type comes in.

let logSomethingUseful () = printfn "Hi there"
> val logSomethingUseful : unit -> unit

Here we have a function with no arguments and no return value. However, behind the scenes, unit is passed to the function and unit is also returned by it.
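
Calling this function makes that visible: the unit value is written as a pair of parentheses. Here’s a quick sketch of what the REPL shows, assuming the definition above:

// the () here is the single unit argument being passed in
logSomethingUseful ()
> Hi there
> val it : unit = ()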

In my humble opinion, this whole “one input value, one output value” approach is by far a cleaner model that is easier to understand. On several occasions while developing C# code, I wished that the .NET framework provided me with only Func delegates and a first-class void type. F# grants me this wish.

Until next time.

Learning F# – Passing Parameters to Functions

One of the first issues I faced when learning F# was finding out how to specify multiple parameters to a function. While this might sound obvious when learning a functional programming language, I had a few confronting moments that forced me to unlearn things before I could make any progress.

I wanted to create a function that wrapped the Contains method of the String class. In C#, it would be implemented like this:

public class StringHelper
{
    public static Boolean Contains(String substr, String str)
    {
        return str.Contains(substr);
    } 
}

Calling this simple function is quite obvious:

StringHelper.Contains("pdf", "document.pdf")

My first attempt at writing the equivalent function in F# looked like this:

let contains (substr, str: string) = 
    str.Contains substr

I had to add a type annotation for the second argument because otherwise the F# compiler was unable to infer its type as a string. Calling this function looks like this:

contains ("pdf", "document.pdf")

All very nice. The code works and life is good. But as it turns out, there’s something going on with this implementation that I didn’t realize at the time: the contains function isn’t a function that accepts two string arguments, but one that accepts a single tuple argument of two strings!

Executing this code in the F# REPL shows the following type annotation for the contains function:

val contains : substr:string * str:string -> bool
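
To drive the point home, here’s a small experiment of my own: because the function takes a single tuple, we can construct that tuple separately and pass it in as one argument.

// one tuple value, bound to a name first
let args = ("pdf", "document.pdf")
> val args : string * string = ("pdf", "document.pdf")

// passing the tuple as the single argument of contains
contains args
> val it : bool = true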

I noticed a lot of examples where F# functions were being called without the parentheses and without commas separating the parameters. Calling the current implementation of the contains function this way gave me the following error:

contains "pdf" "document.pdf"

error FS0003: This value is not a function and cannot be applied

So instead, I removed the parentheses and comma from the function definition, like so:

let contains substr str: string = 
    str.Contains substr

The compiler then gave me the following error message:

error FS0072: Lookup on object of indeterminate type based on information prior to this program point. A type annotation may be needed prior to this program point to constrain the type of the object. This may allow the lookup to be resolved.

I did quite a bit of head scratching before I was finally able to figure this out. The second parameter needs to be enclosed in parentheses together with its explicit type annotation; written without them, the : string annotation applies to the function’s return value rather than to str, which leaves the type of str undetermined.

let contains substr (str: string) = 
    str.Contains substr

contains "pdf" "document.pdf" 

This time the F# REPL shows the following type annotation for the contains function:

val contains : substr:string -> str:string -> bool
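
As a bonus, the curried form brings partial application along with it, just like the multiply example earlier. A quick sketch of my own:

// fix the first argument, leaving a function that still wants the string to search
let containsPdf = contains "pdf"
> val containsPdf : (string -> bool)

containsPdf "document.pdf"
> val it : bool = true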

That curried signature wasn’t quite what I expected at the time. As a developer who mostly writes C# and JavaScript code, I noticed how deeply comma-separated parameter/argument lists inside parentheses were ingrained in the way I read and write code. Even when dabbling with Clojure in my spare time, there was no shortage of parentheses either. And when writing Ruby code, I held on to the habit of using comma-separated parameter/argument lists enclosed in parentheses. I told myself that this would improve the readability of my code, but in fact it was my brain trying to keep me in my comfort zone.

At this point I’m quite comfortable with this syntax in F#. But it definitely took some time getting used to not adding parentheses and commas all over the place. I must say that the Troubleshooting F# page on Scott Wlaschin’s website was a great help!

Until next time.

The Burden of Features in Software

I’ve been removing a couple of dead features this week. You know, those features that senior people in organisations like to tell epic war stories about. Those mighty conversations at dinner parties, where someone involved talks about all the pain and sorrow: how a particular capability ended up in the software, how (crappily) it got implemented, and so on. Everyone at the table is laughing, and some of those who were also there add more personal anecdotes to make the story more tragic.

But the good news is that these features are no longer relevant to the end-users. In some cases, these features don’t even work correctly anymore. But they are still there to haunt you as a software developer. They wander throughout the application as ghosts. Usually they are totally at odds with the current architecture and/or design. Whenever I come across something like that in the code, I turn it into a personal crusade to get the code out as soon as possible. Removing dead weight is just as important in our profession as adding new code. But how on earth did this feature get into the code base in the first place?

In a previous post I wrote about product and project focused software. In all cases, adding features to software adds a burden. There’s not only the cost of implementing and deploying it, but there’s also the (hidden) cost of maintaining it and further evolving it as the capabilities of the software grow or as the architecture evolves over time. Then there’s also the added complexity for the end-users, the need for documentation, and so on.

In my experience, there tends to be more awareness in organisations where there is some form of product-focused mindset compared to “project” organizations. In “project” organizations, a particular feature might be necessary in order to succeed with a project or two. Afterwards, there might no longer be a need for this particular capability, but a (new) need for something else might arise. They pile up feature after feature after feature until they end up with a Big Ball of Mud. In these kinds of organizations there is no threshold for adding new features.

I consider being cautious before adding new features a good thing. Capabilities that are harvested over time from actual needs tend to be more useful anyway, compared to “we need this thing, and we need it by yesterweek”. As described in my previous post, things are usually not that absolute. Most organizations sit somewhere between completely “product” focused and completely “project” focused. But the really excellent ones refuse to add new capabilities to their software until not adding them becomes almost irresponsible.

The thing is that we should create awareness, both in ourselves and in our stakeholders, that crafting software is quite a fragile process.