3 REASONS TO SPRINT

Several teams I’ve worked with wonder why it is important to finish work within the Sprint timebox. “This is an artificial container for my work,” they explain. “Why can’t our work just take the time it takes? Why must it get all the way done in one Sprint?” In my experience, these frustrated teams are likely looking to move away from poorly implemented Scrum toward what will be disastrously implemented Kanban and just-in-time work processing.

Sprints offer three things that just-in-time work processing does not.

Focus

Software development teams that process work in a just-in-time manner often struggle to maintain focus. The Sprint container is a way to provide enough room for the creative work software developers do. In other words, time must be provided for developers to focus instead of worrying about the next piece of work they must pull in order to decrease cycle times.

Slack

Slack time is a fundamental ingredient of innovation. Healthy Scrum teams create slack time by pulling in a little less work than they have the capacity to complete. When developers are working ticket to ticket in a continuous flow model, slack is considered waste and teams will strive to eliminate it, bringing innovation to a grinding halt.

With little slack time to grow the capabilities of the team through automation or the introduction of new practices, the team can stagnate and fail to improve over time.

Prescribed Inspection and Adaptation

When teams stop observing Sprint boundaries for the work they do, they soon begin to drop the ceremonies of Scrum that made them successful in the first place: Sprint Planning, Sprint Review, and the Sprint Retrospective. Each of these events inspects something and adapts something, and when we stop doing them, improvements we’ve made in the past begin to fade away.

These events from Scrum are the backbone of team improvement and ongoing excellence. Losing them will immediately begin chipping away at the improvements they introduced.

Wrapping Up

There are other ways in which Sprints help teams, but these are my top three. If you are tempted to eschew Sprints for just-in-time ticket processing, don’t forget the ingredients of innovation that you risk leaving behind: focus, slack, inspection, and adaptation.

DEVELOPMENT TEAMS AND OPERATIONS

Many software development teams, especially those creating or supporting a shared platform or infrastructure, find themselves overwhelmed by ad-hoc work requests. A frequent response to this situation is to move from an iterative, incremental practice to Kanban or some other flow-based model.

In many situations, this is the exact opposite of what should happen. When a team finds itself foundering under the weight of system support or ad-hoc work requests, it may be time to start batching that work instead of letting it flow freely in and out of the team.

Learn to Say “Not Now”

“But, I could never do that because we have to work on the issues as soon as they come up,” I can hear you saying. “We could never tell our customers they’ll have to wait for a bug fix or system change.”

A wise person once said, “A lack of time is actually a lack of priorities.” Take a moment to ponder that statement and apply it to your context. Does your team feel responsible for closing all the issues that are brought to it? Probably. Do you (or your organization) measure your team’s success in volume of requests processed? If you are a software development team, probably not.

I get it. It is very hard to say no. So, don’t. Instead, learn to say, “Not now,” and put work on your backlog for a future Sprint. Make the backlog extremely transparent. Be prepared to explain (a lot) why a given request sits lower in the backlog than work that genuinely moves the team forward. The urgency around the issue will often dissipate before your next Sprint Planning meeting.

Making over Maintaining

As per the agile manifesto, the only real measure of success is working software. Success on a development team should never be measured in volume of tickets serviced. That sort of measure is appropriate for an operational support team.

Moving from iterative and incremental delivery models to continuous flow models is often a sign that the team no longer sees its main responsibility as creating software, but as keeping a system up and running. The move from development to operational team is self-fulfilling. In this world, the underlying system is wagging the team instead of the other way around.

If you find yourself in the situation of being expected to create new value through new software features AND being held accountable for servicing a high volume of ad-hoc requests, two things may be true.

  1. Your underlying system is chock-full of technical debt. This is often why the ad-hoc requests are coming in in the first place.
  2. You are on a path to a death march. Even a patient executive will eventually demand the team focus on delivering new features or functionality instead of firefighting the old system. Unfortunately, the time available to do this will have been cut short by your unwillingness to say, “Not now,” to all the ad-hoc work you believed was unavoidable.

In a high-functioning development team, the majority of time is spent looking ahead and adding new capabilities to a given system. If your team is spending less than 75% of its time doing this, you may be on the verge of “going operational”, which ultimately means even less time available for new capability development. “Going operational” often represents defeat for a development team. That isn’t to say that you shouldn’t embrace a DevOps mentality and support your system in production. It just means that you should have more time available for adding new capabilities than for servicing ad-hoc requests and defect management.

Learning F# – The Thunderdome Principle for Functions

Back in 2008, Jeremy Miller introduced the Thunderdome Principle, a technique he used for building maintainable ASP.NET MVC applications which later led to the FubuMVC open-source project. The basic premise of the Thunderdome Principle is to have all controller methods take in one ViewModel object (or none in some cases), and also return a single ViewModel object. One object enters, one object leaves!
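
Sketched in F# (my own illustration, anticipating the rest of this post, and using hypothetical model names), a Thunderdome-style endpoint is simply a function from one input model to one output model:

// Hypothetical input and output models, just to illustrate the shape.
type ProductQuery = { ProductId : int }
type ProductViewModel = { Name : string; Price : decimal }

// One object enters, one object leaves.
let getProduct (query : ProductQuery) : ProductViewModel =
    { Name = sprintf "Product %d" query.ProductId; Price = 9.99m }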

What I like about F# is that the language designers took a similar approach when they implemented functions. In F#, every function accepts exactly one input and returns exactly one output. This differs from C#, where there is a distinction between functions that return values and those that don’t. This choice by the designers of the C# language even bleeds into the .NET framework itself in the form of the Func and Action delegates.

But F# is having none of that! Let’s have a look at the following function and its corresponding type annotation:

let multiply x y = x * y
> val multiply : x:int -> y:int -> int

At first glance this looks like a function that takes two arguments and returns a result. But the type annotation tells us a different story. Here multiply is bound to a function that takes an integer argument “x” and returns a function. This second function takes another integer argument “y” and returns an integer.

This approach of “one input value, one output value” has several benefits. One example is that the F# compiler can assume that the last expression in a function is also its return value, which is why we didn’t need a return keyword in our example.

But another major benefit is partial application. Let’s have a look at the following code:

let multiplyByTen = multiply 10
> val multiplyByTen : (int -> int)

let result = multiplyByTen 5
> val result : int = 50

Here the F# compiler evaluated the call to the multiply function as far as it could, and simply bound multiplyByTen to the function for which it didn’t receive a parameter value. Note that partial application only works for function arguments from left to right.
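
The multiply example hides part of the story because multiplication is commutative, so here is a small sketch of my own using subtraction: the leftmost argument is the one that gets baked in first, and fixing a later argument instead requires an explicit lambda.

let subtract x y = x - y
> val subtract : x:int -> y:int -> int

// Partially applying the first argument works directly:
let subtractFromTen = subtract 10
> val subtractFromTen : (int -> int)

// Fixing the second argument requires a lambda:
let subtractTen = fun x -> subtract x 10
> val subtractTen : x:int -> int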

That’s all fine, but what about functions that don’t take any arguments or don’t have a value to return? This is where the unit type comes in.

let logSomethingUseful () = printfn "Hi there"
> val logSomethingUseful : unit -> unit

Here we have a function with no arguments and no return value. However, behind the scenes, unit is passed to the function and unit is also returned by it. Note the () in the definition: without it, the right-hand side would execute immediately and logSomethingUseful would be bound to a plain unit value instead of a function.
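
Calling the function makes this visible: the () literal at the call site is the single unit value being passed in, and a unit value comes back out.

logSomethingUseful ()
> Hi there
> val it : unit = ()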

In my humble opinion, this whole “one input value, one output value” approach is by far a cleaner model and one that is easier to understand. On several occasions while developing C# code, I’ve wished that the .NET framework provided me with only Func delegates and a first-class void type. F# grants me this wish.

Until next time.

Learning F# – Passing Parameters to Functions

One of the first issues I faced when learning F# was finding out how to specify multiple parameters to a function. While this might sound trivial for a functional programming language, I had a few humbling moments that forced me to unlearn things before I could make any progress.

I wanted to create a function that wrapped the Contains method of the String class. In C#, it would be implemented like this:

public class StringHelper
{
    public static Boolean Contains(String substr, String str)
    {
        return str.Contains(substr);
    } 
}

Calling this simple function is quite obvious:

StringHelper.Contains("pdf", "document.pdf")

My first attempt at writing the equivalent function in F# looked like this:

let contains (substr, str: string) = 
    str.Contains substr

I had to add a type annotation for the second argument because otherwise the F# compiler was unable to infer its type as a string. Calling this function looks like this:

contains ("pdf", "document.pdf")

All very nice. The code works and life is good. But as it turns out, there was something going on with this implementation that I didn’t realize at the time: the contains function isn’t a function that accepts two string arguments, but one that accepts a single tuple argument of two strings!

Executing this code in the F# REPL shows the following type annotation for the contains function:

val contains : substr:string * str:string -> bool
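
One way to make the tuple nature explicit (a small illustration of my own): the argument can be constructed up front as a single tuple value and then passed to the function.

let args = ("pdf", "document.pdf")
contains args
> val it : bool = true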

I noticed a lot of examples where F# functions were being called without parentheses and without commas separating the parameters. Calling the current implementation of the contains function this way gave me the following error:

contains "pdf" "document.pdf"

error FS0003: This value is not a function and cannot be applied

So instead, I removed the parentheses and comma from the function definition like so:

let contains substr str: string = 
    str.Contains substr

The compiler then gave me the following error message:

error FS0072: Lookup on object of indeterminate type based on information prior to this program point. A type annotation may be needed prior to this program point to constrain the type of the object. This may allow the lookup to be resolved.

I had quite some head scratching going on before I was finally able to figure this out. The second argument needs to be enclosed in parentheses because of the explicit type annotation; without the parentheses, the annotation applies to the return type of the function instead of the str parameter.

let contains substr (str: string) = 
    str.Contains substr

contains "pdf" "document.pdf" 

This time the F# REPL shows the following type annotation for the contains function:

val contains : substr:string -> str:string -> bool

This wasn’t quite what I expected at the time. As a developer who mostly writes C# and JavaScript, I noticed how deeply ingrained comma-separated parameter and argument lists within parentheses were in my ability to read and write code. Even when dabbling with Clojure in my spare time, I had no shortage of parentheses either. Even when writing Ruby code, I held on to the habit of using comma-separated argument lists enclosed in parentheses. I told myself that this would improve the readability of my code. But in fact it was my brain trying to keep me in my comfort zone.

At this point I’m quite comfortable with this syntax in F#. But it definitely took some time getting used to not adding parentheses and commas all over the place. I must say that the Troubleshooting F# page on Scott Wlaschin’s website was a great help!

Until next time.

The Burden of Features in Software

I’ve been removing a couple of dead features this week. You know, those features that senior people in organizations like to tell epic war stories about. Those mighty conversations at dinner parties, where someone involved talks about all the pain and sorrow: how a particular capability ended up in the software, how (badly) it got implemented, and so on. Everyone at the table is laughing, and some of those who were also there add more personal anecdotes to make the story even more tragic.

But the good news is that these features are no longer relevant to the end-users. In some cases, they don’t even work correctly anymore. Yet they are still there to haunt you as a software developer, wandering through the application like ghosts. Usually they are totally at odds with the current architecture and/or design. Whenever I come across something like that in the code, I turn it into a personal crusade to get the code out as soon as possible. Removing dead weight is just as important in our profession as adding new code. But how on earth did this feature get into the code base in the first place?

In a previous post I wrote about product and project focused software. In all cases, adding features to software adds a burden. There’s not only the cost of implementing and deploying a feature, but also the (hidden) cost of maintaining it and further evolving it as the capabilities of the software grow or as the architecture evolves over time. Then there’s the added complexity for the end-users, the need for documentation, and so on.

In my experience, there tends to be more awareness of this in organizations with some form of a product focused mindset than in “project” organizations. In a “project” organization, a particular feature might be necessary in order to succeed with a project or two. Afterwards, there might no longer be a need for this particular capability, but a (new) need for something else might arise. Such organizations pile up feature after feature after feature until they end up with a Big Ball of Mud. In these kinds of organizations there is no threshold for adding new features.

I consider being cautious before adding new features a good thing. Capabilities added over time, harvested from actual needs, tend to be more useful than those born from “we need this thing, and we need it by yesterweek”. As described in my previous post, things are usually not that absolute. Most organizations sit somewhere between completely “product” focused and completely “project” focused. But the really excellent ones refuse to add new capabilities to their software until not adding them becomes almost irresponsible.

The thing is that we should create awareness, both in ourselves and in our stakeholders, that crafting software is quite a fragile process.

Product or Project Focused

A software development team in an organization should be able to focus on the core domain that reflects the business it’s serving. Developers on the team should be able to iterate and further refine the domain model based on the evolving input and feedback of the domain experts. The business people, domain experts and developers treat the software as a product, evolving it over a long period of time for as long as it provides real value.

Sounds like an ideal world. As we’ve all experienced during our careers, reality is almost never that shiny. Lots of businesses don’t see their software as a product. Instead they pride themselves on being “project” organizations, always ready to sacrifice long-term thinking and quality for impossible deadlines. The result is usually a code-base full of technical debt and littered with project-specific features. This is what we developers call the Big Ball of Mud.

What I just described here are two complete opposites. There are lots of businesses that have a product mindset, and there are lots that have a project mindset. But most organizations sit somewhere in between. In these organizations there is some kind of balance.

A while ago, fellow Elegant Coder David Starr wrote one of the best articles I’ve read in quite some time which is related to this topic. I urge you to read this excellent writing at least a couple of times.

“As per the agile manifesto, the only real measure of success is working software. Success on a development team should never be measured in volume of tickets serviced. That sort of measure is appropriate for an operational support team.”

In a project organization, a development team is commonly treated as an operational support team, usually not given the right amount of time to focus on the core domain of the business.

“If your team is spending less than 75% of its time doing this, you may be on the verge of “going operational”, which ultimately means even less time available for new capability development. “Going operational” often represents defeat for a development team.”

As with all things in life, there should be a nice balance between product and project thinking with a majority of the focus on long-term development of the core domain model. But then again, there are things that science knows, and then there are things that companies do.

Premature Abstraction

The first time I read the GoF book, I didn’t understand it, because I didn’t have a decent understanding of the principles of object-oriented programming at the time. A while later, I read the book Design Patterns Explained. In this excellent book the author distills the core thought behind the design patterns in the GoF book: when a particular concept varies, it should be encapsulated (by means of an abstract class or an interface). This was a real eye-opener for me and guided me toward understanding the design patterns described in the GoF book.

But what I also learned after several years is that introducing abstractions in code is not a free ride either. There is a time and place where you want abstractions. I used to add abstractions in domain models all over the place, as soon as I possibly could. Now I try to avoid them until the last responsible moment. Only when I gain a deeper understanding, when I start to see variations of the same concept popping up, or when concrete implementations start to cause pain am I prepared to consider adding a new abstraction.

Suppose we have the concept of a communication channel in a particular domain model. If we have more than one communication channel, say SMS, telephone and email, then an abstraction is warranted and we might introduce an ICommunicationChannel interface. But if SMS is our only communication channel, no abstraction is needed. Only when new communication channels are added over time do we refactor the code.
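
As a rough sketch of what that refactoring might look like (my own illustration, written in F# to match the earlier posts; the concrete channel types are hypothetical):

// The abstraction is introduced only once a second channel actually exists.
type ICommunicationChannel =
    abstract Send : recipient:string * message:string -> unit

type SmsChannel() =
    interface ICommunicationChannel with
        member this.Send(recipient, message) =
            printfn "SMS to %s: %s" recipient message

type EmailChannel() =
    interface ICommunicationChannel with
        member this.Send(recipient, message) =
            printfn "Email to %s: %s" recipient message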

Sometimes my brain tricks me into taking the premature abstraction path. It also happens that I postpone introducing an abstraction a bit too long. I do make mistakes. Who knew? But I have gained some awareness over the years.

The Quest for the One True Static Site Generator

When I started blogging back in 2005 I created my personal blog on Blogger. I’ve been using this excellent blogging service over the years, enjoying the luxury of not having to deal with the intricacies and complexities of hosting providers and blogging engines. Somewhere last year I started to get interested in the rise of static site generators. These days every self-respecting programming language has one or more associated static site generator tools.

What particularly piques my interest in static site generators is their promise of simplicity. Just spitting out some static HTML, CSS and JavaScript files. Fast to serve by any web server. Back to basics.

So at some point I started my quest to move away from Blogger. The tool I used for the job was Octopress, built on top of the ever popular Jekyll. This meant that I could host my personal blog on GitHub. What’s not to like?

Everything migrated smoothly and I got things set up in no time. But somehow I lost my excitement once everything went online. During the migration I constantly had to Google for answers, never having the feeling that I knew what I was doing. As soon as I started working on a new blog post, I got frustrated by the impediments it posed to my writing flow. Having a backlog of 10 years of blog posts ultimately seemed to have a negative impact on the performance of the output generation process. Who knew?

So after almost a year of not blogging, I decided to migrate again to another static site generator. This time I went for raw performance by adopting DocPad. At least, that’s what I thought at the time.

I spent a lot of time getting things to work. I chose Jade as my templating engine, but got really frustrated by the terseness of its syntax, constantly struggling with significant-whitespace issues. I’ve used Jade in a couple of other projects in the past, but I didn’t remember it causing me so much pain.

Furthermore, some DocPad plugins were generating errors and/or warnings because they were out-of-date or used parts of the internal DocPad API that had become obsolete in a more recent version. I constantly had to make compromises regarding the blog features that I wanted to implement because I couldn’t get things to work just the way I wanted. After a frustrating couple of weeks I got something going. I pushed it to a brand new server at DigitalOcean and lost interest again.

When I decided to write another blog post, I noticed how slow the generation process had become. Again the impediments and the negative impact on my writing flow. Not cool.

A couple of months ago I ran into yet another static site generator named Nanoc via One Thing Well. This one has been around for several years now. I started reading the documentation and instantly got a prototype working. Before migrating my personal blog again, I decided to use it for another static website that I created for my son’s soccer team in order to discover its features and applicability.

I totally loved how simple and easy it was to get things set up. For this website I had a couple of specific issues that I wanted to solve, like quickly adding photo albums and news items. I was able to implement a new version of the website in no time, growing ever more enthusiastic about Nanoc. After this successful venture, I decided to migrate my blog yet again, this time to Nanoc.

Having created two static websites with Nanoc, I couldn’t be happier about how everything turned out. One of the things I like the most is that I can just use plain old HTML interspersed with some erb. Layouts, partials and captures are all at your disposal.

Compilation and routing rules contained in the Rules file are pretty amazing and easy to set up. Suppose I want to set up custom routing for all the articles on my blog, using the year, the month and the slug. Here’s the code to accomplish this:

route '/articles/*' do
  # Extract the date parts and the slug from an identifier like
  # /articles/2015-01-20-some-article/.
  year, month, day, slug = /([0-9]+)\-([0-9]+)\-([0-9]+)\-([^\/.]+)/
    .match(item.identifier).captures
  "/#{year}/#{month}/#{slug}/index.html"
end

Processing markdown files and/or HTML files is as easy as adding the following compilation rule:

compile '/articles/*.{html,md}' do
  filter :erb
  filter :kramdown, :enable_coderay => false, :input => 'GFM', :hard_wrap => false
  layout '/article.*'
end

Here we first process the erb code, then the markdown syntax using the GitHub dialect, and finally render the output using the article layout. This is all I had to do in order to correctly process all legacy blog posts (HTML) as well as the newer ones (markdown).

Oh, and rendering the entire blog, including 10 years of blog posts, takes about 30 to 40 seconds on my machine, which is pretty awesome compared to Jekyll and especially DocPad.

Generating output pages from virtual items created in code is a pretty advanced scenario, but it is almost as easy to set up as the basic stuff. An example of this is generating a page for every unique tag used by a blog article.

def create_tag_pages
  get_list_of_tags.each do |tag|
    sanitized_tag_name = sanitize_tag_name(tag)

    # Create a virtual item for this tag that renders the shared
    # _tag_page partial; the item is marked as hidden so it doesn't
    # show up in listings.
    @items.create(
      "= render('_tag_page', :tag => '#{tag}')",
      { :title => "#{tag}", :tag => "#{tag}", :is_hidden => true },
      "/tags/#{sanitized_tag_name}/",
      :binary => false
    )
  end
end

If you’re in need of a decent static site generator, then look no further. Using Nanoc is an easy and, above all, very fun way to generate static HTML content.

Until next time.