28 Nov 2008

Genesis: Bridging The Gap Between Requirements And Code


I just read the following post by Robert C. Martin. In it, he says the following:

If you can enumerate the states, and events, then you know the number of paths through the system. So if Given/When/Then statements are truly nothing more than state transitions, all we need to do is enumerate the number of GIVENs and the number of WHENs. The number of scenarios will simply be the product of the two.

To my knowledge, (which is clearly inadequate) this is not something we’ve ever tried before at the level of a business requirements document. But even if we have, the BDD mindset may make it easier to apply. Indeed, if we can formally enumerate all the Givens and Whens, then a tool could determine whether our requirements document has executed every path, and could find those paths that we had missed.

The company I work for (Item Solutions) actually has a tool for this: Genesis. After reading Robert’s post and discussing it here at the office, we decided to introduce this tool to a wider audience.

Genesis is more than just a tool. It is a methodology that we’ve been using on our internal projects for over two years now, and that we are starting to commercialize. Before I start explaining how it all works, I’d like to show you how we can monitor the coverage of the requirements:

This is the global overview of one of our projects where we use the Genesis methodology. Each circle represents a functional module of the application we’re working on. Some of these circles are entirely green, which means that the requirements of that functional module are completely covered. Some of the circles still have a red part in them. This shows us that some of the requirements for that part have not been implemented yet.
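The proportions behind those green and red portions presumably boil down to a simple ratio of covered to total requirements per module. Here is a minimal sketch in Java; the module names and counts are made up for illustration, and Genesis itself may compute this differently:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ModuleCoverage {

    // Fraction of a module's requirements that are covered by passing tests.
    // An empty module is treated as fully covered.
    public static double coverageRatio(int covered, int total) {
        return total == 0 ? 1.0 : (double) covered / total;
    }

    public static void main(String[] args) {
        // Hypothetical modules: {covered requirements, total requirements}.
        Map<String, int[]> modules = new LinkedHashMap<>();
        modules.put("Project Management", new int[]{18, 20}); // still has a red part
        modules.put("Reporting", new int[]{12, 12});          // entirely green

        for (Map.Entry<String, int[]> e : modules.entrySet()) {
            double ratio = coverageRatio(e.getValue()[0], e.getValue()[1]);
            System.out.printf("%s: %.0f%% covered%n", e.getKey(), ratio * 100);
        }
    }
}
```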

The Genesis web application allows you to click on each circle to drill down into that functional module. In the screenshot, you can see that the circle for the Project Management module still has a red part in it. As we drill down into the module and the parts it consists of, we can get to the following view:

As we can see here, a couple of use cases within this part of the application are entirely green. We also have two that are entirely red, which means we haven’t done anything for those use cases yet. And then we have a couple which are mainly green, but still have a bit of red in them. If I click on one of those use cases, this is what shows up on my screen:

As you can see, there are a couple of business requirements which are green, and below them we list the automated tests which cover each specific requirement. You can also see a red requirement, and the tool tells us that there are no tests covering that specific requirement.

That’s just a small overview of how we can monitor the development progress of our projects (there’s actually a lot more that we can see, but I might cover that in future posts about Genesis).

By now, you’re probably wondering: so how does it all work? Obviously, I can’t give away too many implementation details, but I can tell you how we use the tool and the methodology.

It all starts with a requirements document, obviously. The requirements of the use case I showed earlier are written down like this:

Every time our continuous integration build runs, Genesis analyzes all of the test results and all of the data in our requirements document. In the Genesis tool, we can click on each business requirement to generate a specific code attribute for that requirement. When we write our tests, we put those attributes on top of the test methods.

When Genesis analyzes the results and requirements, it processes the links between the requirements and the tests. If all of the tests that are linked to a requirement succeed, the requirement shows up in green. If there are no tests for a requirement, or if the existing tests fail, the requirement shows up in red with some information about the problem: Genesis will tell us either that the tests for that requirement are no longer working, or that they are missing. Not only that, it will also notify us if the requirement has changed since the test was written.
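To give a feel for what such a requirement attribute could look like, here is a minimal sketch in Java. The annotation name `CoversRequirement`, the requirement ids, and the reflection-based collection step are all my own invention for illustration; the actual attributes are generated by the Genesis tool itself:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class RequirementCoverageSketch {

    // Hypothetical annotation: Genesis generates something comparable
    // per business requirement.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface CoversRequirement {
        String value(); // requirement id from the requirements document
    }

    // Example test class whose methods are tagged with requirement ids.
    static class ProjectManagementTests {
        @CoversRequirement("PM-4.2")
        public void creatingAProjectAssignsAUniqueId() { /* test body */ }

        @CoversRequirement("PM-4.3")
        public void deletingAProjectArchivesItsTasks() { /* test body */ }

        public void unrelatedHelper() { /* not linked to any requirement */ }
    }

    // After a build, a tool could reflect over the test classes to collect
    // which requirements have at least one linked test.
    static List<String> coveredRequirements(Class<?> testClass) {
        List<String> ids = new ArrayList<>();
        for (Method m : testClass.getDeclaredMethods()) {
            CoversRequirement link = m.getAnnotation(CoversRequirement.class);
            if (link != null) {
                ids.add(link.value());
            }
        }
        return ids;
    }

    public static void main(String[] args) {
        List<String> covered = coveredRequirements(ProjectManagementTests.class);
        Collections.sort(covered); // reflection does not guarantee method order
        System.out.println(covered); // prints [PM-4.2, PM-4.3]
    }
}
```

Matching these ids against the requirements document, and against the pass/fail status of each test, is then a straightforward join on the requirement id.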

All of this gives us a tremendous amount of information and feedback, which is easily consulted by developers, analysts, project managers, clients, and pretty much everyone who’s been given access to the data. It’s also possible for a developer to ‘comment’ on the requirements; those comments are shown by the tool as well, so the analyst and/or the project lead immediately knows that something might be wrong with the requirements. This enables an effective line of communication when team members aren’t sitting together in the same physical location. We actually use this methodology with remote team members as well, and we’ve had fewer communication-related delays ever since we started doing this.

I would also like to stress that this methodology can be a supplement to your existing process, instead of an “all-or-nothing” approach. It imposes no restrictions on the way you write your automated tests. You can place the requirement attributes on your unit tests, on integration tests, or on whatever other kind of tests you write. All we need is data from the test results, which we then match to the data in the requirements document.

Also, I’d like to note that we currently only support .NET and Java, but the system is open for extension, so we can add support for other development environments as well.

Finally, you can get some more information about Genesis here.

3 thoughts on “Genesis: Bridging The Gap Between Requirements And Code”

  1. This looks very pretty – cool factor 100% 🙂

    I like the fact that the diagram depicts project scope at a high level. I also like how the relative size of the bubbles indicates module size. Are the units of size relative to LOC or use case points (effort)?

    Sticking my Tufte hat on, I have a few questions. Are you intentionally keeping the diagrams quite sparse? I feel like I want to see, for example, how much effort & schedule remains on these modules.

    Had you considered putting the data in a less funky tabular format, to help make it more concise and to make comparison easier?

    Total respect for the level of traceability you’ve got going on. I was thinking about how I’d get traceability between use cases and code the other day, and it looks like you’ve got a great solution.

  2. I have to agree with the coolness factor of this approach. This strikes me as primarily a visualization of the mapping between implemented functionality and business requirements.

    Given the association between complete business requirement and passing tests, do the visualizations show 100% completion as the tests are being added incrementally?

    For instance, if I have only one test, but it is passing, does the visualization show the feature as passing? Just curious. This is a very hard problem to solve, as it gets to the idea of bridging the gaps between code and requirements with understanding of implementation on both sides.

    Cool stuff, Davy.

  3. @Tobin

    The size of the bubbles is currently not related to the module size or anything like that… the layout just tries to fit all of the bubbles on the screen as clearly as possible. That’s a good idea though.

    A tabular format was never really considered, because the tool not only had to be valuable, it certainly had to be cool… the ‘wow factor’ was a requirement :p

    I don’t know if you’ve looked at the linked PDF (at the bottom of the post), but there are some more graphs in there… for instance, there’s one where you can compare the uncovered requirements with the covered requirements on a timeline.

    @David

    Yes, if you only have one test linked to a requirement then the requirement will be green.
    We’ve kinda made a sport out of linking as many good tests as possible to each requirement though 😉

    We have thought about adding a way to assign some sort of coverage percentage to a specific test, but we haven’t really come up with a good solution for that yet.
