Here’s the situation: a couple of months ago we started developing according to a new architecture. Obviously, you need infrastructure code for this. For the first project, we simply put the infrastructure code into the project’s solution and everything was easy. We could make changes as we needed them, and it enabled us to ‘grow’ the infrastructure into what we really needed. Then came the second project. I was reluctant to extract the infrastructure code into a separate reusable assembly because I felt it would lead to less flexibility to make changes. So I copied the infrastructure classes into the new project. Obviously, some changes were made in the classes of the second project which weren’t ported back to the classes of the first project. Add another project or two, and you can see the problem 🙂
So now we’re trying to figure out how best to move forward. I’ve got three options in mind:
- Infrastructure code as a separate project, binary ‘framework’ dependency per ‘client’ project
- Infrastructure code as a separate project, ‘framework’ dependency (in source form) per project (as in: copying the code of a specific version of the ‘framework’ into the project’s own repository)
- Each project just contains the infrastructure code in their own project and there is no specific ‘framework’
The way I see it, each approach has its pros and cons:
- Infrastructure code as a separate project, binary ‘framework’ dependency per ‘client’ project
  - Pros
    - The code only has to be maintained in one place
    - Everybody can benefit from changes
  - Cons
    - Can make debugging harder because you can’t step into the framework code
    - Requires a lot of discipline for versioning and distributing updates to ‘client’ projects
    - The infrastructure code needs a lot of extensibility points so each application can add extra functionality
- Infrastructure code as a separate project, ‘framework’ dependency (in source form) per project (as in: copying the code of a specific version of the ‘framework’ into the project’s own repository)
  - Pros
    - Does not have the debugging issue
    - Code only has to be maintained in one place (in theory)
    - Everybody can benefit from changes
  - Cons
    - The infrastructure code needs a lot of extensibility points so each application can add extra functionality
    - If people change the infrastructure code in their project, all changes should be sent upstream to the ‘real’ infrastructure repository, or extension points need to be provided in the original infrastructure code so upgrades of the infrastructure library still offer the same possibilities for the specific project
    - Still requires versioning discipline, although it probably wouldn’t need to be as strict as with option 1
- Each project just contains the infrastructure code in its own project and there is no specific ‘framework’
  - Pros
    - Highly flexible… each project can freely make changes so the infrastructure behaves exactly as it needs to for that project
  - Cons
    - Leads to multiple ‘versions’ of many of the classes… when a new project starts, which version of each class should be used?
    - Starting a new project involves boring set-up work which is basically just copy/pasting existing classes from previous projects
The reason I’m posting this is that I’d love to get your feedback… what other pros and cons can you think of for each approach? Which approach would you recommend? Is there another approach we haven’t thought of?
Hi,
We use option 1. It is the only way for us to ensure security requirements are fulfilled.
Bye,
Xav
The first option is going to be the best. The only real issue I’ve found doing this is how it integrates with TFS and pulling the binaries out so you can link them into other solutions.
This follows the same model as .NET, except their stuff is placed in the GAC. It can’t be all that bad. 🙂
/paul
Software Architect
Use option #1, although someone will probably try to get you to convert the stuff to services instead of binaries due to the current fad of misapplying SOA.
“I was reluctant to extract the infrastructure code in a separate reusable assembly because i felt it would lead to less flexibility to make changes.”
I would also submit that this is the root problem you need to address. There is almost always a better way to go than copying the source code into the new project. The moment you have a second project that is going to reuse the infrastructure code is actually the perfect moment to start creating the shared assembly to house it.
Good post. I think these “this is the stuff we run into in our day-to-day work, what is the best way to handle it?” posts are among the most helpful things you find in the blogosphere.
“misapplying SOA.”
Probably a little too provocative and a poor choice of wording. It should probably be something more like “using services where possible”.
Well, the reason I was reluctant to put it in a separate library at the time was that we were still making modifications rather frequently.
At a previous job I was responsible for maintaining the ‘one framework to rule them all’, and it was such a pain in the ass to safely make changes, due to differing requirements across a couple of different types of applications, that I may have been too eager to dismiss this option in this situation (where there is less complexity than at the previous job).
But the other alternatives also seem to lead to lots of PITA.
“Well, the reason I was reluctant to put it in a separate library at the time was that we were still making modifications rather frequently.”
That’s pretty much exactly the case where you would not want the code duplicated, though.
We use option #1. That is really the only way to go.
To make things simpler across many machines, we create a system environment variable and point it to our “root tools” folder. In subfolders (e.g. xunit, my.framework, etc.) we place the specific binary assemblies as appropriate. Finally, in our project files we add references to the files where the hint path is as follows:
`<HintPath>$(ToolsEnvVarName)\XUnit\xunit.dll</HintPath>`
Now this works on all dev machines, and the build machine too.
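As a sketch of what this looks like in full (the environment variable name `ToolsEnvVarName` and the folder layout are just placeholders for whatever you set up), the reference entries in a `.csproj` might read:

```xml
<!-- Hypothetical .csproj fragment. ToolsEnvVarName is a system
     environment variable pointing at the shared "root tools" folder;
     MSBuild expands $(ToolsEnvVarName) when resolving the HintPath. -->
<ItemGroup>
  <Reference Include="xunit">
    <HintPath>$(ToolsEnvVarName)\XUnit\xunit.dll</HintPath>
  </Reference>
  <Reference Include="My.Framework">
    <HintPath>$(ToolsEnvVarName)\My.Framework\My.Framework.dll</HintPath>
  </Reference>
</ItemGroup>
```

Because the path is resolved from the environment variable, each machine (dev or build) can keep its tools folder wherever it likes.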
@Eddie
I actually prefer to use a ‘lib’ folder for each project where I keep all of the binary dependencies… dependencies are something I’d much rather upgrade manually than getting caught off guard by some kind of automatic update which could break things unexpectedly
@Davy
Yours is a valid option, but if you are trying to create a reusable framework, used on more than one project, then having one true source of dependencies is the way to go.
We currently have over 20 different applications running off of one framework. The fact that they all reference the same libs means that our build process can build a new framework version, and all apps on that framework, and tell us when we did something stupid.
The other thing is we have a ‘Deploy’ target in our framework project files, so no accidental auto-deployment happens. Crucial when working on the framework libs themselves.
Option 1.
I’ve recently seen option 2 attempted; it quickly degraded into something like option 3 and went downhill from there.
I generally tread carefully around the concept of a “framework”. I prefer more of a common component library or service provider approach. Frameworks are like closet organizers. They’re conceived to be infinitely flexible, built to the dimensions of a rather confined space, and before you know it you’ve got “something” but with nowhere to put your socks. 😉
Sorry, but I don’t quite understand the options you mention.
Option 1) Do you mean that
the infrastructure code is in a project on its own, you build it, and in your multiple solutions you add a reference to the built assembly?
Option 2) Do you mean that
the infrastructure code is in a project on its own, but you include the project and its source in your multiple solutions and build the infrastructure with the rest of each solution?
@Karsten
yes and yes 🙂
I’d go for option 1.
You could set up a source server to avoid the debugging problem.
@StevePy: “I prefer more of a common component library or service provider approach.”
We started off looking at the Global Application Framework To End All Frameworks but quickly realized that this wasn’t going to be productive – and instead changed it into a collection of common library components, minimizing or removing dependencies between them. So you can pick and choose the components à la carte as suitable for the project. For example, assemblies for Windsor or for Spring.Net.
This reduces (but does not eliminate) one of your cons for option 1 – “The infrastructure code has to have a lot of extensibility points”.
It also helps with a problem of the Global Framework – you have Project X that could use 99% of what’s in the GloboFramework, but due to one little thing, like using a slightly older build of NHibernate, you can’t include the GloboFramework dependencies at all.
If you go with option 1, there is a quick way to debug the framework code. Whenever you want to step into framework code, simply overwrite all of the framework’s Release output (in bin/Release) with the Debug output (from bin/Debug). This way, you don’t have to make any reference path changes in your projects and you can still benefit from debugging.
It is entirely possible that this only works because of the way we reference our common code: ‘client’ projects reference the build output of our common code. We don’t check the binaries into source control, so it means each developer has to build the common project. This, however, comes with its own bag of issues.
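A minimal sketch of that overwrite step, assuming a hypothetical framework project layout (`MyFramework/bin/Debug` and `MyFramework/bin/Release` are placeholders for your actual output paths):

```shell
# Hypothetical paths -- adjust to your framework project's layout.
# Copy the framework's Debug assemblies and symbols over the Release
# output that client projects reference, so the debugger can step
# into the framework code without any reference path changes.
cp MyFramework/bin/Debug/*.dll MyFramework/bin/Release/
cp MyFramework/bin/Debug/*.pdb MyFramework/bin/Release/
```

Remember to do a clean Release build of the framework afterwards, or you’ll keep shipping Debug binaries by accident.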
Option 1, clearly. In Java projects, use Maven as the build system, and version different builds of your infrastructure project in your local Maven repository. Dependent projects can specify a dependency on whatever version of the infrastructure project they require in their master POM definition file and everything just works.
I have the same problem. I still haven’t made a decision, but I’m thinking about this (Java):
1. Integration code: code to ‘glue’, say, Wicket to Spring to Hibernate. Separate project, new projects just import the binary. Should be minimal and stable. No forks.
2. Common components: rich components that don’t exist in open source libraries. Separate project, may import (1); new projects import the binary. Grows as time goes on, without affecting the core (1). New projects may fork components to adapt them to their contexts, if really necessary.
3. Common code that is very likely to change: base templates, configuration, css files, etc. Separate project, copied source. If I use Maven as build tool, it may be an archetype (a kind of quickstart code generator).
This structure keeps the core stable, which makes the chances of breaking stuff lower. Components are shared, but any major flexibility needs are satisfied by forking some code. These components should be kept as ‘vertical’ as possible, without too many dependencies between them. And the code that every project has to write again and again (very similar, but tweaked to the specific application) is just copied and never breaks other projects.
It’s very tempting to just say “Centralize! DRY! Are you stupid or something? It’s so obvious!”, but dependencies and version control are not problems you can take lightly. And keeping ‘one framework to rule them all’ IS a very difficult and painful task. But sometimes necessary. My answer is to make this framework as small as possible, to minimize the pain.
Well, for now, I guess 🙂