I had a comment on my last post, Tips for ORM Data Access, which I would like to address with this blog post.
I have been trying to wrap my head around the role of the DTO in DDD. My reading of Fowler and Evans seems to indicate that you ought to have your domain objects themselves mapping into your database, rather than dedicated function-less DTOs. Relying on DTOs that are then handled by Services seems to lead to what Fowler calls The Anemic Domain Anti-Pattern: http://martinfowler.com/bliki/AnemicDomainModel.html
However, I have a tough time writing Entity classes that operate in that manner and don't end up rather painful to change and extend.
Since you recommend the practice of using DTOs, do you have any thoughts on the subject?
Thank you for the question, Scott; of course I have thoughts on this 🙂
Disclaimer: This sort of architecture is not applicable to many systems; use the right patterns and tools for the job!
I agree that an anemic domain model is bad; if there is no behavior, then what's the point, right? Let's make sure I am on the same page here:
DTO: To me, a DTO moves data between "tiers." It is the packaged data, ready for transport. A WCF data contract is a perfect example of a DTO; I also see a JSON object as a DTO.
Read model: This would be a different model than your real Domain model. A Read model is very lightweight, "thin" and anemic. Its purpose is to serve aggregated data to a specific screen or message. A DTO, to me, can be a read model, as can a View Model.
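To make the DTO side of that concrete, here is a minimal sketch of a transport-only object. The WCF data contract attributes are just one way to mark it up, and the class and member names are purely illustrative, not from any real system:

using System.Runtime.Serialization;

// A DTO in the sense above: packaged data ready for transport across tiers.
// No behavior, no business rules; just the fields the other side needs.
// (OrderSummaryDto and its members are illustrative names only.)
[DataContract]
public class OrderSummaryDto
{
    [DataMember] public int OrderId { get; set; }
    [DataMember] public string CustomerName { get; set; }
    [DataMember] public decimal Total { get; set; }
}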
The domain model is rich and full of behavior. This model is most valuable when performing complex business rules during the saving and updating of data within a given transaction. It can be used to read data too, but consider this contrived example:
Let's say that we want to display the top 10 products. The products include the Manufacturer Name, Product Name, Vendor Name, Product Price, and Customer Ranking. Being good modelers, we come up with something like the following entities: Vendor, Manufacturer, Product, & ProductRanking (maybe localization & currency support tables too).
If I use my domain model to get this data, I am going to end up retrieving quite a bit more data than I actually need, which could degrade performance. Not to mention having to deal with dot notation everywhere: foo.Name = a.b.c.d.
We only need 5 fields, and they are immutable for this operation.
My preference is to materialize the read model (DTO, View Model) by projecting from the Domain Model, or by using a Stored Procedure for more complicated recursive, spatial or temporal queries.
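As a rough sketch of what that projection can look like (the entity shapes and names below are my own illustration, and the query works against any LINQ provider such as NHibernate.Linq or Entity Framework), the read model carries exactly the five fields the screen needs:

using System.Collections.Generic;
using System.Linq;

// Hypothetical domain entities, trimmed to what the query touches.
public class Manufacturer { public string Name { get; set; } }
public class Vendor { public string Name { get; set; } }
public class ProductRanking { public double Score { get; set; } }
public class Product
{
    public string Name { get; set; }
    public decimal Price { get; set; }
    public Manufacturer Manufacturer { get; set; }
    public Vendor Vendor { get; set; }
    public IList<ProductRanking> Rankings { get; set; }
}

// The read model: exactly the five fields the screen needs, nothing more.
public class TopProductModel
{
    public string ManufacturerName { get; set; }
    public string ProductName { get; set; }
    public string VendorName { get; set; }
    public decimal Price { get; set; }
    public double CustomerRanking { get; set; }
}

public static class TopProductsQuery
{
    // Project directly from the domain model into the read model,
    // fetching only the columns the screen actually displays.
    public static List<TopProductModel> Execute(IQueryable<Product> products)
    {
        return products
            .OrderByDescending(p => p.Rankings.Average(r => r.Score))
            .Take(10)
            .Select(p => new TopProductModel
            {
                ManufacturerName = p.Manufacturer.Name,
                ProductName = p.Name,
                VendorName = p.Vendor.Name,
                Price = p.Price,
                CustomerRanking = p.Rankings.Average(r => r.Score)
            })
            .ToList();
    }
}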
Splitting the models allows the reads & writes to evolve independently, which leads to higher maintainability. These models can also run on different tiers/nodes to increase scalability (read/cache tier, write tier).
At some point, whether off a view or an inbound DTO, there will be mapping back into the domain model. This "friction" or "impedance" is pretty easy to manage using an assembler/translator, or a tool like AutoMapper.
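A hand-rolled assembler is often all that friction amounts to. Here is a minimal sketch with hypothetical types; a tool like AutoMapper can replace the hand-written mapping for flat cases, while the assembler keeps the behavior call on the domain entity:

using System;

// Hypothetical inbound DTO arriving off the wire or off a view.
public class ChangeCustomerAddressDto
{
    public int CustomerId { get; set; }
    public string Street { get; set; }
    public string City { get; set; }
}

// Hypothetical domain entity with real behavior.
public class Customer
{
    public int Id { get; private set; }
    public string Street { get; private set; }
    public string City { get; private set; }

    public void ChangeAddress(string street, string city)
    {
        if (string.IsNullOrEmpty(street)) throw new ArgumentException("Street is required.");
        Street = street;
        City = city;
    }
}

// The assembler/translator absorbs the mapping "friction" in one place,
// so the domain model never depends on the transport shape.
public class CustomerAssembler
{
    public void Apply(ChangeCustomerAddressDto dto, Customer customer)
    {
        customer.ChangeAddress(dto.Street, dto.City);
    }
}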
Greg Young & Udi Dahan take this concept further and apply a programming principle called Command-Query Separation with distributed programming and SOA. I think it is very good stuff.
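A very rough structural sketch of what that separation can look like in code (the interfaces and names here are mine, not Greg's or Udi's): commands go through the rich domain model inside a transaction, while queries bypass it and return read models directly.

// Command side: expresses intent and is handled against the rich domain
// model inside a transaction. (Illustrative names only.)
public class DeactivateProductCommand
{
    public int ProductId { get; set; }
}

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

// Query side: never touches domain entities; each query returns a read
// model shaped for one screen or message.
public interface IQuery<TResult>
{
    TResult Execute();
}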
Here are some posts that are all somewhat related:
http://jonathan-oliver.blogspot.com/2009/03/dddd-and-cqs-getting-started.html
http://www.udidahan.com/2008/08/11/command-query-separation-and-soa/
http://codebetter.com/blogs/gregyoung/archive/2009/08/13/command-query-separation.aspx
https://elegantcode.com/2008/04/27/dtos-or-serialized-domain-entities/
https://elegantcode.com/2008/04/30/altnet-seattle-takeawayddddresources/
Check out this post from Greg Young on the matter:
http://codebetter.com/blogs/gregyoung/archive/2009/07/15/the-anemic-domain-model-pattern.aspx
I recently wrote about the various data access mechanisms (NHibernate, Entity Framework, LINQ to SQL, etc) and how they relate to a read model:
http://jonathan-oliver.blogspot.com/2009/11/cqrs-reporting-database-access.html
This seems to just be a lot of confusion about transactional vs analytical services.
Domain models are largely based on transactional (CRUD and workflow) types of use.
Asking a question like ‘display the top 10 products’ is an analytical use case that transcends the specific entities of the domain model to deliver additional insights and knowledge.
One common pattern is to combine and encapsulate an analytical use case with a subsequent retrieval of the domain entities indicated by the analysis. This is only useful if the purpose is to consume those entities in a transactional manner.
But, as you mention, this is a waste for many analytical use cases. The two approaches I see most often are to either create light-weight domain entity proxies, e.g. ProductInfo, or to serve up analytical data as something completely different but entirely consistent across the board so it can be consumed uniformly by analytical tools; for example, in the Windows world, serving up OData so it can be consumed by Excel, SharePoint, etc.
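A small sketch of the proxy approach described above (ProductInfo comes from the comment itself; the fields are my own guess at what such a proxy might carry):

// A light-weight proxy for analytical reads: identity plus the handful of
// fields the analysis needs. The identifier lets a caller fetch the full
// domain entity later if a transactional operation follows.
public class ProductInfo
{
    public int ProductId { get; set; }
    public string ProductName { get; set; }
    public double CustomerRanking { get; set; }
}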