2015-05-31

Advancing Enterprise DDD - Moving Forward

In the last essay of the Advancing Enterprise DDD series, we wrapped up our discussion on immutability, as well as the technical discussion overall. In this essay, we wrap up the series by reflecting on what we've learned, and discussing how to apply these lessons as we move forward.

Our technical discussions in this series have been focused in two different directions. We examined what we can do within the Java/JPA/RDB (relational database) framework in order to do Domain Driven Design better. And we considered the ways in which doing DDD would be easier if we were using a different toolset. Let's sum up both of these in turn.

I've been reflecting a lot over what I've written in this series as I have been writing it, and considering the feedback I've gotten. If there were one thing I could go back and change, it would be the way I presented my ideas on what you can do within JPA to make DDD better. I would leave the technical content more or less the same, but I wish I had put more emphasis on the fact that these are not necessarily recommendations for people using JPA. My thinking has been more along the lines of, "What would we have to do to remove this stumbling block that comes up using JPA and RDB?" In some cases, the answer to this question involves quite a lot of added infrastructure and effort. If we were talking about a greenfield JPA project, you could probably make use of most or all of the recommendations here. But with an existing project, many of these changes would be very difficult to achieve.

If you are already using JPA, then most likely, you have an established code base, with all the quirks and foibles that come along with it. You also have limited resources, and new features to add. Many of the design changes I discuss would require a major refactoring effort, and would simply not be worth it. To one extent or another, we are working with legacy code here, and large-scale refactoring efforts in legacy code are typically not cost effective. However, I would recommend that you consider the recommendations here, at least as a thought experiment. It may reveal a lot of things about your codebase that hadn't occurred to you. You may find some of them relatively easy to apply, and you may choose to give them a try. And you may have some ideas of your own on how to make improvements. But any work you want to do along these lines should be specific to the needs of your particular project.

If you are starting a new project, and considering using JPA, I have to advise you against it. You are trapping yourself into some very arcane technologies. RDB is 45 years old this year. That's two years older than the C programming language. Would you choose to program in C if you were starting a new project? Of course not, unless you were targeting some highly specialized hardware. In the past decade or so, database technologies have undergone a miniature revolution, and there are many, much better technologies out there today than RDB. These NoSQL technologies are designed to meet many different kinds of needs, but a document database, such as MongoDB or Couchbase, seems the best suited for building a typical DDD application.

We've talked a great deal in this series about how DDD might be easier if we were using a functional/OO hybrid (F/OO) language, such as Scala, and a document database, such as MongoDB. We've seen that many of the difficulties that came up with JPA would be resolved by moving to this new toolset. But there is an obvious missing ingredient in this story. If we are replacing Java with Scala, and RDB with Mongo, what are we replacing JPA/Hibernate with?

Out of the many persistence libraries available for Scala, Slick is perhaps the most mature and well developed. Unfortunately, Slick specifically targets an RDB back end. There are two different MongoDB libraries for Scala that are worth your consideration. The first is Casbah, which is officially supported by Mongo, and basically provides a Scala-like facade for the MongoDB Java driver. It's a great library, but it is essentially a driver, and a lot of the features we are so used to in JPA are missing. For example, when we retrieve documents through the Casbah library, we get them back as nested maps with strings as keys. There is no comparable feature to the JPA annotations that allow us to save or retrieve our domain objects directly.
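To make the contrast concrete, here is a minimal sketch in plain Scala (deliberately not using Casbah's actual types) of what it feels like to work with stringly-keyed nested maps versus our own domain classes. The `doc`, `Address`, and `Contact` names are illustrative inventions, not anything from the Casbah API:

```scala
// Driver-style result: a document as a nested map keyed by strings.
val doc: Map[String, Any] = Map(
  "firstName" -> "Ada",
  "address"   -> Map("city" -> "London")
)

// Pulling a field out is stringly-typed, with no compile-time checking:
val city: Option[Any] = doc.get("address")
  .collect { case m: Map[_, _] => m.asInstanceOf[Map[String, Any]] }
  .flatMap(_.get("city"))

// What we would rather read and write: our domain classes directly.
case class Address(city: String)
case class Contact(firstName: String, address: Address)

val contact = Contact("Ada", Address("London"))
```

The second form is what JPA's annotations give us for free, and what a driver-level library leaves as an exercise for the application programmer.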

The second option for using Mongo with Scala is ReactiveMongo, which also presents a driver-like interface to the database, but with an API that adheres to the principles of reactive programming. Reactive programming is a powerful extension to the base F/OO programming model, and I highly recommend you give it a look. But once again, the inputs and outputs for the library are not your domain classes, but BSON documents - essentially nested associative maps. And neither Casbah nor ReactiveMongo were designed with any DDD-specific concerns in mind.
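The reactive style is easy to illustrate with standard Scala Futures alone. The sketch below is hypothetical (the `findDocument` function is invented for illustration, not part of the ReactiveMongo API), but it shows the key idea: the call returns immediately with a `Future`, and the caller composes transformations on it instead of blocking:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// A hypothetical driver-like call that produces a raw document asynchronously:
def findDocument(id: String): Future[Map[String, Any]] =
  Future(Map("_id" -> id, "firstName" -> "Ada"))

// Callers compose on the Future rather than blocking the calling thread:
val firstName: Future[Option[String]] =
  findDocument("42").map(_.get("firstName").map(_.toString))

// Blocking here only to demonstrate the result at the end of a script:
val result = Await.result(firstName, 2.seconds)
```

Note that even in this reactive style, the payload is still a raw document rather than a domain class.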

In late 2014, I started working on a little CRM application. I knew I wanted to use Scala and MongoDB. And I wanted to apply the principles of Domain Driven Design as well. I have many years of experience using JPA, and I'm quite familiar with the things about JPA that I love, and the things about it that drive me crazy. Putting all these pieces together, my little CRM project was more or less sidelined while I started working on a Scala/Mongo framework that would take the best parts of JPA, avoid its difficulties, and also help people code and think in terms of DDD. To a large extent, early design and implementation work on this framework has motivated this series of essays.

I've been working on longevity since mid-December 2014. I have really enjoyed working on it, and it feels very promising to me. I've since put out multiple public releases, and I'm pressing forward towards the 1.0 release. It has a wide variety of implemented features, including streaming query results, and it presently has MongoDB and Cassandra back ends. It's currently ready to use, and I'm looking forward to getting back to my CRM project soon.

Longevity is a relatively simple but ambitious project. Similar to JPA, the API inputs and outputs are your domain entities, written as Scala case classes. Unlike JPA, where persistence state sits naked in your POJOs, longevity encapsulates all the persistence data within the persistence layer. The innumerable difficulties (see here, here, here, here, and here) we encounter designing and implementing DDD aggregates in JPA have gone away, thanks in large part to NoSQL's document storage model. And it fully supports the use of immutability in your domain model. Longevity also provides a reactive API using Scala Futures.

Whereas JPA/Hibernate is an ORM - an Object-Relational Mapper - longevity is none of these things. It doesn't deal with objects in the traditional object-oriented sense, because traditionally in OO, objects have mutable state. It doesn't target relational databases, but document databases. And it isn't a "mapper". Rather than simply provide assistance in mapping your persisted data into objects in your software, and vice-versa, longevity encapsulates the whole persistence layer behind a facade. It presents a simple, non-leaky abstraction for persistence concerns. And it assists you in constructing your domain classes following Domain Driven Design principles.
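The general shape of such a facade can be sketched in a few lines of plain Scala. To be clear, this is not longevity's actual API; it's a generic illustration of the pattern, with an in-memory store standing in for a document database, and all names (`Repo`, `InMemRepo`, `User`) invented for the example:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// An immutable domain entity, free of persistence annotations or state:
case class User(username: String, email: String)

// The facade: every persistence concern stays behind this trait, and the
// reactive API exposes Futures rather than blocking calls.
trait Repo[E] {
  def create(e: E): Future[E]
  def retrieve(key: String): Future[Option[E]]
}

// An in-memory stand-in for a real document-store implementation:
class InMemRepo(keyOf: User => String) extends Repo[User] {
  private var store = Map.empty[String, User]
  def create(u: User): Future[User] =
    Future { synchronized { store += (keyOf(u) -> u) }; u }
  def retrieve(key: String): Future[Option[User]] =
    Future { synchronized { store.get(key) } }
}

val repo = new InMemRepo(_.username)
val found = Await.result(
  repo.create(User("ada", "ada@example.com")).flatMap(_ => repo.retrieve("ada")),
  2.seconds)
```

The point of the sketch is that callers see only domain entities going in and coming out; nothing about the underlying database leaks through the abstraction.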

Thanks for reading! I lied a little bit when I called this the final post. The final post includes acknowledgements of all the people, books, and tools that helped me produce this series.
