What are Dependency Injection Frameworks and Why are they Bad?

What is Dependency Injection?

Let’s suppose we are writing some code. It is part of a big legacy code base, and we are using an object-oriented language like Java or C#. Say we would like to value some stocks.

Suppose we have a class, StockValuer, that takes as its inputs various interfaces: an IInterestRateProvider, an INewsFeedReader, and an IHistoricPriceReader. These interfaces are implemented by the InterestRateProvider, NewsFeedReader and HistoricPriceReader classes respectively. Each of these types will in turn have dependencies of its own, but we will elide that for now. So we will set it all up with something like:

IInterestRateProvider interestRateProvider = new InterestRateProvider(...);
INewsFeedReader newsFeedReader = new NewsFeedReader(...);
IHistoricPriceReader historicPriceReader = new HistoricPriceReader(...);

IStockValuer stockValuer = new StockValuer(interestRateProvider, newsFeedReader, historicPriceReader);

So we have one top level class, the StockValuer, and we pass various other objects into it that provide the functionality it needs. This style of programming is called Inversion of Control or Dependency Injection.

Usually, when we write code like this, we test it by passing in fakes or mocks of the various interfaces. This can be really great for mocking out the kind of low level stuff that is usually hard to test, like database access or user input.
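For instance, a test might look something like the sketch below. The fake classes, the GetRate method, and the fixed return value are all hypothetical, invented here for illustration, since we never pinned down the interfaces’ members.

class FakeInterestRateProvider : IInterestRateProvider
{
    // Hypothetical interface member; returns a fixed rate so the test is deterministic.
    public double GetRate(string currency) => 0.05;
}

// FakeNewsFeedReader and FakeHistoricPriceReader would be defined similarly.
// Inject the hand-rolled fakes instead of the real implementations:
IStockValuer stockValuer = new StockValuer(
    new FakeInterestRateProvider(),
    new FakeNewsFeedReader(),
    new FakeHistoricPriceReader());

No database, no network: the test exercises only the valuation logic.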

This style of programming goes hand in hand with the factory pattern. We can see as much above: we have written a factory to create our StockValuer!
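Pulled out into its own class, that factory might look something like this sketch (with the constructor arguments elided, just as above):

class StockValuerFactory
{
    public IStockValuer Create()
    {
        // All the wiring lives in one place, in plain code the compiler can check.
        IInterestRateProvider interestRateProvider = new InterestRateProvider(...);
        INewsFeedReader newsFeedReader = new NewsFeedReader(...);
        IHistoricPriceReader historicPriceReader = new HistoricPriceReader(...);

        return new StockValuer(interestRateProvider, newsFeedReader, historicPriceReader);
    }
}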

What are Dependency Injection Frameworks?

There is another way for us to create our StockValuer. We can use a dependency injection framework, like Spring in Java or Castle Windsor in C#. We will no longer have a factory that explicitly builds up our StockValuer. Instead, we will register the various types that we wish to use, and the framework will resolve them at runtime. What this means is that rather than using the new keyword and passing arguments to constructors, we will call some special methods from our dependency injection library.

So in our StockValuer example we would write something like:

var container = new DependencyInjectionContainer();

container.Register<IInterestRateProvider, InterestRateProvider>();
container.Register<INewsFeedReader, NewsFeedReader>();
container.Register<IHistoricPriceReader, HistoricPriceReader>();

container.Register<IStockValuer, StockValuer>();

Then, when the stock valuer is used in your real code, say in a function like:

double GetPortfolioValue(Stock[] stocks, IStockValuer stockValuer)
{
...
}

The dependency injection framework will create all of these types at runtime and provide them to the method. We still have to provide the very bottom level arguments explicitly, things like the configuration, but the framework resolves everything else.
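At the application’s entry point, the wiring then reduces to something like the following. This is a sketch: the Resolve method here is a stand-in, and the exact call varies from framework to framework.

// The container inspects StockValuer's constructor at runtime, sees that it
// needs an IInterestRateProvider, an INewsFeedReader and an IHistoricPriceReader,
// builds the registered implementation of each, and injects them.
IStockValuer stockValuer = container.Resolve<IStockValuer>();

double value = GetPortfolioValue(stocks, stockValuer);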

Why is it bad?

I think this is a pretty awful way to program. Here are my reasons why.

It Makes Our Code Hard to Read and Understand

One of the guiding principles behind dependency injection is that it doesn’t matter which specific implementation of the IThingy interface you get, just that it is an IThingy. This is all very well and good in principle, but not in practice. Whenever I am reading or debugging code and I want to know what it actually does, I always need to know which specific implementation of IThingy I am dealing with. What’s even worse is that DI frameworks break IDEs. Because the various types and constructors are resolved at runtime, semantic search no longer works. I can’t even look up where a type is created anymore!

It Encourages Us to Write Bad Code

Dependency injection frameworks encourage us to write our code as a mish-mash of dependent classes without any clear logic. Each individual piece of real working code gets split out into its own class and divorced from its actual context. We end up with a bewildering collection of types that have no real world meaning, and a completely baffling dependency graph. Everything is now hidden behind an interface, and every interface has only one implementation.

It Turns Compile Time Errors into Runtime Errors

For me this is an absolutely unforgivable cardinal sin. Normally, in a statically typed language, if you don’t provide the right arguments to a method, that is a compile time error. In fact, it is normally something your IDE picks up before you even try to build your code. Not so with dependency injection frameworks. Now you will not discover whether you have provided the correct arguments until you run your code!
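To make the two failure modes concrete, here is a sketch; the messages in the comments are paraphrased, not the output of any particular compiler or framework.

// Without a DI framework: forget a constructor argument and the
// compiler rejects the code before it can ever run.
var broken = new StockValuer(interestRateProvider, newsFeedReader);
// compile error: no argument given for parameter 'historicPriceReader'

// With a DI framework: forget a registration and everything still compiles.
container.Register<IStockValuer, StockValuer>();
// (IHistoricPriceReader was never registered)
var stockValuer = container.Resolve<IStockValuer>();
// throws at runtime: cannot resolve dependency IHistoricPriceReader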

To give an example, I was working on a web app that was built using dependency injection. One day we merged some changes, built and tested our code, and deployed it to the test environment. While it was running, it crashed. We had forgotten to register one of the arguments a new method was using, and the dependency injection framework couldn’t resolve it at runtime. This is something we could have easily spotted if we were writing our code without dependency injection magic. Instead, our type error was only discovered when a specific endpoint of our service was hit.

It is Outrageously Verbose

The sort of code we write with DI frameworks is naturally very verbose: lots of interfaces and lots of classes. I once stripped Castle Windsor out of a C# project and halved the number of lines without changing the functionality at all. The problem with really verbose code is that it is harder to maintain. Indeed, the more lines of code you have, the more bugs you will have.

Worse than this, though, are the tests. There is a solution of sorts to the runtime error issue mentioned above: we write unit tests to validate the type correctness of our code. This, to me, is a pretty mad solution. It only catches the type errors that you remember to test for, and it bloats our code base even more.
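Such a test typically looks something like the sketch below, where BuildProductionContainer is a hypothetical helper that performs the same registrations as the real application:

[Test]
public void ContainerCanResolveTheRootType()
{
    // Build the container exactly as the production code does...
    var container = BuildProductionContainer();

    // ...and assert the whole dependency graph resolves without throwing.
    // In effect, a unit test doing work the compiler used to do for free.
    Assert.NotNull(container.Resolve<IStockValuer>());
}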

It is Far Too Complicated

Using a DI framework requires us to learn an entirely new meta-language that sits on top of C# or Java. We need to understand all kinds of tricks and gotchas. Instead of building up programs with a few basic tools, we are now writing them with a complex, unintuitive meta-language that describes how to evaluate dependency graphs at runtime. Dependency injection takes something simple but inelegant, a factory, and turns it into an incomprehensible mess.