Musings on Technical Debt: Part 1

This series of three blog posts arose from a request within a large enterprise to estimate the cost of "technical debt", and to work out how we as developers can make sure our projects aren't infected, and ultimately slowed, by technical debt. The idea was that if we got the business to understand our world, they would pay for the debt upfront.

Basically, the high-level spec I had was:

Development - Technical Debt = Money

What is Technical Debt?

Taking the definition from Martin Fowler doesn’t seem unreasonable. He writes that "doing things the quick and dirty way sets us up with a technical debt". I am more keen on his analogy of a loan that you are paying interest on: until you have paid off the capital, you keep paying the cost of your technical debt as interest over time.
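
To make the loan analogy concrete, here is a back-of-the-envelope sketch in Python. Every figure in it is hypothetical, invented purely for illustration; the point is only that the recurring "interest" a shortcut adds to each sprint soon exceeds the one-off "capital" cost of doing the job properly.

```python
# A back-of-the-envelope model of the loan metaphor.
# All figures are hypothetical, purely for illustration.

capital = 5.0              # days it would cost to do the job properly now
interest_per_sprint = 0.5  # extra days every sprint while the shortcut remains

total_interest = 0.0
for sprint in range(1, 21):    # twenty sprints of living with the shortcut
    total_interest += interest_per_sprint
    if total_interest > capital:
        print(f"By sprint {sprint}, interest paid ({total_interest:.1f} days) "
              f"exceeds the capital ({capital:.0f} days) we avoided paying.")
        break
```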

However, after feedback on the first draft of this article, I watched a four-minute video on the debt metaphor by Ward Cunningham. Ward’s thoughts in the video are great, and there are several things I took from it that I love.

  • He calls it debt, not technical debt.
  • He defines it nicely. My summary would be: debt is the price of coding quickly without fixed design/requirements. As our domain expands, assumptions will be highlighted, and debt needs to be paid off to make the domain consistent with how we now know the world to be, not how we thought the world would be in an earlier iteration.
  • He gives you the sell to the business: because we deliver early and allow you to make changes whenever you want, we need to pay a price called “debt” that reflects the cost of keeping our underlying domain model up to date. If we do not pay this debt, we will incur interest in the future, and that interest will slow or stop functional delivery.
  • He has given me a nice solution, which I will get to in the final article in this series.

The one thing I would suggest, though, is that if your team (either developers, or developers and business) is discussing “technical debt”, there is probably a difficult conversation to be had in which you agree on, or at the very least understand, what everyone means by technical debt.

There is more discussion of technical debt on c2.com, which I found interesting, though I did not agree with all of it.

In this article I am probably guilty of talking about technical debt in a very broad sense: “stuff that people call technical debt, but often shouldn’t be”. In the conclusion I will narrow my definition down a bit.

Signs That You Probably Have Technical Debt

Projects with a large amount of technical debt suffer from the broken windows theory: developers feel no need, and no peer pressure, to code in a neat, socially responsible manner, because the code base is already messy.

Here are a few signs I can think of, all of which I have come across in the wild. Obviously not on any project I was running.

  • People are involved.
  • People are involved on an agile project, whatever “agile” may or may not mean.
  • You either do not understand, or have chosen not to follow, the practices laid out in XP. The stuff at XP rules is just right. Seriously, I will argue with you about the relative agility of your not-quite-Scrum process compared to my not-quite-Scrum-either process. But the XP stuff is just right.
  • New functionality written by developers takes longer than estimated. (Not necessarily indicative of debt; there could be lots of other reasons here, e.g. the estimation process, ego, the Scrum process, project managers or the business imposing estimates, fear, the engagement model, and many others.)
  • Development in certain areas of the code carries a tax in terms of time.
  • Support cannot be done by the support teams: developers having to provide support at weekends, Early Life Support before the support team accepts the code release, developers heavily involved in releases to live.
  • NFRs (non-functional requirements) not defined; or if defined, not implemented; or if implemented, not tested.
  • Manual tests/a lengthy regression pack (the manual pack the testers run through before a release to live is allowed).
  • Non-automated/expensive/difficult regression tests.
  • Separate functional and non-functional testing cycles. First we check it works. Then we place it under load. We don’t check it actually works under load. (See the sketch after this list.)
  • Long gaps between releases to live.
  • Developers scared to release.
  • Broken builds/failing tests in CI are expected.
  • A background level of bugs in live that the business has grown accustomed to. Is it meant to do that? No, but it always has. Is that a bug introduced by the release? No, I think it has always done that.
  • Developers unhappy to modify/extend certain areas of code, and blaming specific areas: “We can’t add the doohicky functionality, as the fingle component is shit.”
  • A complicated deploy story. Complicated may mean manual; perhaps more surprisingly, it can also mean an over-engineered automated process (i.e. shit, or imposed). One week to write the new component; three weeks to write the deployment script for it.
  • Developers are meant to pair, but never do.
  • Log files in live that contain errors/exceptions.
  • Log files that support can’t easily follow, and that have to be analysed by developers.
  • No reuse of code/components within projects, or across programs or the enterprise. Be careful here: I am advocating appropriate reuse, usually at the (well-tested) code level. It is very rare that I will advocate shared enterprise resources, which can impact many (even two) unrelated projects, especially ones with different business owners. This probably needs a separate article, working title “Why SOAs/ESBs/Enterprise Singletons will screw you up and are usually architectural aberrations”.
  • Iterations delivering no business value.
  • Iterations being cancelled.
  • Merges being time-consuming, even requiring separate cards on the board.
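
To make a couple of the testing bullets concrete, here is a minimal sketch of the kind of cheap automated check I mean: one that asserts functional correctness while the system is under load, rather than in a separate cycle. The system under test (an in-memory Account with a transfer method) is hypothetical, invented purely for illustration.

```python
# A minimal sketch: assert functional correctness *under* load,
# rather than in a separate non-functional testing cycle.
# The Account class is hypothetical, invented for illustration.

import threading
from concurrent.futures import ThreadPoolExecutor

class Account:
    def __init__(self, balance):
        self.balance = balance
        self._lock = threading.Lock()

    def transfer(self, amount):
        with self._lock:  # without this lock the increment below can race
            self.balance += amount

def test_transfers_are_correct_under_load():
    account = Account(balance=0)
    deposits = [1] * 10_000
    # Hammer the account from many threads at once...
    with ThreadPoolExecutor(max_workers=32) as pool:
        pool.map(account.transfer, deposits)
    # ...then assert the functional invariant still holds under that load.
    assert account.balance == 10_000

if __name__ == "__main__":
    test_transfers_are_correct_under_load()
    print("OK: balance correct under concurrent load")
```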

Coming Next

I will talk about some of the reasons you may have technical debt, and the cost of that debt.