Does TDD lead to inefficient algorithms?

Bob Martin showed in a recent blog post, "Echoes from the stone age", how TDDing a sorting algorithm leads to good ol' bubble sort. That stirred up a heated discussion about whether TDD is really worth it if it leads to "inefficient" algorithms.

Nobody ever claimed that TDD is supposed to produce optimal algorithms. TDD at the function level defines the behaviour of functions. The behaviour of a sorting algorithm is that an initially unsorted list is sorted after the algorithm has been applied. Well, that's exactly what bubble sort does.
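To make that concrete, here is a minimal sketch of the kind of code and behavioural test TDD drives you towards (the function name and test data are mine, not from Bob's post):

```python
def bubble_sort(items):
    """Repeatedly swap adjacent out-of-order pairs until a full pass makes no swap."""
    items = list(items)  # work on a copy; don't mutate the caller's list
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # no swaps means the list is already sorted
            break
    return items

# The behavioural specification: an initially unsorted list comes back sorted.
data = [5, 2, 9, 1, 5, 6]
assert bubble_sort(data) == sorted(data)
```

The test says nothing about *how* the sorting happens, only *that* it happens — which is exactly the point.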

What are you optimizing for anyway? Speed? Memory consumption? Those are behaviour too (temporal and spatial), but they were not specified in Bob's requirements (a.k.a. tests). If you need a sorting algorithm that sorts a certain number of elements with a certain initial order within a certain time, you can specify that in a test (some testing frameworks support performance tests, e.g. AUT). Bubble sort might fail that test. So what? That's the reason we write tests first: we want to see them fail so we can fix them.

Because we have all those nice tests in place that pin down the external, non-temporal behaviour, we can go and read up on sorting algorithms and eventually throw out all that bubble sort code in favour of quick sort. Our tests will catch any mistake we make. We will know if we break something, and our latest test tells us whether the chosen algorithm is fast enough for our requirements.
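The swap itself might look like this: a straightforward (not in-place) quick sort sketch that satisfies the very same behavioural test as before, so the tests stay green while the implementation changes underneath:

```python
def quick_sort(items):
    """Simple recursive quicksort; externally indistinguishable from bubble sort."""
    if len(items) <= 1:
        return list(items)
    pivot, *rest = items
    smaller = [x for x in rest if x <= pivot]
    larger = [x for x in rest if x > pivot]
    return quick_sort(smaller) + [pivot] + quick_sort(larger)

# The same behavioural test passes unchanged after the swap:
data = [5, 2, 9, 1, 5, 6]
assert quick_sort(data) == sorted(data)
```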

The test might even tell us that bubble sort is good enough. Why waste time on a super-efficient algorithm if you only need to sort a small number of items that are usually in order, with an occasional new element added to the list out of order? Bubble sort handles that in a single pass. You won't need quick sort if the average number of items is 10.
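You can see that nearly-sorted behaviour by counting passes in a bubble sort with early exit (a sketch; strictly it takes one pass that sorts plus one that confirms no swaps were needed):

```python
def bubble_sort_counting_passes(items):
    """Bubble sort with early exit; returns (sorted list, number of passes made)."""
    items = list(items)
    passes = 0
    while True:
        passes += 1
        swapped = False
        for j in range(len(items) - 1):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:
            break
    return items, passes

# A mostly-sorted list with one element out of order in the middle:
result, passes = bubble_sort_counting_passes([1, 2, 3, 9, 4, 5])
assert result == [1, 2, 3, 4, 5, 9]
assert passes == 2  # one pass that sorts, one that confirms nothing moved
```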

Oh, by the way: You don't want to be caught committing premature optimization anyway, do you?