Ex Machina

Clean Code Revisited

The perils of following rules blindly

October 23, 2020 — 🕐 14 min read

In a previous post, I recommended the popular book Clean Code. Since then I’ve noticed that the book has been criticized quite often in recent years (e.g., see this post), so I took the time to review it again—it’s been many years since I first read it—and I found that I do agree with some of the criticisms raised against it.

Though I won’t do a full review here, I do want to point out a couple of issues with the book as a warning for less experienced developers:

  • Dogmatism
  • Bad Examples of Clean Code

Dogmatism

Something you find when you read any book by Robert C. Martin is that he tends to be very entrenched in his opinions, sometimes to the point of dogmatism. You can see this in his books (The Clean Coder, Clean Architecture, etc.), but also in his talks and other appearances on social media (e.g., “Those who don’t practice TDD cannot be considered professionals!”).

Like any other individual, he’s entitled to have strong opinions and to argue passionately for what he believes in. But as engineers with a growth mindset, we should try to avoid becoming part of “engineering cults” and take his opinions with a grain of salt. As Alan J. Perlis once wrote:

Above all, I hope we don’t become missionaries. Don’t feel as if you’re Bible salesmen. The world has too many of those already…

Over time I’ve learned that listening to diverse opinions can give you a much stronger engineering foundation. From Kent Beck—known for popularizing the technique of Test-Driven Development—I learned that software development is almost always about trade-offs, not absolute rules. This means we must learn to see things in context and realize that “rules” are usually just “rules of thumb”.

Speaking of rules, one that stuck with me when I first read Clean Code was that of “writing short functions that do one thing”. In the beginning, I took it quite literally and strove to always reduce my functions to just a few lines of code, even if, in retrospect, that sometimes hurt readability by creating too many levels of indirection or incoherent partitioning. As I’ll try to show in the next section, it is easy to take that rule too far, usually with negative consequences; the small sketch below gives a first taste of what I mean.
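
Here is a contrived Python sketch of my own (not from the book) of that failure mode: the logic is trivial, yet reading it requires hopping through three helpers.

# Over-extracted: three tiny helpers force the reader to jump around
def is_adult(user):
    return _age_of(user) >= _adult_age()

def _age_of(user):
    return user["age"]

def _adult_age():
    return 18


# Arguably clearer: one short function that states the whole rule in one place
def is_adult_direct(user):
    return user["age"] >= 18

Each helper “does one thing”, yet the split adds indirection without adding meaning.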

Bad Examples of Clean Code

To illustrate how taking rules too literally can sometimes be disastrous, I’d like to reproduce an example from the book and then comment on the refactoring that the author proposes.

The code in question is a program to generate a list of prime numbers (it is written in really unidiomatic Java style, apparently because it was auto-generated by some tool):

public class PrintPrimes {
    public static void main(String[] args) {
        final int M = 1000;
        final int RR = 50;
        final int CC = 4;
        final int WW = 10;
        final int ORDMAX = 30;
        int P[] = new int[M + 1];
        int PAGENUMBER, PAGEOFFSET, ROWOFFSET;
        int C, J, K;
        boolean JPRIME;
        int ORD, SQUARE, N;
        int MULT[] = new int[ORDMAX + 1];
        J = 1;
        K = 1; P[1] = 2; ORD = 2; SQUARE = 9;
        while (K < M) {
            do {
                J = J + 2;
                if (J == SQUARE) {
                    ORD = ORD + 1;
                    SQUARE = P[ORD] * P[ORD];
                    MULT[ORD - 1] = J;
                }
                N = 2;
                JPRIME = true;
                while (N < ORD && JPRIME) {
                    while (MULT[N] < J)
                        MULT[N] = MULT[N] + P[N] + P[N];
                    if (MULT[N] == J)
                        JPRIME = false;
                    N = N + 1;
                }
            } while (!JPRIME);
            K = K + 1;
            P[K] = J;
        }
        PAGENUMBER = 1;
        PAGEOFFSET = 1;
        while (PAGEOFFSET <= M) {
            System.out.println("The Prime Numbers --- Page " + PAGENUMBER);
            System.out.println("");
            for (ROWOFFSET = PAGEOFFSET; ROWOFFSET < PAGEOFFSET + RR; ROWOFFSET++) {
                for (C = 0; C < CC;C++)
                    if (ROWOFFSET + C * RR <= M)
                        System.out.format("%10d", P[ROWOFFSET + C * RR]);
                        System.out.println("");
            }
            System.out.println("\f");
            PAGENUMBER = PAGENUMBER + 1;
            PAGEOFFSET = PAGEOFFSET + RR * CC;
        }
    }
}

And here’s the proposed refactoring in the book:

public class PrimePrinter {
    public static void main(String[] args) {
        final int NUMBER_OF_PRIMES = 1000;
        int[] primes = PrimeGenerator.generate(NUMBER_OF_PRIMES);

        final int ROWS_PER_PAGE = 50;
        final int COLUMNS_PER_PAGE = 4;
        RowColumnPagePrinter tablePrinter =
            new RowColumnPagePrinter(
                ROWS_PER_PAGE, COLUMNS_PER_PAGE,
                "The First " + NUMBER_OF_PRIMES + " Prime Numbers");
        tablePrinter.print(primes);
    }
}

So far so good. The responsibility of generating the list of primes (PrimeGenerator) has been clearly separated from that of displaying them on the screen (RowColumnPagePrinter.)

Let’s have a look at the RowColumnPagePrinter class:

public class RowColumnPagePrinter {
    private int rowsPerPage;
    private int columnsPerPage;
    private int numbersPerPage;
    private String pageHeader;
    private PrintStream printStream;

    public RowColumnPagePrinter(int rowsPerPage, int columnsPerPage, String pageHeader) {
        this.rowsPerPage = rowsPerPage;
        this.columnsPerPage = columnsPerPage;
        this.pageHeader = pageHeader;
        this.numbersPerPage = rowsPerPage * columnsPerPage;
        this.printStream = System.out;
    }
    public void print(int data[]) {
        int pageNumber = 1;
        for (int firstIndexOnPage = 0; 
             firstIndexOnPage < data.length; 
             firstIndexOnPage += numbersPerPage) { 
            int lastIndexOnPage = Math.min(firstIndexOnPage + numbersPerPage - 1, data.length - 1);
            printPageHeader(pageHeader, pageNumber);
            printPage(firstIndexOnPage, lastIndexOnPage, data);
            printStream.println("\f");
            pageNumber++;
        }
    }
    private void printPage(int firstIndexOnPage, int lastIndexOnPage, int[] data) {
        int firstIndexOfLastRowOnPage = firstIndexOnPage + rowsPerPage - 1;
        for (int firstIndexInRow = firstIndexOnPage; 
             firstIndexInRow <= firstIndexOfLastRowOnPage; 
             firstIndexInRow++) { 
            printRow(firstIndexInRow, lastIndexOnPage, data);
            printStream.println("");
        }
    }
    private void printRow(int firstIndexInRow, int lastIndexOnPage, int[] data) {
        for (int column = 0; column < columnsPerPage; column++) {
            int index = firstIndexInRow + column * rowsPerPage;
            if (index <= lastIndexOnPage)
                printStream.format("%10d", data[index]);
        }
    }
    private void printPageHeader(String pageHeader, int pageNumber) {
        printStream.println(pageHeader + " --- Page " + pageNumber);
        printStream.println("");
    }
}

Overall, this class reads reasonably well to me. One objection I have is whether we really need such long names (e.g., firstIndexInRow in the printPage method), which are harder to read and can sometimes create unnatural line wrapping (such as in the for statement above.)

Ironically, the author himself offers this advice elsewhere in the same book:

The length of a variable name should be proportional to the size of its scope
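
A quick Python sketch of my own (not from the book) of what that advice means in practice: a one-letter name reads fine in a one-line scope, while anything visible across a whole class deserves a descriptive name.

class ReceiptPrinter:
    # Wide scope (visible throughout the class): a descriptive name pays off
    currency_symbol = "$"

    def format_total(self, prices):
        # Tiny scope (one line): a short name like `p` reads just fine
        return f"{self.currency_symbol}{sum(p for p in prices):.2f}"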

However, I will not fuss much about this particular issue because I’d like to focus on what I think is more problematic in the PrimeGenerator class.

Go ahead and spend a few minutes trying to understand it before reading on.

public class PrimeGenerator {
    private static int[] primes;
    private static ArrayList<Integer> multiplesOfPrimeFactors;

    public static int[] generate(int n) {
        primes = new int[n];
        multiplesOfPrimeFactors = new ArrayList<Integer>();
        set2AsFirstPrime();
        checkOddNumbersForSubsequentPrimes();
        return primes;
    }
    static void set2AsFirstPrime() {
        primes[0] = 2;
        multiplesOfPrimeFactors.add(2);
    }
    static void checkOddNumbersForSubsequentPrimes() {
        int primeIndex = 1;
        for (int candidate = 3; primeIndex < primes.length; candidate += 2) {
            if (isPrime(candidate))
                primes[primeIndex++] = candidate;
        }
    }
    static boolean isPrime(int candidate) {
        if (isLeastRelevantMultipleOfNextLargerPrimeFactor(candidate)) {
            multiplesOfPrimeFactors.add(candidate);
            return false;
        }
        return isNotMultipleOfAnyPreviousPrimeFactor(candidate);
    }
    static boolean isLeastRelevantMultipleOfNextLargerPrimeFactor(int candidate) {
        int nextLargerPrimeFactor = primes[multiplesOfPrimeFactors.size()];
        int leastRelevantMultiple = nextLargerPrimeFactor * nextLargerPrimeFactor;
        return candidate == leastRelevantMultiple;
    }
    static boolean isNotMultipleOfAnyPreviousPrimeFactor(int candidate) {
        for (int n = 1; n < multiplesOfPrimeFactors.size(); n++) {
            if (isMultipleOfNthPrimeFactor(candidate, n))
                return false;
        }
        return true;
    }
    static boolean isMultipleOfNthPrimeFactor(int candidate, int n) {
        return candidate == smallestOddNthMultipleNotLessThanCandidate(candidate, n);
    }
    static int smallestOddNthMultipleNotLessThanCandidate(int candidate, int n) {
        int multiple = multiplesOfPrimeFactors.get(n);
        while (multiple < candidate)
            multiple += 2 * primes[n];
        multiplesOfPrimeFactors.set(n, multiple);
        return multiple;
    }
}

Did you clearly understand how and why it works? I’d be pretty surprised if you did; if you’re like the rest of us, this code probably made you scratch your head more than it should have.

If we take the idea of “writing short functions that do one thing” quite literally, we could argue that all of this code is indeed good. But as we dig in to truly understand its inner workings, we soon notice quite a few problems. Let’s dissect it method by method:

public class PrimeGenerator {
    private static int[] primes;
    private static ArrayList<Integer> multiplesOfPrimeFactors;

    public static int[] generate(int n) {
        primes = new int[n];
        multiplesOfPrimeFactors = new ArrayList<Integer>();
        set2AsFirstPrime();
        checkOddNumbersForSubsequentPrimes();
        return primes;
    }
    static void set2AsFirstPrime() {
        primes[0] = 2;
        multiplesOfPrimeFactors.add(2);
    }
    static void checkOddNumbersForSubsequentPrimes() {
        int primeIndex = 1;
        for (int candidate = 3; primeIndex < primes.length; candidate += 2) {
            if (isPrime(candidate))
                primes[primeIndex++] = candidate;
        }
    }
    // ...
 }

These first few methods contain simple logic, but it’s not obvious why they’re doing certain things. An explanation of the underlying algorithm is missing (e.g., what is the role of multiplesOfPrimeFactors?) Let’s continue:

public class PrimeGenerator {
    // ...
    static boolean isPrime(int candidate) {
        if (isLeastRelevantMultipleOfNextLargerPrimeFactor(candidate)) { 
            multiplesOfPrimeFactors.add(candidate); 
            return false; 
        }
        return isNotMultipleOfAnyPreviousPrimeFactor(candidate);
    }
    // ...
}

For me, this is where the major confusion begins. The lines inside the if block make me feel like we’ve just taken a leap of logic:

  • What does “relevant multiple” mean and how does it relate to the “next larger prime factor”?
  • Why does this function have a side effect on multiplesOfPrimeFactors if its name suggests that it should be a pure function?

It seems like the author is hoping that the code will explain itself, but unfortunately that’s just not the case. Any hope that the definition of “relevant” would be clarified in the next method is then lost:

public class PrimeGenerator {
    // ...
    static boolean isLeastRelevantMultipleOfNextLargerPrimeFactor(int candidate) {
        int nextLargerPrimeFactor = primes[multiplesOfPrimeFactors.size()];
        int leastRelevantMultiple = nextLargerPrimeFactor * nextLargerPrimeFactor;
        return candidate == leastRelevantMultiple;
    }
    // ...
}

The implementation reveals some clues (e.g., a relevant multiple apparently has something to do with squares of primes), but the mere fact that we have to make conjectures is already a strong “smell” in this code.
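
For what it’s worth, the conjecture turns out to be on the right track: a prime p only needs to be considered as a divisor once candidates reach p², because any composite number below p² already has a smaller prime factor. Here is a quick Python sketch of my own (not from the book) that checks this claim for p = 7:

def smallest_prime_factor(n):
    """Return the smallest prime factor of n (or n itself if n is prime)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

# Every composite number below 7*7 = 49 already has a prime factor in {2, 3, 5},
# so 7 only starts to matter as a divisor once candidates reach 49.
composites_below_49 = [n for n in range(4, 49) if smallest_prime_factor(n) != n]
assert all(smallest_prime_factor(n) in (2, 3, 5) for n in composites_below_49)

As far as I can tell, that is exactly what isLeastRelevantMultipleOfNextLargerPrimeFactor is checking: whether the candidate has reached the square of the next prime, i.e., the point where that prime starts to matter as a divisor. But the reader has to reverse engineer this.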

The next method is straightforward and follows the principle of least astonishment well:

public class PrimeGenerator {
    // ...
    static boolean isNotMultipleOfAnyPreviousPrimeFactor(int candidate) {
        for (int n = 1; n < multiplesOfPrimeFactors.size(); n++) {
            if (isMultipleOfNthPrimeFactor(candidate, n))
                return false;
        }
        return true;
    }
    // ...
}

Unfortunately, the last two methods seem like they could have been just one, and the last one is doing some strange operations that are not explained at all by the name smallestOddNthMultipleNotLessThanCandidate:

public class PrimeGenerator {
    // ...
    static boolean isMultipleOfNthPrimeFactor(int candidate, int n) {
        return candidate == smallestOddNthMultipleNotLessThanCandidate(candidate, n);
    }
    static int smallestOddNthMultipleNotLessThanCandidate(int candidate, int n) {
        int multiple = multiplesOfPrimeFactors.get(n);
        while (multiple < candidate)
            multiple += 2 * primes[n];
        multiplesOfPrimeFactors.set(n, multiple);
        return multiple;
    }
    // ...
}

Despite seemingly following “the rules” of “clean code”, this code has fallen victim to a common problem: forcing a “narrative” that is unnatural and hard to follow (partly due to the excessive indirection), simply in the name of keeping every function very short.

This is, of course, not an issue with the principle itself, but rather with the programmer’s ability to modularize the code appropriately, striking a balance between the simplicity of the parts and the coherence of the whole.

Recapping the Issues

To recap, here are the problems I see with this example:

  • Lack of Documentation. It fails to ease the reader into a nontrivial algorithm with documentation or comments that explain why some odd things are happening (i.e., the side effects in some methods).

  • Verbosity. It uses overly verbose method names in an attempt to compensate for the lack of documentation.

  • Forced Small Functions. It assumes that having many tiny functions always means better readability, but this is not true when the resulting partitioning causes excessive logical indirection, or when it is not coherent (again, sometimes manifested as convoluted function names.)

The advice to write “short functions that do one thing” needs to be explained with more nuance. When followed literally, it is easy to arrive at code that is more difficult to understand than its everything-in-a-single-function counterpart.

There is an intrinsic difficulty in partitioning code with straightforward, intuitive boundaries that have clear inputs, outputs and side effects. This explains why—despite all good intentions—a refactoring attempt can sometimes still leave a lot of room for improvement.

A Proposed Refactoring

After studying the code, the underlying algorithm (the sieve of Eratosthenes) and its optimizations in detail, I decided to take a stab at a refactoring to address the concerns I mentioned above. I wrote this version in Python but I don’t think the choice of language is really a major factor in its improved clarity, as I’ll comment below:

# TODO: review whether the optimizations are truly worth the additional complexity
class PrimeGenerator:
    """
    Generates a list of the first `n` primes using the "sieve of Eratosthenes"
    algorithm (https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes) with some
    optimizations:

    1. Instead of testing `candidate % prime == 0` directly, we keep a running
       multiple of each prime (`prime_multiples[i]`) and advance it by repeated
       addition until it is >= candidate; the candidate is divisible by that
       prime exactly when the two are equal. Hence `prime_multiples[i] == k * primes[i]`
       always holds for some `k`.

    2. We start testing divisibility by `prime` at `prime^2` because smaller
       multiples will already have been caught by tests with smaller primes.
       Hence the phrase "relevant multiple" in the method
       `_add_next_relevant_prime_multiple_if_needed`.
    """
    def generate(self, n):
        self.primes = [2, 3]
        self.prime_multiples = [2**2, 3**2]

        prime = 3
        # `primes` is seeded with the first two primes, so only `n - 2` more are needed
        for _ in range(n - len(self.primes)):
            prime = self._find_next_prime(prime)
            self.primes.append(prime)
        return self.primes[:n]

    def _find_next_prime(self, previous_prime):
        prime_candidate = previous_prime
        while True:
            prime_candidate += 2  # skip even numbers, which cannot be prime
            self._add_next_relevant_prime_multiple_if_needed(prime_candidate)
            if not self._has_any_prime_divisor(prime_candidate):
                return prime_candidate

    def _add_next_relevant_prime_multiple_if_needed(self, prime_candidate):
        # If we're testing primality of N, we only need to test divisibility
        # with primes up to sqrt(N)
        m = len(self.prime_multiples)
        prime = self.primes[m] if m < len(self.primes) else 0
        if prime_candidate == prime**2:
            self.prime_multiples.append(prime**2)

    def _has_any_prime_divisor(self, odd_number):
        return any(
            self._is_divisible_by_nth_prime(odd_number, n)
            # We only test odd candidates, which are never divisible by primes[0] (= 2),
            # so index 0 can be skipped
            for n in range(1, len(self.prime_multiples))
        )

    def _is_divisible_by_nth_prime(self, odd_number, n):
        # This is the implementation of the first optimization described above.
        # Notice that we don't ever need to reset `self.prime_multiples[n]`
        # because during prime generation we test only increasingly larger numbers.
        while self.prime_multiples[n] < odd_number:
            # `self.prime_multiples[n] + self.primes[n]` is even, so we can safely
            # skip ahead to the next odd multiple
            self.prime_multiples[n] += self.primes[n] + self.primes[n]
        return self.prime_multiples[n] == odd_number

def compute_primes(n):
    return PrimeGenerator().generate(n)

While I don’t claim this version is “perfect”—the _is_divisible_by_nth_prime method still has a side-effect, and the explanation of the optimizations is not detailed enough—I do hope you can agree that this version is much easier to understand for a future reader.

Using Python helps make the code less verbose, but the true improvements are in the introductory documentation, the more appropriate partitioning, and the comments that explain what the code cannot explain on its own.
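
As a small usage example (my addition, not part of the original listing), the compute_primes helper can be exercised like this:

if __name__ == "__main__":
    first_primes = compute_primes(10)
    print(first_primes)  # starts with 2, 3, 5, 7, 11, ...
    assert first_primes[:5] == [2, 3, 5, 7, 11]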

If this were part of an actual codebase, I would write a detailed post to explain how the optimizations work and remove all the explanatory comments, replacing them with just links to that documentation.

Conclusion

While I still think Clean Code can be a useful book for many novice programmers (when read with a critical mind), I believe there are now several books and resources that are just as useful for building a strong foundation in writing maintainable code, and students or beginner programmers would do well to read them for a well-rounded perspective.

The final takeaway from this analysis is that we should always beware of the perils of blindly following “rules” whose implications and limitations we haven’t fully internalized.

Further Reading

The following resources are good alternatives and complements to Clean Code if you’re interested in learning more about this topic:

  • 📖 99 Bottles of OOP by Sandi Metz. This book explains the process of writing good code, covering pros and cons of various approaches while walking you through detailed code examples.

  • 📖 Code Simplicity by Max Kanat-Alexander. This book compiles many of the author’s blog posts, in which he lays out his theory that complexity is the bane of all programmers and encourages people to strive to keep things simple.

  • 📖 The Art of Readable Code by Dustin Boswell. Very similar in spirit to Clean Code, and worth reading for the additional insights and examples it presents.

  • 🎥 The Clean Code Talks. This is a series of talks on “clean code” practices by Miško Hevery, Joshua Bloch, and other present and former Googlers. While it doesn’t cover many of the topics found in the books recommended above, it explores others that are equally important for writing maintainable code.



Written by W. Gómez, software engineer and machine learning enthusiast.