When I was young and first got into programming seriously, I reveled in working at the lowest levels of code. I loved (loved!) assembly language and could think of no greater happiness than doing clever things with registers and writing tight, fast code that squeezed into the minimum amount of RAM and delivered great performance.
As time passed, the lack of portability and the burden of learning the ins and outs of every new architecture got me interested in C. In C, I thought I'd finally found the gateway to paradise: portable assembly language. What could possibly be better?
But after nearly 10 years of working in C on my personal projects, I tired of never being able to bring anything to a close unless the project was a minor utility. Anything bigger never fulfilled my original ambitions. Lopping off features was a common practice. And even the more modestly resized projects, after unsparing hours in the debugger, would eventually be consigned to the groaning shelf of unfinished work.
I had to move on. I went to Java, which drew me with its siren song of portability, performance, excellent tools, and enormous libraries. Not having to code one more linked list had remarkable appeal. So did the superior testing tools that would allow me to write code and test it quickly. This enabled me to avoid spending lots of time in the debugger. Ever since, Java has been my base language for personal projects, both large and small. (I am not implying that it will be my final destination. I continue to be troubled by the inconvenience of delivering software to non-developer users, who are not at all comfortable downloading or updating Java. A garbage-collected, portable, binary language with good libraries that is not as complex as C++ has great appeal to me. Which is certainly why I feel so interested in Go at the moment.)
As time has passed, though, I've found myself constantly frustrated by the feeling that no matter how much I test my code, I can't be certain that it's right. The best I can say is that the code is probably right. But when I write code for others to use, I want it to be completely reliable. This concern has led me to embrace tools that enforce the correctness of the code.
Long ago, for example, I adopted Bertrand Meyer's concept of design-by-contract (DBC), which suggests that every function should test for pre-conditions, post-conditions, and invariants. In Java, I do this with the excellent Guava library. So my methods tend to have tests, especially at the beginning where each parameter is carefully checked. I test invariants and post-conditions primarily in unit tests, which is probably not ideal, but it moves some of the validation clutter out of the code.
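The parameter-checking style I describe looks roughly like this. In my own code I use Guava's Preconditions; to keep the sketch self-contained, a minimal stand-in plays that role here, and the class and method names (Account, withdraw) are invented for illustration.

```java
// Sketch of design-by-contract-style checking at the start of each method.
// Contracts.checkArgument is a minimal stand-in for Guava's
// Preconditions.checkArgument, so the example compiles without the library.
final class Contracts {
    static void checkArgument(boolean condition, String message) {
        if (!condition) throw new IllegalArgumentException(message);
    }
}

class Account {
    private long balanceCents;

    Account(long openingCents) {
        // Pre-condition: checked before any state is established.
        Contracts.checkArgument(openingCents >= 0,
                "opening balance must be non-negative");
        balanceCents = openingCents;
    }

    void withdraw(long amountCents) {
        // Pre-conditions: each parameter is carefully checked first.
        Contracts.checkArgument(amountCents > 0, "amount must be positive");
        Contracts.checkArgument(amountCents <= balanceCents,
                "insufficient funds");
        balanceCents -= amountCents;
        // Invariant / post-condition: the balance never goes negative.
        assert balanceCents >= 0;
    }

    long balance() { return balanceCents; }
}
```

The pre-conditions fail loudly at the point of the bad call, rather than letting a bad value wander deeper into the program; the invariant, as noted above, can also live in a unit test to keep the method body uncluttered.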
I'd like to go beyond this. First, I wish the languages provided better support for correctness. For example, I think floats should default to an uninitialized value of NaN (not a number). I wish integers had a NaN equivalent and that all primitive data types and strings had an invalid value to which they would be initialized by the language. A whole class of errors would go away. (In the absence of this, the Java compiler's ability to identify uninitialized values is undeniably helpful.)
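The class of errors I have in mind is easy to demonstrate. Java initializes numeric fields to zero, so a field that was never actually set looks like a legitimate value. The names below (Reading, calibration) are hypothetical, chosen only to illustrate the point:

```java
// A field the programmer forgot to set. Java silently defaults it to 0.0.
class Reading {
    double calibration; // never assigned; defaults to 0.0, not NaN

    double adjusted(double raw) {
        // Silently returns 0.0 -- a plausible-looking but wrong result.
        return raw * calibration;
    }
}
```

If the default were NaN instead, `adjusted()` would return NaN on first use and the missing initialization would announce itself immediately, rather than propagating a quietly wrong number through the rest of the computation. (Local variables, unlike fields, do get the compiler's uninitialized-value check I mention above.)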
But moving beyond these simple levels is where things become a lot more difficult. I have essentially two major options. The first would be to code in some kind of modeling language, then use a tool such as Conformiq's products (or others) to generate thousands of tests, then automatically generate the code from the models and run all the tests. This is a highly effective way of writing solid code, and it is favored widely in Europe. However, I'm a coder at heart, and I'm not sure working at a diagrammatic level is really my cup of tea.
The second alternative is to use languages that strongly support correctness. There are not many. Ada goes farther than Java in this regard, and Eiffel perhaps farther yet. But probably the one that does the most is SPARK, an Ada subset. Developed in the U.K., SPARK uses a system of annotations embedded in comments at the start of each method. The code can then be verified by assorted tools that check it against the annotations. Free and commercial versions of SPARK tools exist today.
Specifying code functionality in this way is undoubtedly an extra step with lots of additional typing involved. But it has advantages beyond proving correctness: To start with, it more fully documents the code. And, as Edmond Schonberg (formerly a professor of computer science at NYU and now an executive at AdaCore) mentioned to me in a pick-up conversation, by writing out the specifications before the code, coders accrue most of the same benefits that tests deliver to TDD aficionados: the specifications force you to spell out what you're doing before you begin to code. "Lots of defects get caught by this simple step," according to Schonberg. I believe it.
And I rue that in the U.S., only a few industries (mostly embedded, automotive, and avionics) are interested in high levels of code correctness. It's a topic that is dismissed as a luxury by mainstream programmers because it appears to interfere with the ability to deliver software quickly.
I have some sympathy for this point of view. If I were to indulge my desire for correctness more deeply, I would certainly want a scripting language as a secondary tool for testing out ideas and banging out code that doesn't need to be perfect. I don't want correctness to push me back to the days of coding in C, when nothing was ever completed. However, as the recent Knight Capital disaster on Wall Street demonstrates, the general resistance to correctness among business programmers is a conceit that cannot always be indulged without the occasional dire consequence.