In my 20 years in the IT industry, I've noticed that although things like the level of programming abstraction have changed significantly, other challenges have remained essentially the same.
I've seen development progress steadily from fine-grained, low-level work to ever higher levels of abstraction. Early in my career, we were happy to move from the instruction level of assembly language into C. C functions served as units of programming, and we could reuse code by calling existing functions. The granularity grew coarser with the transition to C++ and OOP, where objects became the units of programming. This let us have objects operating on objects, permitting an even higher level of abstraction.
Java offered developers many utility libraries that handled common functionality. This promoted more code reuse, and an even higher level of abstraction.
Web services let development occur at the system level. Functionality is represented by existing systems, and the developer is mainly responsible for adding logic that ties the existing systems together in a way that produces the desired result. Again, this enables an increased amount of reuse at a higher level of abstraction.
But developers today struggle with the same essential challenges that troubled us 20 years ago. When I worked at the assembly level, every time I changed even a few instructions, I had to determine how these changes impacted the application, or else worry that my tiny changes might have broken the application's existing functionality. Assessing the impact of changes continued to be a struggle in C and C++.
With web services, this already difficult task became even more complicated. Before, the entire application was controlled by me and my team, so it was reasonable to assume that with thorough testing, I could understand the full impact of my code changes. Now, any modification might impact anyone connecting to my web service. Consequently, it's both more difficult and more critical to understand the impact of every modification.
Adding to the challenge, modifications are now expected faster and more frequently than ever. Previously, software didn't change frequently, and nobody expected us to reprogram the system overnight. Developers are now being asked to significantly modify part of a system, then redeploy it in a matter of days, or sometimes even hours. This might have been feasible when a system was one machine. However, with web services, such an update is likely to impact 20 different parts of your own system's infrastructure, plus the infrastructures of 500 others you've never met.
Many people have responded to these problems by chasing silver bullets. They fear that their changes will introduce bugs, and so want tools to find these bugs automatically. After 20 years of examining how and why errors occur, I believe this is the wrong response. Only a small class of errors can be found automatically; most bugs are related to functionality and requirements, and cannot be identified with just the click of a button.
At Parasoft, we've been struggling with this same problem for two decades, and we've learned that the only way to understand how each modification impacts functionality and requirements is to have robust regression test suites. Such test suites can alert you when code behavior changes, but they can't tell you whether each change results from a mistake or an expected functionality change. The human brain needs to review the results in context, by comparing the impacted code's current behavior to the expected behavior defined in the requirements.
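To make this concrete, here is a minimal sketch of the idea in Java. The `priceWithTax` method is a hypothetical stand-in for real business logic; the check pins down today's behavior so that a future modification which changes the result will fail loudly. Note what the check cannot do: it only reports *that* behavior changed, and a human must still decide whether the change was a bug or an intended update to the requirements.

```java
public class RegressionExample {
    // Hypothetical business logic under test (not from any real API).
    static double priceWithTax(double net, double rate) {
        return net * (1.0 + rate);
    }

    public static void main(String[] args) {
        // The expected value captures the behavior defined by today's
        // requirements. If a later code change alters the result, the
        // suite flags it; a person then judges whether that change
        // was a mistake or an expected functionality change.
        double result = priceWithTax(100.0, 0.07);
        if (Math.abs(result - 107.0) > 1e-9) {
            throw new AssertionError("behavior changed: " + result);
        }
        System.out.println("regression check passed");
    }
}
```

A real suite would contain thousands of such checks, run automatically on every build, so that only the failures reach a human for review.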
Our current mission is to address this problem by inventing technologies and strategies to support the brain as it performs this evaluation. We are building automated infrastructures that provide maximum automation for mundane tasks (compiling code, building and running regression test suites, checking adherence to policies, supporting code reviews, and so on) in such a way that each day the brain is presented with only the minimal information needed to determine whether yesterday's code modifications negatively impacted the application. Over the years, we've also learned that this automated infrastructure must be accompanied by a disciplined process, one that forces the brain to look at code and verify its correctness at the same time. This isn't easy, but it is possible. If accomplished, it can significantly improve developer productivity as well as product quality.