Recovering the architecture
Here's a situation that I've seen so many times that I've lost count. Someone builds a system with a clean architecture. After a while, users realize that there are important things that the system cannot do. They ask for new features. Those features do not fit well in the architecture, but they get shoehorned in anyway as a patch--because they're important.
Lather, rinse, repeat. After a while, the system has so many patches on top of patches on top of patches that no one understands it any more. By then, no one is left who understands the original architecture, so someone gets the unenviable task of "recovering the architecture" -- i.e. figuring out by inference how the original system must have been designed.
Of course, the next task, once the architecture has been recovered, is to come up with a new architecture that supports not only everything the original architecture did but also all the features that have been patched into the system in the meantime. Sometimes that effort fails because the result is so complicated that no one understands it. However, the more usual outcome is a system that mostly works, but doesn't quite.
Such systems tend to fail in two ways. First, every large system has bugs, which means that some users of the new system will find that things they were doing with it, which used to work, no longer do so. Second, and perhaps more important, there are probably bugs in the architecture recovery itself: the new architecture is fundamentally incapable of doing something that the old architecture could do, and some users consider that capability to be important. So the old capability is patched into the new system in the name of backward compatibility, and the cycle begins again.
This architectural process seems always to result in systems of ever-increasing complexity. Not only do such systems require ever-increasing effort to keep working, but the chance of serious security problems keeps going up as well. Food for thought: Is there any way of avoiding this complexity creep in practice? Some people will surely say "Don't worry about architecture! Do the simplest thing that works and keep refactoring as you go! Just be sure your tests cover all the bases." But testing can only show the presence of bugs, not their absence--so how can even the most systematic testing prove that a system has no security problems?
Can anyone convince me that we have made any real progress toward solving this problem in the past few decades?