Square Pegs, Round Holes, and Playing Tennis with Hockey Sticks
I'm known, and often teased, for missing a patch or two. I get caught using Browser 6.x when everybody else is running Browser 9.x. It's definitely the case that my version of the Linux kernel will be a tad older than this year's model. Alas, the pain and humiliation of marching to a different drummer.

First, there are the security concerns. If you're running Browser 6.x, you put the entire company at risk because version 6.x of the Browser has such-and-such security exploit. Then there are the compatibility concerns. If you're running version X, we won't be able to exchange programs and documents, and you put the entire company at risk because in some unspecified mission-critical scenario we won't be able to exchange or share work. What's never even a consideration is why I'm still using Browser 6.x, or why my kernel is not the latest release. It's all about cost savings and the company standards. Make no mistake about it: these days the latest upgrade for the specified product on the list is the company standard.

I've seen companies and clients that are so busy standardizing on Vendor A's product, and then so busy patching and updating that product, that they lose customers, profit, and sight of why they have the software in the first place. I'm aware of more than a few organizations that have entire departments dedicated to narrowing the software tools the organization uses down to one; once they have the one right software tool, their next responsibility is to keep it patched and upgraded. In fact, they get so efficient with the narrowing, upgrading, and patching that they turn it over to the vendor, outsource it, or offshore it, and the whole thing is just done automatically (right down to the money transfer). Never mind that the organization's mission evolves or expands and the software no longer does, or perhaps never did do, the work it was envisioned to do.
The important thing is that we're all on the same version, that version is the latest version, and that latest version has been patched. We've got the latest-patched-version of Vendor A's enterprise-one-size-fits-all tool, and better yet, the latest-patched-version of Vendor A's enterprise-one-size-fits-all-tool is the organization/department standard.
But folks, here's the deal: the software tools, techniques, and standards have to follow the requirements, specifications, and problem solutions. If the next version-upgrade-patch diverges from our original software requirements, specifications, or problem solution, what should we do? Actually, what choice do we have? Upgrade to the next version, of course! Even if it means we have to change our original software requirements, specifications, and problem solutions. Well, at least I've seen it go down like that in more than a few organizations. One of the reasons we use open source software is because we make customizations that solve serious problems or that do special work. We like open source because it gives us a solid framework to start from. Now if we've modified the kernel to do some special work, and we've added special capabilities to our Browser 6.x, and then a new version of the kernel and the browser comes out with significant divergence, what are we to do? Well, our software requirements, specifications, and problem solutions trump any new version, upgrade, or patch. Of course, if we can upgrade or patch without violating our software requirements, specifications, or problem solutions, then we consider it. Similarly, the new enterprise-one-size-fits-all tool has to be able to meet our software requirements, not the other way around. The fact that it's become the new company standard is irrelevant.
You've probably guessed it by now: there's a mismatch going on. Your software requirements, specifications, and problem solutions change at a much slower rate than the latest newfangled version of the enterprise-one-size-fits-all tool. That enterprise-one-size-fits-all tool often becomes the standard because of discount licensing, industry politics, or simply keeping up with the Joneses. Many times the software requirements, specifications, and problem solutions are either secondary considerations, or not considerations at all. So what the software development group ends up with are nice shiny new tools that may be useful on some aspects of the project, but extremely difficult to use on others. But because it's the organization/department standard, it's used anyway. Obviously, this is a bad situation whenever it occurs. But when we are talking about solving problems that involve concurrency and parallelism, or building multithreaded or multiprocessing architectures, the problem is greatly magnified. Here are some of the reasons:
- Software designs that require concurrency have all of the same challenges that sequential designs have, plus a new vocabulary, new implementation techniques, new sources of errors and exceptions, new algorithms, new data structures, totally new testing methodologies, and so on.
- Software maintenance of systems that contain concurrency is more difficult than maintenance of systems that don't, because of the potential to negatively impact performance if the model is incorrectly modified (or not well understood), or to introduce data races (result corruption) or deadlock, turning a correct non-deterministic system into an incorrect one. Also, any new code related to the parallelism tends to increase the complexity of testing and debugging exponentially, which means the test case data grows dramatically.
- The architectures of systems that require concurrency are very resistant to change. The problems that were solved by object-oriented design, namely procedure-driven complexity, are re-introduced in many architectures that require concurrency.
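The data-race point above is easy to illustrate. Here's a minimal Python sketch (all names are hypothetical, not from any particular codebase): several worker threads bump a shared counter, and only a lock keeps the read-modify-write from interleaving. Delete the lock and the result can silently come up short on some runs and not others, which is exactly the kind of non-determinism that makes test case data balloon.

```python
import threading

# A shared counter updated by several worker threads. The read-modify-write
# inside the loop is not atomic, so without the lock concurrent workers can
# interleave and lose updates -- the data race (result corruption) above.
counter = 0
lock = threading.Lock()

def safe_increment(n):
    """Correct version: the lock serializes each read-modify-write."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

def run_workers(target, n_workers=4, n_increments=25_000):
    """Spawn n_workers threads all running `target`, return the final count."""
    global counter
    counter = 0
    threads = [threading.Thread(target=target, args=(n_increments,))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

# With the lock the result is deterministic: 4 * 25_000 = 100_000.
# Remove the `with lock:` line and the total may come up short on any given
# run, yet no single test run is guaranteed to catch it.
print(run_workers(safe_increment))
```

The unsettling part is that the buggy version often passes its tests anyway; the failure depends on thread scheduling, not on your inputs.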
As we are involved with more and larger multithreaded or multiprocessing systems, Tracey and I are encountering multiple-paradigm parallelism: that is, scenarios where MIMD and SIMD are both used in the context of, say, a Boss-Worker architecture, or Boss-Worker architectures that are pipelined where some stages of the pipe use MIMD while others use SIMD. This combining of parallel paradigms can lead to exotic architectures that are very resistant to change. What happens when your requirements or specifications call for one of these exotic architectures but your organization/department standard enterprise-one-size-fits-all tool is not up to the job? The inertia of getting to the next version, the next upgrade and patch, and standardizing on some politically installed tool is so great that we are seeing developers forcing square pegs into round holes and playing tennis with hockey sticks. Actually, with enough practice, money spent on specialized hockey sticks, and enough PowerPoint presentations, the tennis game starts to improve a little, and with enough overtime and re-interpreting of the original requirements and specifications, the hockey-stick-and-tennis thing starts to produce results.
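To make the Boss-Worker shape concrete, here's a minimal Python sketch (function and variable names are mine, purely for illustration): a boss thread carves data into chunks and hands them to a pool of workers over a queue. The workers run independently of one another, which is the MIMD flavor, while the stage each one executes applies the same operation uniformly across its chunk, which is the SIMD flavor. Even in this toy, notice how much machinery (queues, poison pills, result reordering) is welded into the structure; that is the rigidity the paragraph above is describing.

```python
import threading
import queue

def simd_style_stage(chunk):
    """SIMD-flavored stage: one operation applied uniformly to every element."""
    return [x * x for x in chunk]

def worker(tasks, results):
    """MIMD-flavored worker: each thread independently pulls and runs tasks."""
    while True:
        chunk = tasks.get()
        if chunk is None:            # poison pill: boss says shut down
            break
        results.put(simd_style_stage(chunk))

def boss(data, n_workers=3, chunk_size=4):
    """Boss carves the data into chunks and distributes them to the pool."""
    tasks, results = queue.Queue(), queue.Queue()
    pool = [threading.Thread(target=worker, args=(tasks, results))
            for _ in range(n_workers)]
    for t in pool:
        t.start()
    for i in range(0, len(data), chunk_size):
        tasks.put(data[i:i + chunk_size])
    for _ in pool:
        tasks.put(None)              # one poison pill per worker
    for t in pool:
        t.join()
    out = []
    while not results.empty():
        out.extend(results.get())
    return sorted(out)               # chunk completion order is non-deterministic

print(boss(list(range(8))))          # squares of 0..7, sorted
```

Swapping the stage function, the chunking policy, or the number of workers each touches a different seam of this structure, which is why retrofitting such an architecture onto a tool that wasn't designed for it hurts so much.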
I came up on a steady diet of right-tool-for-the-job and measure-twice-cut-once. From my vantage point, the software requirements, specifications, and problem/model solutions dictate what the standard tool set should be, what version of software we should be using, and what release of the kernel we should be at. Now that we're dealing with multi-paradigm, massively parallel requirements, specifications, and designs, we have to really slow down and take a second look at the vicious cycle of consolidation to a single vendor, to a single tool, and then a constant schedule of version upgrades and patches. Our software systems are already too complex for those kinds of shenanigans. The addition of massive parallelism and concurrency just makes it out of the question. I'll take tennis rackets for my tennis, please, no matter how politically correct or seductively licensed the latest version of hockey sticks.