Not so long ago, if you coded natively and expected to offer your software on multiple operating systems, you knew how hard the road you'd chosen was. You knew you'd have to test under various OSs and use macro identifiers to selectively include or exclude code for a given platform. You also knew that you might have to debug different problems on disparate platforms. To minimize those issues, C and C++ developers learned a long list of techniques for avoiding inadvertently platform-specific code.
The sizes of data types, the packing of structures, the alignment of union members, the need for explicit casting: all these topics were well known and an automatic part of coding for experienced multiplatform developers. So was a principal best practice: coding "down the middle of the language," as Brian Kernighan elegantly put it. You didn't noodle around in the dark corners of any language unless you really had to. Undefined behaviors lurked there, poised to sink your code.
All the rules and caveats had one overarching goal: that a single code base could generate the right product with the correct look and feel on all the different platforms. Beyond this, it was expected that when you shipped the glorious, multiplatform package, it installed correctly on every platform with all dependencies resolved. Toolkits, such as X or Qt, handled the UI, and OS peculiarities were addressed by the façade pattern. Developers wrote an API that hid the OS implementation and then ported that API to various platforms. (I discussed C libraries that do this recently.)
There are still applications today that follow this simple philosophy. The Lua language and its runtime environment, for example, are written in portable ANSI C. They compile on almost any platform and (gosh!) run without modification. Now, then, let's try to find another widely used app that makes the same claim. Just compile and run… Let's see… There are others, but in fact they're few.
Far more common today is the practice of shipping a partial solution and expecting the user to supply the missing pieces. This trend is particularly visible in software written on Linux and later migrated to Windows. Very often, such packages require the installation of Cygwin or its derivative, MinGW, to run. That's not a port. That's saying, "We coded it on Linux and made no attempt to learn which GCC and OS features were unsupported on other platforms, so here, use a kludge and figure it out for yourself; we're too busy doing other things." The rationalization of this poor work is frequently bolstered by lavish attacks on Windows. Such is the case, for example, with Google's "systems language," Go. (See one of many such threads here.) Meanwhile, the lack of adoption of Go is emerging as an issue inside Google.
I submit this lack of traction stems partly from the fact that 90% of the PC market and 50% of the server market, give or take, don't have access to a native Windows port. And those users are not willing to reconfigure compilers, add utilities, change environment variables, and scatter outdated link libraries across their disks just to kick the tires on a language. For them, Go has, in effect, not been ported.
You have to assume the choice is intentional on Google's part. They've decided that Go won't be a universal systems language in the way that C was, but rather, it's the language Google will (might?) use internally for its systems. Any other users are outside the language's core constituency.
My point, though, is far wider than Go or even Linux. Other platforms have to play along, too. For example, Microsoft's compiler lags behind open-source tools such as GCC in language features. It, too, shares in the blame by obstructing ports to Windows. And as you can see from my recent interview with Microsoft's Herb Sutter about the state of C and C++, making the compiler handle code ported from other operating systems is not even a minor priority. It's not a goal for Microsoft, even though it would mostly require filling in the missing pieces of the company's implementation of the language standards. (For a fuller language implementation with equal-or-better optimization, use Intel's C++ compiler. It's plug-compatible with Visual C++ and with Microsoft's binary formats.)
The one realm of effortless portability has been VM-based environments, such as the much-maligned JVM. Small tweaks are occasionally needed to make Java bytecode run ideally on a given platform, but by and large the JVM does deliver true portability. "Write once, run anywhere" is a goal that has largely been achieved.
The trouble is that, as many pundits put it, we're just entering into a C++ Renaissance (the Herb Sutter interview I referenced above describes the causes and nature of this phenomenon). The renaissance is in part driven by the need for computing power on mobile devices, where portability counts for nothing. Most vendors who ship products on several mobile platforms go native with each one.
If portability continues to be accorded the little value it receives today from Windows, Linux, and mobile developers, I expect this sudden renaissance to be short-lived. Eventually, the balance, which by nature favors the cost of developers' time, will tilt back toward productivity. Then native code, for lack of portability skills among developers, will return to being a throwback: a curiosity for speed freaks who are happy running on just one platform.
— Andrew Binstock,
Editor in Chief