Two issues would emerge as crucial pinions of program design: the task itself and the synchronization of tasks. Right now, we don't think in tasks unless we're working on server apps. We don't have a good sense of how big or small tasks should be, how specific or generic to make them, and so on. We also don't quite know how to apply the rules of task design and implementation. What are the smells and anti-patterns when it comes to tasks? All this we don't know well, because we don't yet think in these terms except for narrowly circumscribed activities.
The second skill that would become necessary is the synchronization of results. Given that many programs ultimately seek a result that appears to be sequential, coordinating asynchronously generated results requires a recombinational step to produce the correct output. Sequencing is inherently opposed to parallelism, so techniques that diminish the penalty of synchronizing would become increasingly important. Likewise, algorithms that optimize the recombination of results, possibly in novel ways. Such algorithms are already at work in most modern processors: today's CPUs execute instructions out of order, but generate results that are guaranteed to accurately reflect the behavior of the instructions had they been executed in sequential order.
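The CPU analogy can be made concrete with a reorder buffer, the structure out-of-order processors use to retire instructions in program order. The sketch below is my own minimal illustration, not code from any particular processor or library: results arrive tagged with a sequence index in whatever order the workers finish, and `commit()` releases only the prefix that is now contiguous, so consumers always see results in the original sequential order.

```cpp
#include <cstddef>
#include <map>
#include <vector>

// Reorder buffer: results may complete in any order, tagged with a
// sequence index; commit() releases them strictly in index order,
// mirroring how an out-of-order CPU retires instructions in order.
class ReorderBuffer {
public:
    // Accept a result that was produced out of order.
    void complete(std::size_t index, int value) { pending_[index] = value; }

    // Drain every result that is now contiguous with the commit point.
    std::vector<int> commit() {
        std::vector<int> out;
        for (auto it = pending_.find(next_); it != pending_.end();
             it = pending_.find(next_)) {
            out.push_back(it->second);
            pending_.erase(it);
            ++next_;
        }
        return out;
    }

private:
    std::map<std::size_t, int> pending_;  // completed but not yet committed
    std::size_t next_ = 0;                // next index eligible to commit
};
```

The same idea generalizes to any fan-out/fan-in pipeline: workers run unordered for throughput, and only the thin commit step pays the cost of sequencing.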
The prime benefits of these skills when applied to programming will, of course, be performance and scalability. However, I think debugging will be facilitated, too, as tasks become the primary programming and debugging unit. And certainly testability will greatly improve.
The biggest obstacle to this approach, which I project will eventually become mainstream, is our current lack of familiarity with thinking about problems or solutions this way. Right now, true task-based programming is confined to data-intensive or I/O-intensive programs on servers, rather than being applied to all programs. As we begin to shift our thinking and as languages add more assistance for parallel programming, we will, I expect, migrate substantially from objects to tasks as the core unit of software development.
Reader John Revill describes his implementation of task-based programming in response to my previous editorial:
"I have arrived at a very similar point of view over the last few years.
I have several implementations in place designed and built around fine-grained tasks driven by a thread pool. One application, for example, distributes mine-scheduling tasks across a WAN utilizing desktop & server idle time and has been in production for approximately four years; another is a peer-to-peer camera-sharing application; another, a package manager. To date, all have been built in C++ on Windows. The choice of C++ has been happenstance, and I don't see any reason why a .NET implementation wouldn't work.
All the applications use I/O completion ports for task queuing. The general flow is that a task arrives in the I/O completion port queue and is picked up by the next available thread, based on information contained in the Completion Key and Overlapped data structure, and is routed to the appropriate task handler. The task consists of a number of steps in a sequence. When the sequence is complete, or yields, the current thread returns to the queue for more work. Tasks can be both externally generated (for example, receipt of an HTTP request) as well as internally generated by currently running tasks (such as a progress/status message posted to an output queue while processing continues).
Tasks encapsulate relatively fine-grained logical actions. For example, receive HTTP request, resolve request handler, post response. The steps within a task are a 'micro' grain, if you will. Each step is a function that performs a simple action, such as opening a WinHttp session or creating a file. Deep call chains are avoided as much as is practical. Each step returns to the dispatcher, which transfers control to the next step. A task can be visualized as a circular arrangement of steps around the dispatcher.
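The step-and-dispatcher arrangement can be sketched as follows; the type names and the three-way step result are assumptions of mine, invented to illustrate the shape, not John's actual interfaces. Each step does one small piece of work and returns control, and the dispatcher advances through the ring until the task yields or finishes.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Each micro-grained step does a small piece of work and reports
// whether the dispatcher should continue, yield the thread, or stop.
enum class StepResult { Continue, Yield, Done };

// A task is a sequence of steps plus a cursor marking the next step.
struct Task {
    std::vector<std::function<StepResult()>> steps;
    std::size_t current = 0;
};

// The dispatcher: run steps in order until the task yields or
// completes. A yielded task would be re-posted to the work queue so
// the thread can pick up other packets in the meantime.
bool dispatch(Task& task) {
    while (task.current < task.steps.size()) {
        StepResult r = task.steps[task.current]();
        ++task.current;                            // step is finished
        if (r == StepResult::Yield) return false;  // resume later
        if (r == StepResult::Done) break;          // early completion
    }
    return true;  // task finished
}
```

Because every step returns to this single point, the dispatcher is the obvious place to hang instrumentation and exception handling, which is exactly the evolution John describes next.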
In addition to better matching the application to the processing capabilities of the machine, I have found other benefits. The dispatcher is a natural central point to attach additional smarts, such as instrumentation, exception handling, and diagnostics. Over the course of several implementations, the dispatcher has evolved into a small VM with data structures for describing the tasks and their constituent steps. A useful side effect of this evolution has been the separation between the application protocol, that is, the set of logical tasks versus the implementation of the micro-grained steps. I've found [that] the step routines tend to stabilize fairly quickly and remain so. Most of the change is at the protocol level, which seems to be appropriate. The protocol headers have also become a useful shared design and documentation artifact.
At present, the VM supports some simple flow-control constructs; throw, catch, and finally; and assertions. As it has evolved as an adjunct to native C++ code, it also uses native data structures and the syntax doesn't support data definition. Its focus has been on defining sequencing & flow control for native functions using native data structures. It is driven by a set of macros and the resulting logic is compiled directly into the executable. This is a useful capability, but there is a plan to also develop an external compiler with a cleaner syntax.
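To give a flavor of the macro-driven style described, here is a hypothetical sketch in which the macros expand into a static table of named steps compiled directly into the executable; the dispatcher then walks the table at run time. Every name below is invented for illustration and implies nothing about John's actual macro set.

```cpp
#include <string>
#include <vector>

// A compiled-in step: a name for diagnostics plus a native function.
struct Step {
    const char* name;
    void (*fn)(std::string&);
};

// Macros that let the task "protocol" read declaratively while the
// steps remain ordinary native functions (names are hypothetical).
#define TASK(name) static const std::vector<Step> name = {
#define STEP(fn)   { #fn, fn },
#define END_TASK() };

// Micro-grained native step functions.
static void open_session(std::string& log)  { log += "open;"; }
static void send_request(std::string& log)  { log += "send;"; }
static void close_session(std::string& log) { log += "close;"; }

// The protocol level: sequencing defined separately from the steps.
TASK(http_fetch)
    STEP(open_session)
    STEP(send_request)
    STEP(close_session)
END_TASK()

// A minimal dispatcher: walk the compiled-in table in order.
std::string run_task(const std::vector<Step>& task) {
    std::string log;
    for (const Step& s : task) s.fn(log);
    return log;
}
```

This separation is what lets the step routines stabilize while the protocol keeps changing: editing the `TASK` block reorders or swaps steps without touching their implementations.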
I've been fortunate in a number of cases to have customers willing to let me do as I see fit, and have been able to drive the design and implementation. In other cases, I've had to work much harder to get the concept across. Even with some successful implementations as a basis for discussion, the approach wins few friends. As you allude to, the sense of profound unfamiliarity engendered by anything not sanctioned and supported by the chosen framework or language seems to be a fairly effective barrier to adoption. Being able to add another advocate can only help."