To this point we have taken the coordinating processes for granted; we haven't described how they are created and controlled. We could use an external mechanism for launching and managing processes, but because this is so fundamental, NWS includes its own: Sleigh. Sleigh can also handle certain kinds of "embarrassingly parallel" master/worker-style problems on its own, sparing users the need to work with NWS directly.
Sleigh launches generic "worker" processes on machines, which then wait for work to do. Here is the simplest scenario:
>>> from nws.sleigh import Sleigh
>>> s = Sleigh()
By default, Sleigh creates three workers. Let's see where the processes are actually running by asking each worker pulling the sleigh for its hostname:
>>> from socket import gethostname
>>> s.eachWorker(gethostname)
['newt', 'newt', 'newt']
The Sleigh method eachWorker runs the specified function once on each worker process, and returns the values in a list. We see that, by default, Sleigh starts three worker processes on the local machine (newt). Running our workers on the local machine is useful for testing, or if we happen to have a multicore computer, but we would like to be able to use other machines, too. So let's shut this down, and try something different:
>>> s.stop()
>>> from nws.sleigh import sshcmd
>>> s = Sleigh(nodeList=['hippo', 'newt', 'python', 'rhino'], launch=sshcmd)
>>> s.eachWorker(gethostname)
['rhino', 'hippo', 'newt', 'python']
This example assumes that users have correctly configured ssh to permit password-less login to the computers in the node list.
eachWorker is typically used to initialize workers (for example, by importing packages or loading common data from a file), but the real work in Sleigh is usually done by eachElem. This method takes as input a function and a list, and returns a list of the results of applying the function to each element in the input list. The number of tasks in the list need not equal the number of workers. Normally there are more tasks than workers, and the workers cooperate to compute all the tasks. We use eachElem to compute the same table of cubes as above:
>>> r = s.eachElem(lambda x: x*x*x, range(100))
>>> len(r)
100
>>> r[2:5]
[8, 27, 64]
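The semantics of eachElem are those of a parallel map. A serial sketch in plain Python (no NWS required; the helper name is ours, not part of the Sleigh API) produces the same result:

```python
# Serial sketch of eachElem's semantics: apply the function to each
# element of the input list and collect the results in order.
def each_elem_serial(fun, elems):
    # Sleigh distributes these calls across its workers; here we
    # simply run them one after another on a single machine.
    return [fun(x) for x in elems]

r = each_elem_serial(lambda x: x * x * x, range(100))
print(len(r))    # 100
print(r[2:5])    # [8, 27, 64]
```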
We see that the results are returned in order. In contrast to the original program, Sleigh takes care of the details of handing out tasks, collecting results, and starting/stopping the workers. eachElem can handle general functions with multiple fixed and varying parameters. For example:
given varying argument lists [1, 2, 3] and [11, 12, 13] and a fixed argument 99, eachElem would invoke f(1, 11, 99), f(2, 12, 99), and f(3, 13, 99). This functionality (which also extends to permutations of the argument list) can accommodate a variety of function signatures without requiring wrapper code or reworking of the functions themselves.
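The dispatch rule behind this can be sketched serially in plain Python (the helper names here are illustrative, not the Sleigh API): each varying parameter contributes one value per task, and the fixed arguments are appended to every call.

```python
# Serial sketch of multi-parameter dispatch: zip the varying
# argument lists together to form one task per position, then
# append the fixed arguments to each invocation.
def each_elem_multi(fun, varying, fixed):
    return [fun(*args, *fixed) for args in zip(*varying)]

def f(a, b, c):
    return (a, b, c)

# Mirrors the f(1, 11, 99), f(2, 12, 99), f(3, 13, 99) calls above.
print(each_elem_multi(f, [[1, 2, 3], [11, 12, 13]], [99]))
```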
Each task wakes up in a generic worker, whichever happens to be free. The worker is given the definitions from the module that contains the function being eachElem'd, plus any definitions built up by previous invocations of eachWorker (in the example, we used eachWorker to initialize some global variables). This implies that workers maintain their state across tasks, which can be useful but is also a potential source of bugs.
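The hazard is easy to demonstrate. The following sketch uses a hypothetical task function, not Sleigh code: because module-level state persists from one task to the next within a worker, a task that mutates a global changes the answers of later tasks on that worker.

```python
# Module-level state that, inside a Sleigh worker, would persist
# from one task to the next.
cache = []

def task(x):
    cache.append(x)       # side effect survives into later tasks
    return sum(cache)     # result now depends on task history

print(task(1))  # 1
print(task(2))  # 3, not 2: state from the first task leaked in
```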
By default, eachElem is blocking; that is, the invocation does not return until all results have come back from the workers. Optionally, eachElem can be invoked in asynchronous mode, in which case it immediately returns not the list of results but a SleighPending object. This object provides methods for querying the progress of the computation and for obtaining the final results.
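The blocking versus non-blocking distinction parallels futures in Python's standard library. The sketch below uses concurrent.futures as an analogy only, not the Sleigh API: a Future, like a SleighPending object, can be polled for progress and blocks only when the final value is requested.

```python
from concurrent.futures import ThreadPoolExecutor

def cube(x):
    return x * x * x

with ThreadPoolExecutor(max_workers=3) as pool:
    # submit() returns immediately with Future objects, much as
    # eachElem's asynchronous mode returns a SleighPending object
    # rather than the finished results.
    futures = [pool.submit(cube, x) for x in range(5)]
    # A Future can be polled for progress...
    print([f.done() for f in futures])
    # ...and blocks only when the final value is requested.
    results = [f.result() for f in futures]

print(results)  # [0, 1, 8, 27, 64]
```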