searchtraffic = T + (0.4 * searchtraffic) + (0.2 * specifytraffic) + (0.6 * addtraffic)
specifytraffic = (0.2 * searchtraffic) + (0.3 * specifytraffic)
addtraffic = (0.1 * specifytraffic)

Solving the second equation gives specifytraffic = 0.2/0.7 * searchtraffic, and substituting that into the third gives addtraffic = 0.1 * 0.2/0.7 * searchtraffic.
Substituting both into the first equation, searchtraffic = T/(1 - (0.4 + 0.2*0.2/0.7 + 0.6*0.1*0.2/0.7)) = T/(1 - (0.4 + 0.057 + 0.017)) = T/0.526 = 10,000/0.526 ≈ 19,011 arrivals per second. We'll call this 20,000 arrivals per second to give us some wiggle room.
specifytraffic = 20,000 * 0.2/0.7 ≈ 5,714 arrivals per second, which we'll call 6,000 arrivals per second.
addtraffic = 20,000 * 0.1 * 0.2/0.7 ≈ 571.4 arrivals per second, which we'll call 600.
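As a sanity check, the whole derivation fits in a few lines of Python. Note that the exact search figure is about 19,022 arrivals per second (19,011 comes from rounding the denominator to 0.526), and that the exact specify and add figures are lower than the ones above because the text pads search up to 20,000 before computing them:

```python
# Solve the traffic equations by substitution, for T = 10,000
# external search arrivals per second.
T = 10_000

# specifytraffic = 0.2*searchtraffic + 0.3*specifytraffic
#   => specifytraffic = (0.2/0.7) * searchtraffic
# addtraffic = 0.1 * specifytraffic
# Substituting both into the searchtraffic equation and solving:
search = T / (1 - (0.4 + 0.2 * (0.2 / 0.7) + 0.6 * 0.1 * (0.2 / 0.7)))
specify = (0.2 / 0.7) * search
add = 0.1 * specify

print(round(search), round(specify), round(add))  # 19022 5435 543
```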
Because each task requires 0.1 seconds on a single server, one server can handle at most 10 tasks per second; to leave some headroom, we plan for an arrival rate of roughly 9 per second per server (we could actually push each server somewhat harder, since 10 per second is the hard limit). So we need roughly 2,955 servers, with 2,220 going to search, 660 to specify, and the remaining 75 going to add. If you got any numbers like this, you are on the right track.
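The pool sizes can be checked the same way; rounding each pool up at 9 arrivals per second per server gives 2,223 + 667 + 67 = 2,957, close to the rounder split quoted above:

```python
import math

# One server finishes a task in 0.1 s, so its hard limit is 10 tasks/sec;
# we provision each pool for roughly 9 arrivals/sec per server.
per_server = 9

search_servers = math.ceil(20_000 / per_server)
specify_servers = math.ceil(6_000 / per_server)
add_servers = math.ceil(600 / per_server)

print(search_servers, specify_servers, add_servers)  # 2223 667 67
```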
In the case of a service time that is appreciable, e.g. 2 seconds, a slightly more complex analysis is required (see http://cs.nyu.edu/courses/fall09/G22.2434-001/capplanrulethumb.html), but the idea is simple. There are two components to response time: waiting time + service time. Buying N servers decreases waiting time by roughly a factor of N, but doesn't decrease service time when each task must be handled by a single server. Therefore, to estimate the response time, compute the waiting time as if you had a single server that was N times as fast, and then add in the service time. The notes should make this clear.
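That rule of thumb can be sketched as follows; this is a minimal illustration assuming M/M/1-style queueing, and the function name and parameters are ours, not from the linked notes:

```python
def response_time(arrival_rate, service_time, n_servers):
    """Fast-server approximation: compute the queueing delay as if the
    N servers were a single server N times as fast, then add back the
    full (unshrunk) per-task service time."""
    fast_service = service_time / n_servers   # hypothetical fast server's service time
    rho = arrival_rate * fast_service         # utilization; must stay below 1
    if rho >= 1:
        raise ValueError("arrival rate exceeds total capacity")
    wait = rho * fast_service / (1 - rho)     # M/M/1 waiting time of the fast server
    return wait + service_time

# 2-second tasks, 10 servers, 1 arrival/sec: the wait is tiny (0.05 s)
# but the full 2 s of service time remains.
print(round(response_time(1, 2, 10), 4))  # 2.05
```

Adding servers shrinks only the `wait` term; the answer can never drop below `service_time`, which is exactly the point of the rule.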