

Adventures in Offshoring: Global Success Stories

In just five years, offshoring has shaken the U.S. IT industry out of hibernation into new speculation and activity. Unlike the roaring '90s, however, this new push eschews gold-plated projects and packaged solutions in favor of theoretically low-cost, high-return custom work done in countries where developers earn a fraction of U.S. salaries. While most offshoring has been dominated by the software services giants of Bangalore, India's southern technology center, these stories show that some companies take an even more do-it-yourself approach, working with ad hoc partnerships or firms far smaller and cheaper than Tata Consultancy Services, Wipro or Infosys—not to mention shops in Russia, Eastern Europe and Latin America.

Be forewarned: Some readers may find the clinical, euphemistic approach to staffing described herein offensive. However, our duty is to inform as well as opine. We think the value of publishing these articles exceeds the controversy they may inspire. Do the authors' economic and practical arguments for global development hold water? Why or why not? Write us at [email protected]

The Editors

Pumping Up Productivity

Managing projects in Mexico, Moscow and India

While holding several senior management positions in software development organizations, I've experienced both sides of the outsourcing equation. In the late '90s, I built contracting teams to create Web solutions for customers. Then, in 2000, I joined a medium-sized public company in the mobile technology space, and my focus shifted to the sourcing and management of partners to support the firm's various product groups. I'll withhold the company names to protect the innocent and allow myself to speak freely.

Our initial motivation for engaging a five-member team as a partner in India was the prospect of working with engineers experienced in the directory space. Starting in 2001, we were under pressure from declining revenues and a bearish economy, so we expanded in India and began searching for additional engagements in Mexico and Russia. I've worked with outsourcing partners in Pune, India—where my initial engagement grew to 150 engineers; Monterrey, Mexico, for a trial engagement in handset testing; and Moscow, where one business unit performed test execution. I handled projects for carriers in the mobile messaging space, which included test-plan development, automation and execution, components for mobile and messaging products, and product feature extensions. That the remote teams grew to carry nearly a quarter of all product development efforts at a company with $300 million in revenue indicates substantial success.

Our key drivers for going overseas were typical: We sought to reduce costs, speed time to market with global 24/7 work schedules, free up internal staff, gain immediate access to skilled staff, and dip into a flexible pool of workers that could match the ebb and flow of our business needs.

Team Composition
A typical team included nine junior to senior software engineers led by one technical manager per product area in projects with development and QA aspects. The complex QA for carrier-grade applications often required test automation developers. Some localization projects also used additional partners in Europe, creating true global teams where time zones became hard to cross. These teams depended on close collaboration with the core teams in the U.S. for design and architecture decisions.

After my initial success with three projects that I'd managed directly, the goal was set to save $5 million in annual operational expenditure by expanding the offshore teams. While I have yet to see a complete, convincing ROI analysis, our base price per engineer was low enough to realize immediate cost savings.

In Search of ROI

To find a valid return-on-investment figure, consider a host of factors.

To determine ROI, I attacked multiple dimensions of the outsourcing equation. The question of how offshore teams compare to U.S.-based teams in productivity can be answered only by examining three types of engagements: Staff augmentation projects, subcontracted projects and joint ventures.

For these three models, we found it took up to two engineers in India to do the work of one U.S. engineer. The calculated leverage nevertheless indicates a two to four times more cost-effective team in India, based on an average annual U.S. salary of $110,000 (fully loaded $187,000) versus an Indian contractor salary of $12,000 (a $33,000 billing rate, or $42,000, including travel and a percentage of U.S. operational costs).

In other words, the leverage of outsourcing the work is calculated as the U.S. fully loaded cost divided by (offshore cost × efficiency factor), where the efficiency factor is the number of offshore engineers needed to match one U.S. engineer.

For example, for the staff augmentation model, the leverage is $187,000 / ($42,000 × 2), or 2.2: the offshore team is 2.2 times more cost-effective than a U.S. team.

Model                Efficiency   Leverage
Staff augmentation   1:2          1:2.2
Subcontractor        1:1.5        1:3
Joint venture        1:1          1:4.4
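The table's leverage figures fall out of the formula above with a few lines of arithmetic. Here's a minimal sketch (class and method names are ours, not from the article; the joint-venture result comes to 4.45, which the article rounds down to 4.4):

```java
// Sketch of the article's leverage arithmetic:
// leverage = U.S. fully loaded cost / (offshore cost * efficiency factor),
// where the efficiency factor is how many offshore engineers are needed
// to match one U.S. engineer.
public class Leverage {
    static final double US_COST = 187_000;       // fully loaded U.S. salary
    static final double OFFSHORE_COST = 42_000;  // billing rate plus travel and overhead

    static double leverage(double efficiencyFactor) {
        return US_COST / (OFFSHORE_COST * efficiencyFactor);
    }

    public static void main(String[] args) {
        System.out.printf("Staff augmentation: %.1f%n", leverage(2.0));
        System.out.printf("Subcontractor:      %.1f%n", leverage(1.5));
        System.out.printf("Joint venture:      %.1f%n", leverage(1.0));
    }
}
```

Note that the joint venture leads even though its engineers are no faster per head: with full ownership transferred, no U.S. engineer must be shadowed at all, so the denominator's efficiency factor drops to 1.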

We measured the productivity of remote teams using a set of metrics based on performance (the number of bugs closed, test cases developed, lines of unit-tested code and so on), quality (the number of defects) and schedule adherence. Unfortunately, there's little comparative data available in most U.S. software companies. So we established the efficiency based on our review cycles and feedback from U.S. managers and looked at the overhead created compared to U.S.-based teams.

—M. Joss

Quality of Work Life
I implemented a balanced scorecard system along 15 best practices, with which the leads of the global team rated each other (the manager or lead in India rated the engineering manager or senior engineer in the U.S., and vice versa). This data provided the basis for two review/improvement discussions per year among key team members, vendor executives and me. Measurable metrics such as schedule adherence, quality and team productivity were tracked in weekly meetings. Pressured by Indian salary hikes and competitive market conditions that drove employee turnover above 25 percent, I also implemented a team-based bonus system in which 3 percent of the billing rate was paid directly to successful teams. Seventy-five percent of the teams earned the twice-yearly bonus, which both signaled the U.S. manager's satisfaction and rewarded the most experienced Indian team members with the highest share.
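The bonus arithmetic above can be sketched in a few lines. The 3 percent rate and the "highest share to the most experienced" rule come from the text; the billing figure reuses the $33,000 rate from the ROI sidebar, and the seniority weights themselves are hypothetical, since the article doesn't spell out the split:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the team bonus scheme: 3 percent of the annual billing is pooled
// per successful team and split by seniority weight. The weights below are
// hypothetical; the article says only that the most experienced members
// received the highest share.
public class TeamBonus {
    static final double BONUS_RATE = 0.03;      // 3% of billing, per the article
    static final double BILLING_RATE = 33_000;  // annual billing per engineer (ROI sidebar)

    static Map<String, Double> split(Map<String, Double> seniorityWeights, int teamSize) {
        double pool = BONUS_RATE * BILLING_RATE * teamSize;
        double totalWeight = seniorityWeights.values().stream()
                .mapToDouble(Double::doubleValue).sum();
        Map<String, Double> shares = new LinkedHashMap<>();
        seniorityWeights.forEach((name, w) -> shares.put(name, pool * w / totalWeight));
        return shares;
    }

    public static void main(String[] args) {
        Map<String, Double> weights = new LinkedHashMap<>();
        weights.put("senior lead", 2.0);  // hypothetical weighting
        weights.put("engineer A", 1.0);
        weights.put("engineer B", 1.0);
        split(weights, 3).forEach((name, amt) ->
                System.out.printf("%s: $%.2f%n", name, amt));
    }
}
```

For a three-person team this pools $2,970 a year, with the weighted lead taking twice each engineer's share.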

Initially, the quality of our Indian staff was impressive, but with increasing rate pressure and the obvious disparity between candidate screening (only the best) and assigned work (low-level QA and coding), the partner added more and more junior engineers. As the teams matured, the skill/assignment ratio started to even out, and the right combination of talent stabilized turnover and team satisfaction. In 2004, 24 teams participated in software deliveries and testing. Of these, 30 percent had more than two years' experience building our products, 62 percent had three or more years' experience in the software industry, and 25 percent had postgraduate degrees.

While visiting India, I appreciated the communal atmosphere in the partner offices, which grew from barely 100 engineers when our engagement started, to close to 2,000 today. Everyone met for the free lunch and afternoon snack. Art, hiking and theater groups organized events, and I found myself sometimes feeling more at home there than in the individualistic, profit-focused culture of Silicon Valley.

Overhead Headaches
One hurdle was that we had to custom-program workflow systems for initiation, staffing and tracking of projects. If you work with a Wipro or Tata, however, you'll pay 30 to 80 percent more per person, but can expect industrial-strength project management, metrics and reporting systems.

We faced the most unexpected workload from enterprise integration issues. The HR department, for example, didn't understand the difference between a short-term U.S. contractor and a long-term Indian partner. The finance department lacked important procurement mechanisms: even top-of-the-line systems such as Oracle applications didn't handle project-based procurement and budgeting beyond the core organization. The IT departments objected to our need for fine-grained security, firewalls and tracked individual accounts. These problems were eliminated only when the partner's office was fully connected like our U.S. office locations, including access to the intranet, and each Indian worker went through a regular, employee-like onboarding process.

Over the years, I've learned not to try to replace employees in the U.S. You're better off defining product areas or components that can be transferred with full ownership to the remote team. And, as with every relationship, it takes continual attention and adjustment to make it fruitful and lasting.

Michael Joss is a consultant in Fremont, Calif., to startup and high-growth companies, with a focus on international partner networks.

Taking Off with Open Source

JBoss's model is a natural for global development.

Like many open source projects, JBoss was a global collaboration effort from the start, with founding developers from France, the U.K. and the U.S. The project now has 500-plus active contributors from more than 20 countries, and JBoss Inc., the Atlanta, Ga.-based services company that sponsors and supports the JBoss open source project, employs more than 50 project leaders and core developers from all over North America, Europe and India. Nearly 200 of those contributors have committer rights to the JBoss CVS code repository, allowing them to review, approve and incorporate new code into the JBoss code repository. The open source JBoss Enterprise Middleware System (JEMS) has close to 6 million lines of code, with more than 2 million lines of code added in 2004. The result of such active global collaboration is a sophisticated and successful suite of enterprise software products. In fact, the JBoss Application Server is downloaded more than one million times a year and is the top-ranked Java app server in production.

JBoss's success is rooted in its structured approach to the open source development model. While that model's strength comes from an active user community, its ad hoc nature can be a liability. Developers, especially new participants, often don't communicate sufficiently with their peers; project leaders are often overburdened trying to keep the code base safe and consistent while incorporating community contributions; and there's frequent confusion over the project road map outside of the few core developers.

The JBoss project is saved from these dangers thanks to its corporate arm, JBoss Inc., which serves as the nerve center for communication and efficient resource allocation. For example, the company hires professional product managers to work out the road maps based on customer needs, and project managers to coordinate development tasks. The core developers, paid by the company, execute the plan and delegate as needed to other contributors in the community. JBoss Inc. also facilitates direct communication by organizing and paying for conferences to promote face-to-face meetings among key developers and users. In addition, the company builds business development, product support, marketing and PR teams to make the project more visible and better accepted beyond its developer community. This structured approach reduces the communication friction that traditionally plagues large open source projects.

Michael Juntao Yuan has a Ph.D. in astrophysics from the University of Texas at Austin. He's a consultant for JBoss Inc. and the author of three books on mobile computing technologies and software development from O'Reilly, Addison-Wesley and Prentice Hall.

Outsourcing QA

Despite turnover, we're ready for round two.

I'm the executive vice president for engineering and founder of Silicon Valley-based Nearsoft Inc. At a previous company, in 2000, we received initial funding to develop an enterprise-class software product to address a major challenge uncovered by the Sarbanes-Oxley requirements. It was a complex, multitier system, and it had to be developed quickly—without breaking the bank.

From the start, we knew we'd outsource QA. Why? First, we wanted a low ratio of development engineers to QA engineers (that is, two to one, or less). Mature products typically have a ratio of three to one or higher, but our product was complex, written from scratch with POJO (Plain Old Java Objects) by a newly assembled team—early releases would likely have a high bug count. Second, we needed licenses for large ERP databases (for example, Oracle Apps, PeopleSoft) that we didn't have. Third, we didn't want to spend money on extra capital equipment if we could help it, and we had very limited office space and wanted to keep the rent to a minimum. Finally, we'd had good experience with offshore outsourcing before.

After some research, we outsourced QA to a group in Chandigarh, Punjab, in Northern India. Going offshore met all our requirements—at a price we could afford.

The Team
The QA team consisted of six people: a U.S.-based QA lead and, in India, a project leader, three manual testers/automation engineers and one unit tester (who created only negative unit tests, as the developers wrote positive unit tests for their own code). The U.S. lead acted as a bridge between the developers and the Indian QA team; he also ran weekly bug reviews, generated weekly bug reports, selected software development lifecycle tools, and made the weekly builds. We measured this team's success by the number of regression tests generated, the automation ratio and, of course, cost.
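The positive/negative split described above is worth making concrete: developers proved their code works on valid input, while the dedicated unit tester proved it fails safely on invalid input. The class under test and its contract below are invented for illustration; the team used JUnit, but this sketch uses plain assertions to stay self-contained:

```java
// Hypothetical class under test: a parser that must reject bad input.
class QuantityParser {
    static int parse(String s) {
        int n = Integer.parseInt(s.trim());
        if (n < 0) {
            throw new IllegalArgumentException("negative quantity: " + n);
        }
        return n;
    }
}

public class QuantityParserTests {
    // Positive test (the developer's job): valid input yields the right value.
    static boolean positiveTest() {
        return QuantityParser.parse(" 42 ") == 42;
    }

    // Negative test (the dedicated unit tester's job): invalid input must
    // fail loudly rather than slip through as a bogus value.
    static boolean negativeTest() {
        try {
            QuantityParser.parse("-1");
            return false; // should have thrown
        } catch (IllegalArgumentException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("positive: " + positiveTest());
        System.out.println("negative: " + negativeTest());
    }
}
```

Splitting the roles this way means the negative suite is written by someone with no stake in the code passing, which is exactly the adversarial stance regression testing needs.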

The Tools
In addition to the usual cast, we used several tools that were new to us: Microsoft Project for planning and tracking, Eclipse/Ant for the IDE and build system, Bugzilla for bug tracking, and Clover to measure code coverage. For white-box, GUI-based testing, we used qftestJUI, an inexpensive tool written in Java that has equal support for Microsoft Windows and Unix/Linux platforms; for unit testing, we used JUnit.

We also used the open source TestLink and found it to be a useful test management and execution tool, particularly for remote, off-sync teams (one team works while the other sleeps). TestLink let us create and maintain test specifications online from day one. And, in spite of a clunky user interface, it was easy to see which tests were automated and which were manual, as well as what tests were run for each build and their results. Finally, we used IM and VoIP to stay in touch 24/7.

The good news? In the end, we created more than 600 regression tests and automated over 70 percent of them, while creating processes to balance manual testing and test automation. The team was able to keep up with a weekly build regime, and we got good reports from them. They also were effective in catching regression bugs and in pointing out not only bugs, but product-level problems.

Turnover Trouble
The bad news? India's outsourcing industry is booming, and the side effect is high turnover and a shortage of experienced engineers. Three months in, our project lead quit. His loss was a terrible blow, and it took us several months to find a suitable replacement. Then we lost our two most experienced white-box automation engineers on the same day. They simply found jobs closer to their home state during a holiday visit. In less than 10 months, we experienced 60 percent turnover. Finally, the replacement lead had to be let go only a few months after he joined.

Room for Improvement
Next time, we'll look for a locale that doesn't suffer from high turnover. If we can find the right team, we'll outsource to a near-shore or onshore locale; in many cases, the total cost of engagement is the same as or less than offshore. We'll also make sure that at least one team member, preferably the lead, has experience working in the U.S. or Western Europe.

Outsourcing is here to stay: The key is to learn from past mistakes and not repeat them.

Matt M. Pérez is the executive vice president of engineering and the founder of Nearsoft Inc. in San Jose, Calif. A math and computer science graduate from the University of Illinois, Chicago, Pérez also worked at Sun Microsystems for nine years.

