Obama's presidential reelection campaign broke new ground in 2012 by developing almost all of its software in-house. Previously, presidential campaigns had relied primarily on outside vendors for their software and operations. However, the president's team felt it needed faster access to the data it was generating and better control over how the campaign was run. It also wanted to write completely custom software for its volunteers in the field. So, about 18 months before the election, it started assembling a group of programmers to develop software for nearly every part of its operations: website, mailing list management, donations processing, canvassing, and field worker support. I recently had the opportunity to attend a panel of some of the campaign's lead developers and speak with them one on one.
Campaigns are unusual efforts. A team comes together once every four years for a short, intensive effort that culminates in the period between the party convention and election day (the Tuesday after the first Monday in November). Because of this intensely ad hoc orientation, there is no long-term planning and no expectation that code will be maintained for years to come. The principal criteria for code are that it works, it scales, and it is secure. And most crucial of all, it must be working before election day. That inflexible deadline causes unusual distortions in the coding, delivery, and deployment of applications.
The campaign developers were allowed to choose whatever language and platform they felt met their needs and scaled well. Because all the campaign software was hosted on Amazon Web Services, the code also had to play nice in the cloud. (At the height of activity, the campaign was running 2,000 AWS instances, so scalability was a fundamental requirement.) The primary platforms were Python, PHP, and to a lesser extent, Java. Because money was invariably tight for anything but media buys, the campaign chose to rely primarily on open-source tools, with a few exceptions, such as New Relic's software, which monitored the applications in the cloud.
As Ryan Kolak, who built much of the common infrastructure, explained, software design works differently in a campaign. When you collect requirements, you find that not only do your contacts have clear ideas about what they want, but everyone also orders you to do it their way. To sort through the tangled mass of conflicting requirements and desiderata, the team did the Agile thing: It put Post-it notes on a wall and called the users back to identify features that fit two criteria. Did the feature contribute directly to getting the president reelected? And would it be needed within 12 weeks (the estimated time for delivery)? The team performed one more level of triage internally: Could the feature be delivered before election day? Actually, the goal was to have everything done by early October so that testing and scalability verification could be performed. As Kolak stated, "Having an inviolable deadline provides a lot of clarity about what you can and cannot do. We were ruthless in cutting projects that could not be delivered in time."
Due to the time pressure, design decisions were generally made quickly by small teams and almost never revisited. Some of the panelists said that once coding had begun, there was very little desire to improve designs in ways that required rewriting existing code. It was faster to do what needed to be done and live with an inferior design if it got the working app out the door in time.
Chris Gansen, who designed a dashboard for managing the incoming data, said that perhaps the greatest effect of the deadline was on the coding process. All the apps were for short-term use and would be maintained by the teams that wrote them. Consequently, many of the investments developers normally make in code readability, documentation, and so on were given far less importance. Said Gansen, "We realized early on that we had to understand that 'good enough' was indeed good enough. Often, 90% of the way there was the same as 100% and we just didn't have the bandwidth to put a lot of time into the final 10% or polishing every last detail."
One area that the teams did put a lot of effort into was testing. As Scott VanDenPlas, who headed up DevOps on the team, explained: "The software had to work and had to scale. We didn't have a lot of time to go back and rewrite things that didn't work. So our team was chosen primarily on members' ability to deliver working code." And added Gansen: "We used TDD where we could and insisted that code be delivered with working tests. We didn't have a lot of inclination to spend time debugging."
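The test-first discipline Gansen describes can be sketched with a minimal, hypothetical Python example (Python being one of the campaign's primary platforms). The validator, its name, and the contribution cap below are invented for illustration; nothing here is taken from the campaign's actual code:

```python
import unittest

def validate_donation(amount_cents):
    """Hypothetical validator: a donation must be a positive whole
    number of cents and must not exceed an illustrative cap."""
    MAX_CENTS = 2500 * 100  # assumed cap for this sketch only
    if not isinstance(amount_cents, int):
        return False
    return 0 < amount_cents <= MAX_CENTS

class TestValidateDonation(unittest.TestCase):
    # In TDD, tests like these are written first and fail until
    # the implementation above makes them pass.
    def test_accepts_small_donation(self):
        self.assertTrue(validate_donation(500))   # $5.00

    def test_rejects_zero_and_negative(self):
        self.assertFalse(validate_donation(0))
        self.assertFalse(validate_donation(-100))

    def test_rejects_over_cap(self):
        self.assertFalse(validate_donation(2500 * 100 + 1))

if __name__ == "__main__":
    unittest.main()
```

Delivering each feature with tests of this sort is what let the team treat a passing suite, rather than debugging time, as the definition of done.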
The development rhythm was purely iterative, with releases delivered every week. The biggest problem the team faced was that campaign staff was unused to iterative development. Staffers had difficulty accepting partial functionality and took a while to understand that they were being shown early releases in order to give feedback. This was somewhat more problematic when the team consulted users in the field, as senior campaign staff were more accustomed to dictating features to vendors than to collecting user feedback. The difference turned out to be crucial, as many features were changed in ways that became critical to their smooth operation on election day.
The campaign had one other constraint not normally faced by dev teams. Nick Leeper, who headed the donation project, explained that the regulatory environment for cash handling is completely unforgiving. "If you screw up, the errors are potentially felonies. So despite the need for speed, there were some design and implementation details that have to be done just so. We spent a lot of time consulting the regulatory folks."
Most development was frozen in early October, and the team then went into testing mode. Testing had two principal targets: robustness and scalability. The need for assured scalability was imperative: VanDenPlas pointed out that the campaign had 25 million followers on Twitter and 35 million on Facebook, and its email list ran to several million people. As a result, a single email could spike an application from a few hundred active users to more than half a million.
Testing for robustness included running the donation Web app with the database disabled. (This test proved valuable: the team later had to switch databases while the production app was running, and because the test had shown they could do so without disruption, they were able to resolve the problem in real time.) One by one, they tested components under stress loads. The team also took down entire cloud zones to make sure that AWS's now well-known outages would not bring the apps to a halt. This testing was at times confounded by actual AWS outages, which served as good test cases but threw off defect analysis by creating unexpected results.
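Running an app with its database disabled implies some form of graceful degradation in the write path. A minimal Python sketch of one way to do this, buffering failed writes for later replay, follows; the class names and the queue-on-failure strategy are assumptions for illustration, not the campaign's actual design:

```python
import queue

class DonationStore:
    """Sketch of a write path that degrades gracefully when the
    primary database is unreachable: failed writes are buffered
    in memory and replayed once the database comes back."""

    def __init__(self, db):
        self.db = db                  # any object exposing save(record)
        self.pending = queue.Queue()  # buffer for writes during an outage

    def save(self, record):
        try:
            self.db.save(record)
            return "stored"
        except ConnectionError:
            self.pending.put(record)  # don't lose the donation
            return "queued"

    def replay(self):
        """Flush buffered writes after the database recovers."""
        flushed = 0
        while not self.pending.empty():
            self.db.save(self.pending.get())
            flushed += 1
        return flushed

# A robustness test in the spirit described above: "disable" the
# database and verify the app keeps accepting writes.
class FlakyDB:
    def __init__(self):
        self.up = True
        self.rows = []

    def save(self, record):
        if not self.up:
            raise ConnectionError("db down")
        self.rows.append(record)
```

With `FlakyDB.up` set to `False`, `save()` returns `"queued"` instead of raising; flipping it back and calling `replay()` drains the buffer, mirroring the kind of live database switch the team performed in production.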
Due to the election day deadline, the team spent the rest of their time fixing problems revealed during the October testing. No new features were added, unless mandated by regulation or indispensable to the operation. And in some cases, when an app didn't work as hoped, team members reassessed its criticality and simply dumped the app if it was no longer truly essential.
Election day proceeded mostly as expected. Volunteers and workers in the field used location-aware software to report on voting trends, help voters get to the polls, and canvass voters to monitor how things were proceeding. Even while running at full load, the campaign was able to see in real time how events were unfolding and where help was needed, and to respond to problems as they arose. This was a first for a presidential campaign. In previous campaigns, workers had to call in data or file text reports, which then had to be passed up the line to headquarters.
After the Election
Presidential elections happen only every four years, a long time in technology. The team was unequivocal in its conviction that by 2016, all the code would have to be written from scratch again, due to changes in technology and in how people communicate. They saw no possibility of reusing their code. In fact, they felt that part of their opponents' software difficulties stemmed from reworking code from 2008 rather than writing the apps from scratch. To facilitate the inevitable rewrite for the 2016 campaign, the team performed a lengthy post-mortem. According to Gansen, it spent a long time documenting what it had done, the key design decisions, what had worked, what had failed, and what it would have done differently.
In almost every aspect, the campaign had to embrace a disciplined approach of "quick and dirty." If things couldn't be done quickly, they generally weren't undertaken at all. Still, within the "just get it done" mentality, great care was applied to testing, security, and of course, regulatory compliance. The success of this approach might indeed be a useful model for commercial development and a valuable addition to our common notions of methodology.