Dallas Thornton is the Division Director, Cyberinfrastructure Services for the San Diego Supercomputer Center at the University of California, San Diego. His division is responsible for delivering high-performance, stable, at-scale cyberinfrastructure services that support UCSD faculty, researchers, staff, students, and departments; UC system-wide collaborators; industry partners; and multiple federal agencies.
Along with managing the CI Services organization, Thornton oversees SDSC's datacenter sustainability efforts, including energy-efficiency initiatives to enhance current datacenter operations, the commissioning of a new LEED Silver-equivalent datacenter in 2008, and participation in industry/academic sustainability partnerships aimed at greener datacenters for the future.
SDSC supports research-oriented computational and data environments designed to help scientists apply technology to advance science. The center maintains a diverse array of data-oriented computing, storage, and web resources dedicated to specific programs and development efforts. These include projects in various scientific disciplines, data preservation, digital libraries, and software development for grids and clusters.
Prior to joining UCSD, Thornton developed and managed information technology projects for the University of Southern California and Vanderbilt University, and served businesses as a private consultant. He received a B.Eng. in Computer Engineering, an M.Eng. in Management of Technology, and an M.B.A. in Strategy from Vanderbilt University.
Q: SDSC will soon be moving into a new building addition, with a new machine room. Can you describe how the new design will encourage energy efficiency? In the building? In the datacenter?
A: The new building is a wonderful example of how functionality and design can coexist, meeting the demanding needs of today's researchers in an energy-efficient manner that takes advantage of the local La Jolla microclimate. The building as a whole utilizes a natural hybrid ventilation system that leverages filtered outside air to provide appropriate conditioning to the space. The building, through the natural rise of warm air, "breathes." The design uses unaltered outside and inside air to provide adequate temperature and humidity to the building over 95 percent of the days, supplementing with cooling or heating coils when needed on the remaining 18 days of the year.
SDSC's datacenter expansion includes a first-of-its-kind deployment in the United States. The room will be outfitted with a Liebert/Knurr rack system called CoolFlex that encloses traditional cold aisles, completely separating the cold air supplying equipment from the hot air exhausted from the gear. This separation allows for tremendously more efficient cooling, saving a large amount of fan energy and opening the door to the water-side cooling economization planned for the future. Able to host traditional, unaltered servers, the rack design alone will more than double the cooling efficiency of the room.
Q: What impact does IT have on the environment, and how can it be improved?
A: IT energy efficiency is a critical driver to national energy issues as well as corporate cost containment. Most people do not realize that datacenter loads in the U.S. consume over 1.5 percent of our national power supply, and loads are growing by 12 percent every year. A single rack-mount server costs around $600 per year to power and cool, and that's if it is done efficiently! Companies are wise to be mindful of the energy impacts of IT and the methods to more efficiently provide IT services.
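The $600-per-year figure can be reproduced with a back-of-envelope calculation. The sketch below is illustrative only; the server wattage, facility overhead (PUE), and electricity price are assumptions chosen to show how such a number arises, not figures from the interview.

```python
# Back-of-envelope estimate of the annual cost to power and cool one
# rack-mount server. All inputs are illustrative assumptions.

def annual_power_cost(server_watts, pue, dollars_per_kwh):
    """Yearly electricity cost: the IT load, multiplied by the facility's
    cooling/distribution overhead (PUE), converted to kWh and priced."""
    hours_per_year = 24 * 365
    kwh = server_watts / 1000 * pue * hours_per_year
    return kwh * dollars_per_kwh

# Assumed: a 400 W server in an efficient facility (PUE 1.7),
# with power at $0.10/kWh.
cost = annual_power_cost(400, 1.7, 0.10)
print(f"${cost:,.0f} per year")  # roughly $600
```

Varying the assumed PUE shows why efficiency matters: the same server in a facility with a PUE of 2.5 would cost nearly half again as much to operate.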
Federal and state governments, as well as many power companies, are addressing the issue, providing incentives for more efficient IT practices and deployments. Identifying these opportunities, and taking advantage of them, is not always straightforward and requires a different perspective than that of the traditional facilities, finance, or IT person. A key step companies can take to bridge this gap is to more tightly integrate datacenter facilities staff with their IT operations. This melding enables more efficient energy and facilities planning activities.
Q: What's on the horizon for making IT more energy efficient over the next few years?
A: For a company deploying IT infrastructure, acquisition cost has become secondary to the ongoing operational costs of systems. Existing vendors are working to change their technology offerings to better align with and address this paradigm shift, and many new vendors have entered the arena with products that allow better utilization of resources.
Most IT services are over-provisioned and undersubscribed. Virtualization of services, servers, and storage provides opportunities to share unused resources and reduce the IT equipment and power needed. Many of the software and hardware technologies that enable virtualization also provide features that ease the lives of IT administrators, simplifying arduous tasks and providing redundancy options not possible before. Virtualization is here to stay and will only grow in prevalence and reach.
Cloud computing promises to deliver the Web 2.0 software-as-a-service concept paired with computational and storage resources that could be geographically located anywhere in the world. The goal is to hide the technology complexities from users and local IT administrators and provide computational capabilities as a service. Not only would the service and hardware stacks be virtualized away from end users and local IT, but entire systems and storage deployments could be virtualized to reside at various datacenters around the world, providing redundancy and scalability simply not possible at a single site.
At the datacenter level, the rise of cross-site virtualization and more ubiquitous data access will lead to enhanced service-level redundancy and availability. This will lessen the need to deploy power-consuming and costly local uninterruptible power supply (UPS) systems, generators, and redundant power distribution throughout every datacenter, saving millions of dollars in facilities costs and 10-30 percent in ongoing energy consumption at each site.
Servers are becoming more efficient, with most vendors shipping high-efficiency power supplies and establishing power consumption as a key driver in their hardware and software designs. The power supplies in most commodity servers accept input voltages from 100 to 240V, allowing them to work across all the standard AC power systems of the world. Some vendors already offer HPC products able to utilize higher-voltage power inputs such as 277, 400, or 480V. Distributing this higher-voltage power to the servers allows for even more efficient power delivery, saving another 2-5 percent in ongoing power consumption. It also reduces facilities build-out costs by eliminating the transformers that would traditionally step the voltage down from 480V to 208V in the U.S.
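The 2-5 percent savings from eliminating a step-down stage can be sketched numerically. The per-stage loss figures below are assumed, typical values chosen for illustration, not measurements from SDSC's facility.

```python
# Illustrative comparison of power-chain efficiency with and without the
# 480V-to-208V step-down transformer. Stage efficiencies are assumptions.

def chain_efficiency(stage_efficiencies):
    """Overall efficiency of a power chain is the product of its stages."""
    eff = 1.0
    for e in stage_efficiencies:
        eff *= e
    return eff

# Assumed stages: UPS 94%, step-down PDU transformer 97%, server PSU 90%.
conventional_208v = chain_efficiency([0.94, 0.97, 0.90])
# Feeding 480V (or 400/277V) directly to the rack removes the
# transformer stage from the chain.
high_voltage = chain_efficiency([0.94, 0.90])

savings = (high_voltage - conventional_208v) / conventional_208v
print(f"{savings:.1%} less power drawn for the same IT load")  # about 3%
```

Under these assumptions the higher-voltage chain draws about 3 percent less utility power for the same IT load, squarely within the 2-5 percent range cited above.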
At the embedded systems level, UCSD researchers Dr. Tajana Rosing and Dr. Allan Snavely are currently modeling the effects of throttling power within computational systems on the performance of various types of applications. This research could allow administrators of large systems, hardware manufacturers, operating system vendors, and other software vendors to optimize their platforms, including performance per watt as a key metric of success.
From top to bottom, the industry is becoming more and more focused on energy efficiency, and it is in everyone's best interest to do so.
Q: How do you stay current with "green" technologies as they apply to IT?
A: As with other leading-edge technology, access to information and interested partners is key. We work closely with our vendors, becoming close partners on various initiatives; we help them beta-test new software and hardware platforms and provide feedback and references as appropriate. SDSC's staff is our top asset, so it's important that they stay up to date on the latest developments in their domain-specific peer groups. For "green" technologies, this includes consortia and forums such as The Green Grid, the Data Center User Group, Storage Networking World, and Supercomputing.
Q: As director of SDSC's CI services activities, what challenges do you find most daunting in your day-to-day job?
A: As technology becomes more and more prevalent in our daily lives, one of the largest challenges IT leaders face is that the underlying infrastructure is taken for granted. Cyberinfrastructure should be a ubiquitous conglomeration of technologies and tools that look seamless to the user. Essentially CI should become as accessible and reliable as the power that comes from your local utility company. CI is more complicated than the power grid, but it's quickly becoming just as essential to us in our daily lives. It requires continued investment and nurturing to maintain operational effectiveness and to keep up with user needs.
Software integration is another challenge. Various vendors provide great solutions for specific tasks. Taking those solutions and integrating them across systems and vendors is a challenge for IT administrators and developers. A great example of this in the green IT space is datacenter environmental monitoring. Servers throughout the datacenter have temperature and humidity sensors built into their chassis. Unfortunately, each vendor has a different (or no) way of providing access to that information from outside the machine. For us, writing software to collect this information from the machines in an ad-hoc way was much more expensive and uncertain than simply deploying dedicated sensors throughout the room. So, due to the lack of vendor standards adoption in this space, we installed and now maintain gear that serves a useful purpose but replicates hardware and software that we already paid for. Lack of standards adoption by vendors creates a huge downstream challenge for IT integrators.
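The integration burden described above can be made concrete with a small sketch: a collector that needs a per-vendor adapter just to normalize one temperature reading. The vendor output formats and names below are hypothetical stand-ins for illustration, not real tool output.

```python
# Sketch of the environmental-monitoring integration problem: each
# vendor exposes chassis temperature differently, so every vendor in
# the room requires its own parsing adapter. Formats are hypothetical.

def parse_vendor_a(raw):
    # Hypothetical format: "inlet_temp=23.5C"
    return float(raw.split("=")[1].rstrip("C"))

def parse_vendor_b(raw):
    # Hypothetical format: "Ambient Temp | 74.3 | degrees F"
    fahrenheit = float(raw.split("|")[1])
    return (fahrenheit - 32) * 5 / 9

ADAPTERS = {"vendor_a": parse_vendor_a, "vendor_b": parse_vendor_b}

def collect(readings):
    """Normalize per-vendor raw readings to (host, degrees C) pairs."""
    return [(host, round(ADAPTERS[vendor](raw), 1))
            for host, vendor, raw in readings]

sample = [("node01", "vendor_a", "inlet_temp=23.5C"),
          ("node02", "vendor_b", "Ambient Temp | 74.3 | degrees F")]
print(collect(sample))  # [('node01', 23.5), ('node02', 23.5)]
```

Every new vendor means another adapter to write and maintain, which is exactly why deploying uniform dedicated sensors can end up cheaper than harvesting data the servers already measure.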
Q: What aspects of your job give you the most satisfaction?
A: The cyberinfrastructure we deploy and maintain is vitally important, but it is a means to an end. I get the most satisfaction from seeing researchers and collaborators utilize the infrastructure at SDSC to help find solutions to society's greater challenges. For example, SDSC's cyberinfrastructure and expertise enable earthquake researchers to simulate and analyze quakes ahead of time and within minutes of an actual event; it stores and maintains vital national archival material that must live on "for the life of the republic"; it gives geoscientists the tools to quantitatively measure and understand the makeup of the earth's crust; it provides medical researchers the ability to more quickly retrieve molecular structure information and simulate treatments, speeding the time to market for viable drugs; and it offers opportunities for integration in the cancer treatment process that could increase the effectiveness of radiation therapy. All of these activities happen with the help of the CI and expertise at SDSC, and that kind of deep and broad impact is what the center is here to provide.