GPU computing company Nvidia has announced that the LLVM open source compiler infrastructure now supports its graphics processing units. The company worked with LLVM developers to contribute its CUDA compiler source code changes to the LLVM core and to the parallel thread execution (PTX) backend.
The proposition is an expansion of both the number of independent software vendors (ISVs) and the range of programming languages able to take advantage of GPU acceleration.
The CUDA (Compute Unified Device Architecture) compiler provides C, C++, and Fortran support for accelerating applications using massively parallel GPUs.
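The programming model behind that compiler centres on kernels: C/C++ functions launched across thousands of parallel GPU threads. A minimal sketch using the standard CUDA runtime API (an illustrative example, not code from Nvidia's LLVM contribution):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each GPU thread adds one pair of array elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    // Host-side input data.
    float ha[n], hb[n], hc[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device memory and copy inputs to the GPU.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);

    // Copy the result back and print one element (expected: 3.0).
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %.1f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}
```

Compiled with `nvcc`, the host code runs on the CPU while the `__global__` kernel is lowered (via the LLVM-based toolchain) to PTX for execution on the GPU.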
LLVM supports a range of programming languages and front ends, including C/C++, Objective-C, Fortran, Ada, Haskell, Java bytecode, Python, Ruby, ActionScript, GLSL, and Rust. It is also the compiler infrastructure Nvidia uses for its CUDA C/C++ architecture, and one the firm says has been "widely adopted" by companies such as Apple, AMD, and Adobe.
NOTE: The LLVM (Low Level Virtual Machine) project is a collection of modular and reusable compiler and toolchain technologies. The LLVM Core libraries provide a modern source- and target-independent optimizer, along with code generation support for many popular CPUs.
A game-changing milestone?
"The code we provided to LLVM is based on proven, mainstream CUDA products, giving programmers the assurance of reliability and full compatibility with the hundreds of millions of Nvidia GPUs installed in PCs and servers today," said Ian Buck, general manager of GPU computing software at Nvidia. "This is truly a game-changing milestone for GPU computing, giving researchers and programmers an incredible amount of flexibility and choice in programming languages and hardware architectures for their next-generation applications."