The Numerical Algorithms Group (NAG) has announced the availability of its latest mathematical and statistical library, the NAG Library for Intel Xeon Phi.
The computational software and high-performance computing services firm says the library contains over 1,700 numerical routines, many of which have been parallelized and tuned for the Xeon Phi coprocessor.
When it is beneficial to do so, NAG routines automatically offload compute-intensive operations to the Xeon Phi, which the firm says lets users "transparently exploit" the coprocessor's performance.
NOTE: For more advanced users, the new NAG Library for Intel Xeon Phi also supports Intel's Explicit Offload and Native Execution models.
To complement the new library, NAG also offers parallel software engineering and performance optimization services, advising and assisting developers in migrating application codes so that they effectively exploit the potential of the Xeon Phi coprocessor.
"The Numerical Algorithms Group has long provided value to the Intel Xeon processor with high-quality optimized numerical libraries," said Joe Curley, director of marketing for the technical computing group at Intel Corporation. "In an extension of this partnership, NAG collaborated to provide input that helped in the definition and development of Intel Xeon Phi coprocessor technology. Tailoring the NAG Library to support Intel Xeon Phi coprocessors should benefit our mutual customers' highly parallel development efforts."
Performance-sensitive routines in the NAG Library are updated with each release to run efficiently on parallel computing systems, using standards such as OpenMP and MPI.