The Numerical Algorithms Group (NAG) has announced the availability of its latest mathematical and statistical library, the NAG Library for Intel Xeon Phi.
The computational software and high-performance computing services firm says that this library contains over 1,700 numerical routines, a number of which have been parallelized and tuned to take advantage of the performance of the Xeon Phi.
When it is beneficial to do so, NAG routines automatically offload compute-intensive operations to the Xeon Phi, which is said to let users "transparently exploit" the coprocessor's performance.
NOTE: For more advanced users, the new NAG Library for Intel Xeon Phi also supports Intel's Explicit Offload and Native Execution models.
To complement the new library, NAG also offers parallel software engineering and performance-optimization services that help developers migrate their application codes to exploit the potential of the Xeon Phi coprocessor effectively.
"The Numerical Algorithms Group has long provided value to the Intel Xeon processor with high-quality optimized numerical libraries," said Joe Curley, director of marketing for the Technical Computing Group at Intel Corporation. "In an extension of this partnership, NAG collaborated to provide input that helped in the definition and development of Intel Xeon Phi coprocessor technology. Tailoring the NAG Library to support Intel Xeon Phi coprocessors should benefit our mutual customers' highly parallel development efforts."
The NAG Library's performance-sensitive routines are updated with each release to run efficiently on parallel computing systems, using standards such as OpenMP and MPI.