Accelerate HPC Workloads Across Multiple Architectures
Many businesses are supercharging big data and analytics use cases with HPC systems that distribute computing across many nodes, running workloads in parallel to accelerate results.
Porting software to HPC clusters and programming efficient parallel code can be complex and time-consuming for developers. The right software tools, however, can shorten this process considerably.
At the same time, developers increasingly need to accelerate specialized workloads across a variety of architectures: CPUs alongside accelerators such as GPUs and FPGAs. Here, too, they face time-consuming and costly hurdles in making their software work with as many hardware types and computing models as possible.
To help solve these challenges, Intel offers several HPC tools and resources that help developers build high-performance, cross-architecture applications optimized for parallel computing. They're all built on the foundation of oneAPI, an open, standards-based, cross-architecture programming model.
Intel® oneAPI HPC Tools for Developers
Developers can more easily build, analyze, optimize, and scale HPC applications across multiple architectures using the Intel® oneAPI Base Toolkit and Intel® oneAPI HPC Toolkit. These resources include state-of-the-art techniques in vectorization, multithreading, multinode parallelization, and memory optimization, so you can build HPC-ready software with less effort.
- Simplify implementation of HPC software on CPUs and accelerators with Intel's industry-leading compiler technology and libraries.
- Quickly gauge how your application is performing, how resource use impacts your code, and where it can be optimized for faster cross-architecture performance.
- Deploy applications and solutions across shared-memory and distributed-memory (cluster) computing systems using the included standards-driven MPI library and benchmarks, MPI analyzer, cluster tuning tools, and cluster health-checking tools.
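The toolkits apply techniques such as vectorization at the compiler and library level. As a rough illustration of why vectorization matters, here is a minimal Python/NumPy sketch (not an Intel-specific API) comparing an element-wise loop with its vectorized equivalent, which runs in optimized, SIMD-friendly native code:

```python
import numpy as np

def saxpy_loop(a, x, y):
    # Scalar loop: one multiply-add per iteration, with interpreter
    # overhead on every element.
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

def saxpy_vectorized(a, x, y):
    # Vectorized form: the whole operation is dispatched once to
    # optimized native code that can use SIMD instructions.
    return a * x + y

x = np.arange(1_000, dtype=np.float32)
y = np.ones_like(x)
result = saxpy_vectorized(2.0, x, y)
```

Both functions compute the same result; the vectorized form is typically orders of magnitude faster on large arrays, which is the same principle HPC compilers exploit automatically.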
Intel® oneAPI HPC Toolkit Components
- Intel® oneAPI DPC++/C++ Compiler: Use this standards-based C++ compiler with support for OpenMP to take advantage of more cores and built-in technologies in Intel® CPU, GPU, and FPGA platforms (Intel® Xeon®, Intel® Core™ processors with Intel® Processor Graphics, Intel® Xe architecture GPUs).
- Intel® C++ Compiler Classic: Use this standards-based C++ compiler with support for OpenMP to take advantage of more cores and built-in technologies in platforms based on Intel® Xeon® Scalable processors and Intel® Core™ processors.
- Intel® Cluster Checker: Verify that cluster components work together seamlessly for optimal performance, improved uptime, and lower total cost of ownership.
- Intel® Fortran Compiler: Use this standards-based Fortran Compiler with OpenMP support for CPU and GPU offload.
- Intel® Fortran Compiler Classic: This standards-based Fortran compiler includes support for OpenMP that provides continuity with existing CPU-focused workflows.
- Intel® Inspector: Locate and debug threading, memory, and persistent memory errors early in the design cycle to avoid costly errors later.
- Intel® MPI Library: Deliver flexible, efficient, scalable cluster messaging on Intel® architecture.
- Intel® Trace Analyzer and Collector: Understand MPI application behavior across its full runtime.
(Note: The HPC Toolkit is an add-on to the Intel® oneAPI Base Toolkit, which is required for full functionality.)
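A typical first step in MPI programming with a library such as the Intel® MPI Library is splitting a problem domain across ranks. The block-partition arithmetic below is a minimal, library-agnostic sketch; the commented usage shows how it would plug into `mpi4py`, which runs atop any standards-compliant MPI implementation, including Intel MPI:

```python
def block_range(n_items, n_ranks, rank):
    """Return the half-open [start, stop) slice of n_items owned by `rank`,
    handing any remainder out one item at a time to the lowest ranks."""
    base, extra = divmod(n_items, n_ranks)
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    return start, stop

# With mpi4py, each rank would compute its own slice like this:
#   from mpi4py import MPI
#   comm = MPI.COMM_WORLD
#   start, stop = block_range(n_items, comm.Get_size(), comm.Get_rank())
#   ... process items[start:stop] locally, then reduce/gather results ...
```

The slices are contiguous, non-overlapping, and cover every item, so each rank can work independently before combining results with a collective operation.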
HPC with AI and Big Data Frameworks
AI and analytics workloads are a primary use case for HPC systems, as these applications require massive amounts of compute. While AI and big data applications have typically run on single-node systems, organizations are increasingly moving to HPC technology to accelerate workflows and improve results.
To help accelerate AI and analytics, Intel offers the Intel® oneAPI AI Analytics Toolkit. This comprehensive package provides data scientists, AI developers, and researchers with familiar Python tools and AI frameworks to accelerate end-to-end data science and analytics pipelines on Intel® architectures.
Like the HPC Toolkit, the AI Analytics Toolkit components are built using oneAPI libraries for low-level compute optimizations. This toolkit maximizes performance end to end—from preprocessing through machine learning—and provides interoperability for efficient model development.
Using the AI Analytics Toolkit, you can:
- Deliver high-performance, deep-learning training on Intel® CPUs and GPUs and integrate fast inference into your AI development workflow with Intel-optimized frameworks for TensorFlow and PyTorch, pretrained models, and low-precision tools.
- Achieve drop-in acceleration for data preprocessing and machine learning workflows with compute-intensive Python packages such as Modin, scikit-learn, and XGBoost, optimized for Intel architectures.
- Gain direct access to analytics and AI optimizations from Intel to ensure that your software works together seamlessly.
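For example, the drop-in scikit-learn acceleration works by patching scikit-learn before estimators are imported. The sketch below uses `patch_sklearn` from the `sklearnex` package (shipped with the Intel® Extension for Scikit-learn) and guards the import so the code degrades gracefully to stock scikit-learn when the extension is not installed:

```python
# Drop-in acceleration pattern: patch scikit-learn BEFORE importing estimators.
# `sklearnex` is provided by the Intel Extension for Scikit-learn; if it is
# absent, the code below falls back to stock scikit-learn unchanged.
try:
    from sklearnex import patch_sklearn
    patch_sklearn()          # swaps in Intel-optimized estimator implementations
    ACCELERATED = True
except ImportError:
    ACCELERATED = False      # stock scikit-learn still works, just unaccelerated

# Subsequent imports (e.g. `from sklearn.cluster import KMeans`) now resolve to
# the optimized implementations whenever ACCELERATED is True, with no other
# changes to the modeling code.
```

Because the patch preserves the scikit-learn API, existing training and inference scripts need no modification beyond these two lines.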
Open Source Software
oneAPI is based on open industry standards. By building your HPC applications on it, you can avoid lock-in to proprietary programming models and maximize business opportunities. It's an open approach to HPC software and HPC optimization.
With the Intel® oneAPI Toolkits built on the oneAPI foundation, your solutions remain interoperable with HPC standards, including C/C++, Fortran, Python, OpenMP, and MPI, for easy integration with legacy code, and flexible enough to deploy across a multitude of architectures and compute models.
Additionally, Intel is a member of the OpenHPC community. OpenHPC, an open source HPC software platform for Intel® architecture-based systems, simplifies the installation and management of HPC systems by reducing the integration and validation effort needed to run the HPC software stack.