What Is High Performance Computing (HPC)?

The proliferation of data, as well as data-intensive and AI-enabled applications and use cases, is driving demand for the computing power of HPC.

What Is HPC?

  • High performance computing (HPC) is a class of applications and workloads that solve computationally intensive tasks.

  • Demand is growing for HPC to drive data-rich and AI-enabled use cases in academic and industrial settings.

  • HPC clusters are built on high-performance processors with high-speed memory and storage, and other advanced components.

Why Is HPC Important?

High performance computing (HPC) is not new. HPC workstations and supercomputers have played an integral role in academic research for decades, solving complex problems and spurring discoveries and innovations.

In recent years, data volumes have grown rapidly, and many new applications benefit from the power of HPC (that is, the ability to perform computationally intensive operations across shared resources) to achieve results in less time and at a lower cost compared to traditional computing. At the same time, HPC hardware and software have become more attainable and widespread.

Scientists, engineers, and researchers rely on HPC for a range of use cases, including weather forecasting, oil and gas exploration, physics, quantum mechanics, and other areas in academic research and commercial applications.

How Does HPC Work?

While HPC can be run on a single node, its real power comes from connecting multiple HPC nodes into a cluster or supercomputer with parallel data processing capabilities. HPC clusters can run extreme-scale simulations, AI inferencing, and data analyses that may not be feasible on a single system.

Some of the first and most prominent supercomputers were developed by Cray and IBM, which are now Intel® Data Center Builders partners. Modern supercomputers are large-scale HPC clusters made up of CPUs, accelerators, high-performance communication fabric, and sophisticated memory and storage, all working together across nodes to prevent bottlenecks and deliver the best performance.

Scaling Up Performance

HPC applications take advantage of hardware and software architectures that spread computation across resources, typically on a single server. Parallel processing within a single system can offer powerful performance gains, but applications can scale up only within the limits of that system’s capabilities.
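
As a rough illustration, the sketch below uses Python's standard-library multiprocessing module to spread an illustrative workload (a sum of squares) across every core of a single node; the function and problem size are placeholders, not part of any particular HPC stack.

```python
# Minimal scale-up sketch: parallelism within one node using Python's
# standard-library multiprocessing module. The workload (summing squares
# over chunks of a range) is purely illustrative.
from multiprocessing import Pool
import os

def sum_of_squares(bounds):
    """Sum of squares over the half-open range [start, stop)."""
    start, stop = bounds
    return sum(i * i for i in range(start, stop))

if __name__ == "__main__":
    n = 10_000_000
    workers = os.cpu_count() or 1          # use every core on this node
    step = n // workers
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]

    with Pool(processes=workers) as pool:  # one worker process per core
        partials = pool.map(sum_of_squares, chunks)

    print("total:", sum(partials))
```

The ceiling here is the single system: adding work beyond what its cores and memory can handle brings no further gains, which is where scale-out comes in.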

Scaling Out Performance

When multiple systems are configured to act as one, the resulting HPC clusters enable applications to scale out performance by spreading computation across more nodes in parallel.
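
A minimal scale-out sketch, assuming an MPI-style programming model via the mpi4py package (one common choice, not something this article prescribes): each process, or rank, computes a partial result for its slice of the data, and a reduction combines the partial results across nodes.

```python
# Minimal scale-out sketch: the same illustrative sum of squares, split
# across MPI ranks that may live on different cluster nodes.
# Run with a launcher such as: mpirun -n 4 python scale_out.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's index within the job
size = comm.Get_size()   # total number of processes across all nodes

n = 10_000_000
step = n // size
start = rank * step
stop = n if rank == size - 1 else start + step

partial = sum(i * i for i in range(start, stop))

# Combine the per-rank partial sums on rank 0.
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print("total:", total)
```

The same program runs unchanged whether the job spans four ranks on one machine or thousands of ranks across a cluster; the launcher and fabric decide where the work lands.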

Benefits of HPC

HPC is gaining acceptance in academia and in the enterprise as demand grows to handle massive data sets and advanced applications. And with the advent of highly scalable, high-performance processors and high-speed, high-capacity memory, storage, and networking, HPC technologies have become more accessible. Scientists and engineers can run HPC workloads on their on-premises infrastructure, or they can scale up and out with cloud-based resources that do not require large capital outlays.

HPC Use Cases

Today, research labs and businesses rely on HPC for simulation and modeling in diverse applications, including autonomous driving, product design and manufacturing, weather forecasting, seismic data analysis, and energy production. HPC systems also contribute to advances in precision medicine, financial risk assessment, fraud detection, computational fluid dynamics, and other areas.

HPC Components

The most productive high performance computing systems are designed around a combination of advanced HPC hardware and software products. Hardware for HPC usually includes high-performance CPUs, fabric, memory and storage, and networking components, as well as accelerators for specialized HPC workloads.

HPC platform software, libraries, optimized frameworks for big data and deep learning, and other software tools help to improve the design and effectiveness of HPC clusters.
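
As a small example of how optimized libraries fit into that stack, the sketch below relies on NumPy (an assumed, representative library rather than one named by this article) to delegate a matrix multiply to whatever optimized BLAS the installation provides, so the script benefits from vectorized, multithreaded math without any explicit parallel code.

```python
# Minimal sketch: leaning on an optimized library rather than hand-written loops.
# NumPy dispatches this matrix multiply to the BLAS it was built against
# (typically a multithreaded, vectorized implementation), so the same script
# takes advantage of the node's hardware automatically.
import numpy as np

a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)

c = a @ b                     # executed by the underlying optimized BLAS
print("result checksum:", float(c.sum()))
```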

HPC and Intel

Intel delivers a comprehensive technology portfolio to help developers unlock the full potential of high performance computing.

Intel® hardware provides a solid foundation for agile, scalable HPC solutions. Intel also offers an array of software and developer tools to support performance optimizations that take advantage of Intel® processors, accelerators, and other powerful components.

With this broad range of products and technologies, Intel delivers a standards-based approach for many common HPC workloads through the Intel® HPC Platform Specification.

HPC Processors and Accelerators

Intel’s array of HPC processors and accelerators, including Intel® Xeon® Scalable processors and FPGAs, supports HPC workloads in configurations from workstations to supercomputers.

HPC Software and Tools

Intel offers the software and tools that developers need to streamline and accelerate programming efforts for HPC applications, including AI, analytics, and big data software.

HPC Storage and Memory

The evolution of HPC storage and memory requirements has driven the need for latency reduction. Intel® HPC storage and memory solutions are optimized for non-volatile memory (NVM) technologies, HPC software ecosystems, and other HPC architecture components.

Intel® Optane™ SSDs

Intel® Optane™ Solid State Drives (Intel® Optane™ SSDs) provide the storage flexibility, stability, and efficiency needed to help prevent HPC data center bottlenecks and enable improved performance.

DAOS

Enabled by Intel® Optane™ persistent memory, Distributed Asynchronous Object Storage (DAOS) offers dramatic improvements to storage I/O to accelerate HPC, AI, analytics, and cloud projects.

Intel® Optane™ Persistent Memory (PMem)

PMem provides a unique combination of affordable, larger capacity and data persistence, enabling distinctive operating modes that adapt to the varied needs of HPC workloads.

HPC in the Cloud

Today’s HPC cloud services can support the most complex and challenging workloads with many of the same Intel® technologies that are available in the on-premises data center. Throughput for HPC can be accelerated in the cloud, where on-demand availability of computing resources enables jobs to move forward instead of languishing in a queue. An additional advantage of HPC in the cloud is the ability to evaluate the benefits of running your workloads on HPC hardware preselected and configured by the cloud service provider (CSP). Doing so will help you identify what works for you and what adjustments or optimizations you may need.

Researchers and developers can deploy HPC workloads in a combination of on-premises and HPC cloud environments, supported by innovative Intel® architecture-based platforms at major CSPs worldwide. Intel® technology partners can offer expert guidance and advice to help organizations choose the best HPC cloud instances.

Driving Advances in HPC

The latest advances in computing and software are resulting in leaps forward for HPC. Intel works closely with OEMs and the broader HPC ecosystem to optimize hardware performance and integrate HPC with AI and analytics.

Intel is designing new HPC technologies, processors, and materials that help to bring quantum computing from the laboratory to the commercial sector.

The latest Intel® Xeon® Scalable processor-based supercomputers are on track to address the world’s most demanding scientific challenges with exascale computing.

HPC and AI

The combined power of HPC and machine and deep learning shows amazing promise across a variety of applications, from linguistics to genomics sequencing to global climate modeling. Intel is powering HPC and AI by working with ecosystem partners to develop reference architectures and solutions designed specifically for these converged workloads.

Exascale Computing

As scientists and engineers apply HPC technologies to solve ever more challenging problems, they are building powerful supercomputers that operate at exascale. For example, the Aurora Supercomputer at the Argonne National Laboratory is expected to provide entirely new capabilities to integrate data analytics, AI, and simulation for advanced 3D modeling.

Other transformational HPC projects include genomics sequencing at the Broad Institute of MIT and Harvard, advanced research with one of the world’s most powerful supercomputers at the Texas Advanced Computing Center (TACC), and particle accelerator simulations based on data from the Large Hadron Collider at the European Organization for Nuclear Research (CERN).

Intel Supports Powerful, Scalable HPC

High performance computing offers tremendous potential to researchers, developers, and engineers in academia, government, and industry.

The latest advances in computing and software continue to enable new achievements for HPC. However, it can be a challenge to choose the optimal hardware and software configurations that deliver powerful, scalable HPC solutions.

Intel continues to work closely with HPC solution providers to optimize hardware performance and blend HPC and AI with analytics. The power of HPC on Intel® architecture can spur innovation and solve increasingly complex problems, paving the way for a future of exascale and zettascale computing.

Frequently Asked Questions

What is high performance computing (HPC)?

High performance computing (HPC) is a class of applications and workloads that perform computationally intensive operations across shared resources.

Why is demand for HPC growing?

The increased use of data-intensive and AI-enabled applications and use cases has led to growing demand for HPC in academic and industrial settings.

What is HPC used for?

HPC is used for simulation, modeling, and analysis in a broad variety of use cases, including autonomous driving, product design and manufacturing, weather forecasting, precision medicine, financial risk assessment, computational fluid dynamics, and other academic and commercial applications.