Supercomputing and Exascale Computing Overview

Unlock the full potential of data across government, academia, and the enterprise.

Takeaways:

  • Supercomputing helps solve complex and data-intensive problems in academia, enterprises, and government to improve all of our lives.

  • Starting in 2022, Argonne National Lab will be home to the Aurora exascale computer, one of the first US exascale supercomputers capable of performing 10¹⁸ calculations per second.

  • Intel will provide new technology that will empower this three-pillar machine to simultaneously manage modeling and simulation, AI and machine learning, and big data and analytics.



Supercomputing helps solve problems in science, engineering, and business that make our world a better place—from individualized treatments for cancer to predictions of climate change to understanding the chemistry of the big bang. Addressing these complex and data-intensive problems requires massive computational power. As data sets have grown, along with the need for speed, high performance computing (HPC) has advanced from supercomputers to exascale computers like the Aurora exascale computer that Argonne National Lab will launch in 2022. National labs like Argonne, Oak Ridge, and Lawrence Livermore make HPC facilities available to industry and academia, providing a profound intellectual and economic benefit from public investments.

What Is Supercomputing?

Supercomputing is a form of HPC using enormous databases, high-speed computations, or both. Hyperion Research classifies supercomputers as any system priced at more than USD 500,000. Supercomputers contain hundreds to thousands of nodes, similar to workstation computers, working in parallel. Each node has many processors or cores that carry out instructions. To ensure they all work in sync, the computers communicate through a network. Today the fastest supercomputers solve problems at petascale—10¹⁵ calculations per second (or to be precise, floating point operations per second)—but that will change with the introduction of exascale computers, which are a thousand times faster. For a look at the world’s fastest supercomputers, see the TOP500.
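Putting the two prefixes side by side makes the jump explicit:

$$1\ \text{exaFLOPS} = 10^{18}\ \text{FLOPS} = 1{,}000 \times 10^{15}\ \text{FLOPS} = 1{,}000\ \text{petaFLOPS}$$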

Historically, the problems supercomputing solved involved modeling and simulation. Examples include studying colliding galaxies, the subatomic characteristics of atoms, or even designing a shampoo bottle that doesn’t break when you drop it. Automotive manufacturers have used supercomputing to take the design life cycle of a car from five years to two, saving billions of dollars and years of time by reducing the number of wind-tunnel simulations needed. But supercomputing also focuses on new questions, and even uncovers new problems that need to be solved.

What Is Exascale Computing?

Exascale computers are capable of calculating at least 1018 floating point operations per second, the equivalent of one exaFLOP. For an idea of just how fast the Aurora exascale computer will be, imagine each of the approximately eight billion people on earth using a calculator to multiply two numbers—say 1056.26 x 784.98—every 10 seconds. At that rate, it would take us 40 years to complete the calculations an exascale computer does in one second. That’s a billion billion calculations per second—or a quintillion, for those accustomed to thinking in lots of zeroes.
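For readers who want to check the arithmetic behind that comparison (eight billion people, one multiplication every 10 seconds, as above):

$$8 \times 10^{9}\ \text{people} \times \frac{1\ \text{calculation}}{10\ \text{s}} = 8 \times 10^{8}\ \text{calculations per second}$$

$$\frac{10^{18}\ \text{calculations}}{8 \times 10^{8}\ \text{calculations/s}} = 1.25 \times 10^{9}\ \text{s} \approx 40\ \text{years}$$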

Trends in Supercomputing

One way supercomputers generate new discoveries is by processing and analyzing ever more massive and valuable data sets. So it follows that the major trends in supercomputing today address the sheer size of those data sets, with the infusion of artificial intelligence techniques, big data analytics, and edge computing.

Artificial intelligence. AI techniques enable supercomputers to make inferences by analyzing increasingly large data sets. But AI also requires the processing power to analyze all of that data, and exascale systems can handle that work much more quickly. Scientists will be able to ask questions and get answers that they never could before.

Big data analytics. Big data has become a primary driver of new and expanded HPC installations. For now, most HPC big data workloads are based on traditional simulation and modeling. But going forward, the technical and business forces shaping big data will lead to new forms of HPC configurations to garner insights from unimaginably big data sets.

Edge computing. Edge computing has become a prolific source of new data sets. These data sets arise from both single instruments capturing immense amounts of data and from the billions of connected devices spread around the world. For example, the lidar telescope in the Andes and the Square Kilometre Array radio telescope operating in Western Australia and South Africa generate huge amounts of data. But so do smart cities that use multitudes of sensors and cameras for traffic management and public safety. All that data feeds problems that require HPC to solve.

Challenges of Supercomputing

The challenges of supercomputing, especially in creating an exascale computer like Aurora, fall into three main areas: power, scale, and heterogeneity.

Power. The world’s fastest petascale computer requires 28.3 megawatts to run. Although no organization currently building an exascale computer has announced specifics, exascale computers are expected to need 30 to 50 megawatts of power to operate. To put that in perspective, 50 megawatts would be sufficient to power the residential buildings of a town of 50,000 to 70,000 people. Over a year, one megawatt of electricity costs roughly one million dollars, so reducing supercomputer power consumption remains a critical focus. Processors based on innovative microarchitectures enable scalable performance and energy efficiency.
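As a rough sanity check on that cost figure, assuming an illustrative electricity rate of about USD 0.11 per kilowatt-hour (an assumption for this back-of-the-envelope estimate, not a figure from the article):

$$1\ \text{MW} \times 8{,}760\ \frac{\text{hours}}{\text{year}} = 8.76 \times 10^{6}\ \text{kWh per year}$$

$$8.76 \times 10^{6}\ \text{kWh} \times \$0.11/\text{kWh} \approx \$1\ \text{million per year}$$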

Scale. Over the last 30 years, supercomputers have moved from a single processor with one thread, to many cores and hyperthreading, to thousands of nodes all working together. Now, developers writing an application for exascale computing need to break the problem into multiple pieces that take full advantage of the computer’s parallel nature and ensure the threads stay in sync, as sketched below.
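Here is a minimal C++ sketch of that idea, using a toy workload (summing a large array) rather than a real simulation code: the problem is split into per-thread chunks, each chunk is processed independently, and a synchronization point brings the partial results back together.

```cpp
// Toy domain decomposition: split one big array across worker threads,
// compute partial sums in parallel, then join and combine the pieces.
#include <algorithm>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t n = 1'000'000;
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<double> data(n, 1.0);           // stand-in for real simulation data
    std::vector<double> partial(workers, 0.0);  // one slot per thread, so no sharing

    std::vector<std::thread> pool;
    const std::size_t chunk = n / workers;
    for (unsigned w = 0; w < workers; ++w) {
        const std::size_t begin = w * chunk;
        const std::size_t end = (w + 1 == workers) ? n : begin + chunk;
        pool.emplace_back([&, w, begin, end] {
            partial[w] = std::accumulate(data.begin() + begin, data.begin() + end, 0.0);
        });
    }
    for (auto& t : pool) t.join();  // the "stay in sync" step: wait for every piece

    const double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::cout << "sum = " << total << "\n";  // expect 1e+06
}
```

Real exascale applications distribute work across thousands of nodes and accelerators rather than threads on one machine, but the decompose-then-synchronize pattern is the same.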

Heterogeneity. At one time, developers wrote code for only one component in a computer: a core processor. But handling HPC and AI workloads today demands thousands of processing nodes with 10 times as many processing cores. Diverse architectures combine CPUs and GPUs—and now FPGAs and other types of accelerators—leaving the developer to decide which will work best for each type of computation that must be performed. Developers can’t write code just once—each processor type requires separate code that must work with the others. And as algorithms and software become more complex, so does the work of integrating proprietary programming models, leading to vendor lock-in. The oneAPI cross-industry, open, standards-based unified programming model is an industry initiative to deliver a common developer experience for faster application performance, increased productivity, and greater innovation (see the sketch below).
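A minimal sketch of what that single-source approach looks like with oneAPI’s Data Parallel C++ (SYCL), using a trivial vector-add kernel chosen purely for illustration: the same kernel source is dispatched to whichever device the runtime selects, rather than being rewritten per processor type.

```cpp
// Single-source SYCL sketch: one kernel, runnable on whichever device
// (CPU, GPU, or other accelerator) the default queue selects.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr std::size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    sycl::queue q;  // default selector picks an available accelerator, else the CPU
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    {   // buffers hand data to the device; results copy back when they go out of scope
        sycl::buffer<float, 1> bufA(a.data(), sycl::range<1>(N));
        sycl::buffer<float, 1> bufB(b.data(), sycl::range<1>(N));
        sycl::buffer<float, 1> bufC(c.data(), sycl::range<1>(N));

        q.submit([&](sycl::handler& h) {
            sycl::accessor A(bufA, h, sycl::read_only);
            sycl::accessor B(bufB, h, sycl::read_only);
            sycl::accessor C(bufC, h, sycl::write_only, sycl::no_init);
            h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];  // the same kernel source, whatever the device
            });
        });
    }

    std::cout << "c[0] = " << c[0] << "\n";  // expect 3
}
```

This is a sketch of the programming model, not Aurora’s production code; building it requires a SYCL-capable compiler such as the one in the Intel® oneAPI toolkits.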

The Aurora Supercomputer

In partnership with Argonne National Laboratory, Intel is engineering one of the first US exascale supercomputers. The Aurora Supercomputer will enable breakthrough science, innovation, and discovery to address some of the world’s greatest challenges.

Aurora is a three-pillar machine, meaning it’s being designed to enable modeling and simulation, AI and machine learning, and big data and analytics to work efficiently together. This will require a massive amount of storage on a high-performance fabric. Aurora will also use the Ponte Vecchio high-performance, general-purpose Xe-based GPU optimized for HPC and AI workloads.

“Aurora will be focused not only on the standard modeling and simulation that supercomputers historically have had, but also will be a great machine to solve AI problems and perform big data analytics.”

- Dr. Robert Wisniewski, chief HPC architect and Aurora technical lead and PI

Argonne is conducting an Early Science Program (ESP) to provide research teams with preproduction computing time and resources, so they’ll be able to run key applications when they shift to the Aurora exascale machine. The ESP currently includes both modeling and simulation projects and AI projects, partnering in a center of excellence.

In addition, the Department of Energy (DOE) sponsors the Exascale Computing Project (ECP), with the goal of accelerating delivery of a capable exascale computing ecosystem that includes applications, software, hardware, architecture, and workforce development. The ECP addresses challenges in 24 application areas critical to the DOE, including basic sciences, applied energy, and national security.

Exascale Use Cases

Exascale computing handles a range of science, engineering, and enterprise problems. As an example, HPC can help unravel three main aspects of the COVID-19 puzzle and combine data from all three sources to inform modeling and simulation.

Analyze test information. Every day, millions of people around the world get tested for COVID-19. By analyzing this edge data, scientists can better understand disease vectors—such as transmission through air—and how they can be slowed.

Understand clinical causes of disease. Probing enormous amounts of complex patient data supplied by hospitals, properly anonymized to meet privacy requirements, can help determine clinical causes and offer new insights into diagnosis and treatment.

As an example, supercomputers using Intel® technology are exploring the atomic structure of the COVID-19 virus, investigating spread and containment via a “digital twin” of the US population, and identifying targets for new drug treatment therapies.


Drug discovery. Pharmaceutical companies are working to develop treatments, including vaccinations. This requires massive computation to simulate how the COVID-19 virus replicates and attaches to cells, and to assess the efficacy of various chemical and antiviral agents.

For instance, the Texas Advanced Computing Center is working with researchers at the Amaro Lab at the University of California San Diego to develop models of the COVID-19 virus and other systems to better prepare for and design therapeutics.



Future of Supercomputing

The future of supercomputing encompasses varied types of computing and a focus on using AI to tap the potential of the burgeoning amount of data collected.

New Types of Computing

Although supercomputers initially used only CPUs, they now also use powerful GPUs, FPGAs, and other accelerators that implement functions to perform operations faster or with greater energy efficiency. GPUs accelerate CPUs for scientific and engineering applications by handling some of the compute-intensive and time-consuming tasks. FPGAs can be configured as needed for various applications to offer tremendous performance boosts. And the new Habana Labs AI training processor and AI inference processor can speed up evolving AI workloads. But that’s just the start. As researchers break apart applications, other technologies will allow continuing advances.

Neuromorphic computing offers exceptional properties of computation per power consumed. Using hardware, neuromorphic computing emulates the way neurons are organized, communicate, and learn in the brain. Neuromorphic computing will help AI address novel situations and abstractions as part of automating ordinary human activities.

Quantum computing offers extraordinary potential to decrease the time needed to solve problems. Quantum computing reimagines the binary on-off encoding of data fundamental to present-day computing, and replaces bits with qubits that can simultaneously manifest multiple states. That could enable computing at unprecedented levels of massive parallelism.
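In standard textbook notation (not specific to any vendor’s hardware), a single qubit’s state is a weighted combination of the two classical values:

$$|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^{2} + |\beta|^{2} = 1$$

A register of $n$ qubits is described by $2^{n}$ such complex amplitudes at once, whereas $n$ classical bits hold only one of their $2^{n}$ possible values at a time—the source of the massive parallelism described above.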

Software

As researchers continue to include AI in HPC applications, they generate more insight and ideas on how they can use it. AI will enable modeling and simulation on a new scale to handle extremely complex problems like climate modeling. AI can also identify anomalies in massive amounts of data and provide pointers to scientists on what they might investigate. Advanced libraries and tools, such as those in the Intel® oneAPI toolkits, simplify programming and help developers improve efficiency and innovation.

Unmatched Portfolio and Ecosystem Empower Breakthrough Research and Discovery

Supercomputers—and now exascale computers—put powerful tools into the hands of researchers who can make previously unimaginable breakthroughs to move society forward. Achieving outstanding performance across a diverse range of real-world HPC and AI workloads—manufacturing, life sciences, energy, and financial services—requires a partner and technology that adapts as needs change. That, plus support from a broad supercomputing ecosystem, will unlock the full potential of data across government, academia, and the enterprise.

Intel® Supercomputing Technology

Intel has the tools and technology to support solutions to problems in intensive science, engineering, and business that require high-speed computations or large databases.