It once seemed inevitable, and to some it still does, that supercomputers would help companies meet the demands posed by massive databases, complex engineering tools, and other processor-intensive challenges. Then, all of a sudden, technology and companies took a different path.
Chris Monroe, co-founder and chief scientist at quantum computing company IonQ, offers a simple explanation for the sudden change in interest. “Supercomputers have failed to catch on because, while they hold the promise of speed and the ability to tackle big computational problems, they come with a large physical footprint [and] power/cooling requirements,” he notes. “When it comes to mainstream adoption, supercomputers never achieved the right balance between affordability, size, reach, and value-added corporate use cases.”
Supercomputers have traditionally been defined by combining a set of parallel machines that provide very high computing throughput with fast interconnects. “This is in contrast to traditional parallel processing, where [there are] lots of networked servers solving a problem,” explains Scott Buchholz, chief technology officer of government and public services and director of national emerging technology research at Deloitte Consulting. “Most business problems can be solved either by the latest generation of standalone processors or by parallel servers.”
The arrival of cloud computing and easily accessible APIs, along with the growth of private cloud and SaaS software, has put high-performance computing (HPC) and supercomputers in the rearview mirror, notes Chris Mattmann, Chief Technology and Innovation Officer (CTIO) at NASA’s Jet Propulsion Laboratory (JPL). “Outside of science and other boutique uses, HPC/supercomputers… have not kept up with modern [business] standards.”
Today, while most companies have moved away from supercomputers, science and engineering teams often turn to the technology to help them tackle a variety of highly complex tasks in areas such as weather forecasting, molecular simulation, and fluid dynamics. “The range of scientific and simulation problems that supercomputers are uniquely suited to solving will not go away,” Buchholz says.
Supercomputers are primarily used in areas where large models are developed to make predictions that include a large number of measurements, notes Francisco Weber, CEO of Cortical.io, a company that specializes in extracting value from unstructured documents.
“The same algorithm is applied over and over again to many individual states that can be computed in parallel,” says Weber, hence the potential for acceleration when running on a large number of CPUs. He explains that supercomputer applications range from experiments at the Large Hadron Collider, which can generate petabytes of data per day, to meteorology, where complex weather phenomena are broken down into the behavior of countless particles.
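The data-parallel pattern Weber describes can be sketched in a few lines. This is an illustrative toy, not code from any of the systems mentioned: the `step` function stands in for one expensive simulation step applied independently to many states, so the work can be spread across CPU cores.

```python
from multiprocessing import Pool

def step(state):
    # Stand-in for one expensive simulation step (e.g., advancing
    # one particle in a weather model). The formula is arbitrary.
    return state * 1.01 + 0.5

if __name__ == "__main__":
    states = [float(i) for i in range(1000)]
    with Pool() as pool:                        # one worker per CPU core
        new_states = pool.map(step, states)     # same algorithm, many states
    print(len(new_states))  # 1000
```

Because each state is updated independently, the speedup scales (up to overhead) with the number of processors, which is exactly the property supercomputers exploit at far larger scale.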
There is also growing interest in supercomputers based on graphics processing units (GPUs) and tensor processing units (TPUs). “These machines may be well suited to some AI and machine learning problems, such as training algorithms [and] analyzing large amounts of image data,” Buchholz says. “As these use cases grow, there may be more opportunities to ‘rent’ time via the cloud or other service providers for those who need periodic access, but don’t have enough use cases to warrant the direct purchase of a giant computer.”
While supercomputers have mostly been relegated to large academic and government laboratories, they have found a foothold in a few specific industry sectors, such as petroleum, automotive, aerospace, chemical, and pharmaceutical companies. “While adoption is not necessarily widespread on a large scale, it does demonstrate the ability of these organizations to invest and experiment,” Monroe says.
Mattmann predicts that the focus going forward will be on new types of supercomputer architectures, such as neuromorphic computing and quantum computing. “This is where computing giants will invest to disrupt the traditional cloud-powered model.”
Monroe notes that classical computing will simply reach its limits. “Moore’s Law no longer applies, and organizations need to think beyond silicon,” he advises. “Even the best supercomputers… are dated the moment they are designed.” Monroe adds that he is also beginning to see calls to combine supercomputers with quantum computers to create a hybrid computing architecture.
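The hybrid architecture Monroe alludes to typically takes the form of a loop: a classical machine proposes parameters, a quantum processor evaluates them, and the classical side updates. The sketch below only illustrates that control flow; `evaluate_on_qpu` is a hypothetical placeholder (here a plain classical function), not a real quantum API.

```python
def evaluate_on_qpu(theta):
    # Hypothetical stand-in for a quantum circuit's measured cost;
    # a simple quadratic with its minimum at theta = 1.5.
    return (theta - 1.5) ** 2

def hybrid_minimize(theta=0.0, lr=0.1, steps=50, eps=1e-4):
    # Classical outer loop: estimate the gradient from "QPU" readouts
    # by finite differences, then take a gradient-descent step.
    for _ in range(steps):
        grad = (evaluate_on_qpu(theta + eps)
                - evaluate_on_qpu(theta - eps)) / (2 * eps)
        theta -= lr * grad
    return theta

print(round(hybrid_minimize(), 2))  # converges toward 1.5
```

This division of labor, with classical optimization wrapped around quantum evaluation, is the general shape of variational hybrid algorithms.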
Ultimately, however, Monroe anticipates the widespread adoption of powerful and stable quantum computers. “Their unique computational power is better suited to solving complex and large-scale problems, such as financial risk management, drug discovery, macroeconomic modeling, climate change, and more—beyond the capabilities of even the largest supercomputers,” he notes. “While supercomputers still have a huge presence…the top business minds are already looking at quantum.”
Buchholz doesn’t expect major organizations to reverse their view of supercomputers anytime in the foreseeable future. “If the question is whether most organizations need a multi-million dollar special-purpose device, the answer is generally ‘no’, because most applications and systems target what can be done with today’s commodity hardware,” he explains.
On the other hand, Buchholz points out, technology momentum may eventually sweep many companies into the supercomputer market, whether they realize it or not. “It’s important to remember that today’s supercomputer is the commodity machine of the next decade,” he says.