
How Cloud-Based Supercomputing Is Changing R&D

While the cloud is now ubiquitous in enterprise computing, there is one area where the shift to cloud has only just quietly begun: supercomputing. A catchall term for the world’s largest, most powerful computers, supercomputers were once available only to governments, research universities, and the most well-heeled corporations, and were used for cracking enemy codes, simulating weather, and designing nuclear reactors. But today, the cloud is bringing supercomputing into the mainstream.

This transition has the potential to accelerate (or disrupt) how businesses deliver complex engineered products, from designing rockets capable of reaching space and supersonic jets to creating new drugs and discovering vast pools of oil and gas hidden deep underground. Just as enterprise cloud computing created new ways for businesses to engage customers and spawned disruptions from software-as-a-service to mobile computing, supercomputing will open up new possibilities for innovation breakthroughs by accelerating R&D and product development by orders of magnitude.

For example, the Concorde supersonic transport program took 25 years and $5 billion (adjusted for inflation) to launch its first commercial flight in 1976. Contrast that timeline with Boom Supersonic, a startup that promises to cut air travel time in half, shuttling passengers between New York and Paris in 3.5 hours. Founded only in 2014, Boom plans to deliver its Overture supersonic airliner in half the time, with a small fraction of the cost and personnel.

Boom’s rapid R&D speed was powered by cloud supercomputing. Rapid software simulations allowed the company to replace most of the physical prototyping and wind-tunnel testing the Concorde required. Because of the cloud, Boom (which is a Rescale client) could afford to quickly run 53 million compute hours on Amazon Web Services (AWS), with plans to scale to more than 100 million compute hours. The company already has commitments from United to buy 15 of its supersonic transport jets, even though the aircraft has yet to fly. That’s how much confidence the airlines have in the millions of hours of computer simulation results produced to date.

So, given the potential of this technology, why are fewer than one in four supercomputers used for simulation cloud-based? The simple answer is that it’s hard. Computational engineering requires a complex and specialized technology stack, and few corporate IT organizations have the in-house expertise to set up a real R&D operation in the cloud.

There are a few reasons for this. First, high-performance computing infrastructure, which makes computational engineering possible, is a new offering for public cloud providers. Second, the simulation software required can be complex to set up and maintain. Third, choosing the right software/hardware combination, and maintaining the proper configuration as the underlying technology advances, is critical to achieving optimal performance for computational engineering workloads. I’m familiar with how challenging this process can be for organizations because my firm, Rescale, specializes in helping companies set up and automate these systems.
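To make that third challenge concrete, here is a toy sketch of the kind of matching logic involved. The workload names, hardware profiles, and constraints below are entirely hypothetical, not any provider's actual catalog; the point is simply that solvers differ in memory footprints and interconnect needs, and the right hardware follows from those constraints.

```python
# Illustrative only: a toy matcher pairing simulation workloads with
# hardware profiles. All names and numbers here are hypothetical.

WORKLOAD_NEEDS = {
    # workload: (min memory GB per core, needs low-latency interconnect?)
    "cfd_external_aero": (4, True),    # tightly coupled MPI solver
    "fea_structural": (16, False),     # memory-hungry, loosely coupled
    "design_space_sweep": (8, False),  # embarrassingly parallel batch
}

HARDWARE_PROFILES = {
    "hpc_fabric_node": {"mem_per_core": 4, "fast_interconnect": True},
    "himem_node": {"mem_per_core": 16, "fast_interconnect": False},
    "general_node": {"mem_per_core": 8, "fast_interconnect": False},
}

def suitable_profiles(workload: str) -> list:
    """Return the hardware profiles that satisfy a workload's constraints."""
    mem_needed, needs_fabric = WORKLOAD_NEEDS[workload]
    return [
        name
        for name, hw in HARDWARE_PROFILES.items()
        if hw["mem_per_core"] >= mem_needed
        and (hw["fast_interconnect"] or not needs_fabric)
    ]

if __name__ == "__main__":
    for wl in WORKLOAD_NEEDS:
        print(wl, "->", suitable_profiles(wl) or "no match: reconfigure")
```

In practice this decision involves many more dimensions (CPU generation, memory bandwidth, storage throughput, software licensing), which is exactly why keeping configurations optimal as hardware evolves is hard.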

While it can be difficult to get a cloud-based supercomputer up and running, the rewards can make it well worth the effort. Today, researchers can use their simulation software of choice on nearly unlimited computing power, without ever having to worry about the infrastructure, and run cloud-based desktops to interact with their simulations or models. Technology leaders can apply policies to control costs and strike the right balance between time-to-solve and cost. In short, it’s an R&D-centric supercomputing experience, available on demand and billed by consumption.
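To illustrate what such a policy could look like, here is a minimal sketch assuming a simple Amdahl's-law scaling model; the job parameters and hourly rate are made-up numbers, and a real scheduler would rely on measured scaling curves rather than this idealized formula.

```python
# A toy "time-to-solve vs. cost" policy under an Amdahl's-law scaling
# model. All parameters are hypothetical, for illustration only.

HOURLY_RATE_PER_NODE = 3.0  # assumed on-demand price, $/node-hour
SERIAL_HOURS = 2.0          # non-parallelizable portion of the job
PARALLEL_HOURS = 400.0      # perfectly parallelizable portion (on 1 node)

def time_to_solve(nodes: int) -> float:
    """Wall-clock hours when running on `nodes` nodes."""
    return SERIAL_HOURS + PARALLEL_HOURS / nodes

def job_cost(nodes: int) -> float:
    """Consumption billing: all nodes are reserved for the full run."""
    return nodes * time_to_solve(nodes) * HOURLY_RATE_PER_NODE

def cheapest_within_deadline(deadline_hours: float, max_nodes: int = 512) -> int:
    """Smallest node count that meets the deadline (also the cheapest here,
    since total node-hours only grow as nodes are added)."""
    for n in range(1, max_nodes + 1):
        if time_to_solve(n) <= deadline_hours:
            return n
    raise ValueError("deadline unreachable under this scaling model")

if __name__ == "__main__":
    n = cheapest_within_deadline(deadline_hours=6.0)
    print(f"{n} nodes -> {time_to_solve(n):.1f} h, ${job_cost(n):,.0f}")
```

Under these toy numbers, a 6-hour deadline selects 100 nodes at $1,800, while relaxing the deadline to 24 hours drops the job to 19 nodes and roughly $1,300. That is the time-versus-cost dial described above.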

The question is: How do you know when you have a problem that a supercomputer could help solve?

When Is a Supercomputer Worth It?

In the last decade, big data gave the enterprise profound new business insights and improved how large data sets are analyzed. Computational methods in R&D will improve the physical performance of engineered products through simulation just as profoundly. The common thread in all simulations is that we are predicting how a product would likely interact with its environment, based on the scientific principles that shape our world — from physics to chemistry to thermodynamics.
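At its smallest scale, that idea looks something like the sketch below: a toy thermal model, integrated step by step, that predicts how a component's temperature evolves in its environment. Real engineering simulations apply the same principle with far richer physics and vastly more compute; every parameter value here is illustrative.

```python
# A deliberately tiny "simulation": Newton's law of cooling for a
# powered component, integrated with explicit Euler steps.
# All parameter values are made up for illustration.

AMBIENT_C = 25.0             # environment temperature, deg C
K_PER_SECOND = 0.002         # assumed heat-transfer coefficient, 1/s
HEATING_C_PER_SECOND = 0.05  # temperature rise from the component's power draw

def simulate_temperature(initial_c: float, seconds: int, dt: float = 1.0) -> float:
    """Integrate dT/dt = heating - k * (T - T_ambient) and return final T."""
    temp = initial_c
    for _ in range(int(seconds / dt)):
        d_temp = HEATING_C_PER_SECOND - K_PER_SECOND * (temp - AMBIENT_C)
        temp += d_temp * dt
    return temp

if __name__ == "__main__":
    # Steady state is where heating balances cooling:
    # T = AMBIENT_C + HEATING_C_PER_SECOND / K_PER_SECOND = 50 C.
    print(f"after 1 hour: {simulate_temperature(25.0, 3600):.1f} C")
```

A production-grade thermal or fluid simulation discretizes space as well as time, which is where the millions of compute hours come in.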

Cloud-based supercomputing can be particularly helpful to organizations in the following situations:

Accelerate time to market: Evaluating new designs through cloud-based simulation instead of physical prototyping can dramatically accelerate how fast companies are able to commercialize new product innovations. Florida-based startup Sensatek created an innovative IoT sensor that adheres to turbine blades to measure the internal stresses on jet engines during flight. The Air Force wanted to buy Sensatek’s sensors, but the company didn’t have the resources to buy supercomputers to perfect its product fast enough, until it turned to high-performance computing in the cloud. Similarly, Specialized Bicycles pairs simulation with rapid prototyping to quickly fine-tune its road bikes’ aerodynamics and overall performance.

Digital twins: Simulating a product’s interaction with real-world scenarios is critical when physical prototyping is impractical. For example, Commonwealth Fusion Systems, a nuclear fusion startup, relies on simulations to validate potential reactor designs, as no commercial fusion reactor has ever existed. Firefly Aerospace, a Texas-based rocket startup, relies on computational engineering to explore and test the designs of its moon-bound commercial rockets. Similarly, drug manufacturers need complex simulations to know how molecules will interact with a biological environment before they can commit to producing new drugs.

Combine AI/ML with simulation: Simulations can not only predict how a single human-designed product might perform, but they can also predict the performance of a full range of potential designs. Organizations investing in these virtual experiments develop intellectual property in models covering a broad range of design parameters and their implications for product performance; this is where early-adopter companies gain a competitive advantage with their data assets (a simple sketch of this surrogate-model pattern follows this list). Automakers like Nissan, Hyundai, and Arrival make it much easier and faster for their engineers to test new design techniques to build safer and more efficient vehicles in an increasingly complex operating environment with autonomous, electric, and connected capabilities. In developing advanced driver assistance systems, ML algorithms can train driver software in simulated worlds. Just as aircraft wind-tunnel testing has gone virtual, so can testing for autonomous driving systems. In the life sciences, Recursion Pharmaceuticals is applying artificial intelligence to biology, accelerating new drug discovery by analyzing cells 20 times faster using machine learning on supercomputers.

New computation-enabled products or services: Cloud’s scale and connected nature create new possibilities for science and engineering. For example, Samsung Electronics created a cloud-based platform for computational engineering collaboration, so fabless customers — who design and sell hardware, but don’t manufacture it — can use diverse electronic design automation tools on demand and collaborate on designs with Samsung ahead of manufacturing. This approach essentially brings continuous integration (a practice common in software development today) to engineered products, as sketched in the second example below. Engineers can not only quickly validate their design decisions but also integrate their designs into an overall system for seamless collaboration and system-level simulation and validation.
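Here is a minimal sketch of the surrogate-model pattern mentioned above: sample a design space with a handful of (notionally expensive) simulations, fit a cheap model, then sweep thousands of virtual designs through it. The "simulation" is a stand-in toy function, and every number is made up.

```python
# A toy surrogate-model workflow: expensive simulations -> cheap model
# -> dense design-space sweep. The "simulation" is a stand-in function.
import numpy as np

def expensive_simulation(sweep_angle_deg: float) -> float:
    """Stand-in for a CFD run: toy drag coefficient vs. wing sweep angle."""
    return 0.030 + 2e-5 * (sweep_angle_deg - 35.0) ** 2 + np.random.normal(0, 1e-4)

# 1. Sample the design space sparsely with "real" simulations.
train_angles = np.linspace(20.0, 50.0, 7)
train_drag = np.array([expensive_simulation(a) for a in train_angles])

# 2. Fit a cheap surrogate (quadratic least-squares fit).
surrogate = np.poly1d(np.polyfit(train_angles, train_drag, deg=2))

# 3. Query the surrogate densely: thousands of virtual designs for free.
candidate_angles = np.linspace(20.0, 50.0, 3001)
best = candidate_angles[np.argmin(surrogate(candidate_angles))]
print(f"surrogate suggests minimum drag near {best:.1f} deg of sweep")
```

The trained model, not any single simulation run, is the durable data asset: it encodes how performance varies across the whole design space.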
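And here is a minimal sketch of what continuous integration for an engineered product could look like: a check that runs on every design change and fails the pipeline when a simulated metric regresses. The run_simulation stub, the file name, and the thresholds are all placeholders, not any particular platform's API.

```python
# A toy "CI for engineered products" gate: run a simulation on each
# design revision and fail the build if requirements are violated.
# The solver call, metrics, and limits below are hypothetical.

REQUIREMENTS = {
    "max_drag_coefficient": 0.032,  # upper limit
    "min_lift_to_drag": 18.0,       # lower limit
}

def run_simulation(design_file: str) -> dict:
    """Placeholder: submit `design_file` to a solver and collect metrics."""
    return {"max_drag_coefficient": 0.029, "min_lift_to_drag": 19.4}

def validate(design_file: str) -> None:
    """Exit nonzero (failing the CI job) if the design misses any requirement."""
    metrics = run_simulation(design_file)
    failures = []
    if metrics["max_drag_coefficient"] > REQUIREMENTS["max_drag_coefficient"]:
        failures.append("drag exceeds budget")
    if metrics["min_lift_to_drag"] < REQUIREMENTS["min_lift_to_drag"]:
        failures.append("lift-to-drag below requirement")
    if failures:
        raise SystemExit("design check failed: " + "; ".join(failures))
    print(f"design check passed for {design_file}")

if __name__ == "__main__":
    validate("wing_rev42.step")  # hypothetical design artifact
```

Wired into a version-control pipeline, a gate like this gives hardware teams the same fast feedback loop that software teams already take for granted.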

From Big Data to Big Compute

With all the investments in the past decade around social media, mobile, and cloud technologies, the next major industry transformations are likely to come in the world of science and engineering. In this new world, data generation — not just collection — will grow in importance as simulations that create digital twins of real-world products become more common.

Harnessing supercomputing in the cloud is becoming foundational to innovation in many industries, particularly as continuous integration and continuous delivery tie R&D ever closer to product cycles and a company’s software delivery process. Supercomputing in the cloud is making possible what seemed like science fiction yesterday. Indeed, there are entire industries that only exist because of this new computational capability — such as private space travel.

Rocket companies like SpaceX and Blue Origin were barely possible 15 years ago. These innovation leaders in aerospace required hundreds of millions of dollars just to build the computing infrastructure that could run the simulations their businesses required. But next-generation aerospace companies like Firefly, Relativity, and Virgin Orbit can now deliver R&D results at less than a tenth of the cost of their legacy peers. And they can do it today at any scale, rapidly lowering the barriers to innovation.

Today, anyone can spin up a world-class supercomputer with a credit card. This changes the pace and dynamics of innovation, the impact of which is only beginning to emerge.