Berkeley Using New Tech for Quake Studies
Researchers at Berkeley Lab and Lawrence Livermore National Laboratory are using high-performance computing systems to better predict how structures will respond to an earthquake.
Led by principal investigator David McCallen, the researchers are using the systems to model a 7.0-magnitude earthquake along the San Francisco Bay Area’s Hayward Fault under different rupture scenarios.
Berkeley says a focus of the project is preparing for the Department of Energy’s emerging exascale computing systems, which are expected to be available in 2022 or 2023.
Those systems will be capable of 50 times more scientific throughput than current high-performance computing systems, allowing higher-fidelity simulations and faster modeling of different scenarios.
The team’s goal is to learn how these different earthquake scenarios would affect structures throughout the San Francisco Bay Area.
McCallen notes that the new approach is better because it lets scientists and engineers use physics-based models to predict regional-scale ground motions and the variability of those motions. That contrasts with how earthquakes have traditionally been assessed, which is usually by looking at records of past earthquakes.
It is a three-step modeling process, McCallen says: fault rupture, wave propagation through the Earth, and interaction between the waves and the structure at the site.
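The three stages McCallen describes can be sketched as a simple pipeline. This is an illustrative outline only, not the team’s actual code; the function names, inputs, and placeholder values are assumptions made for clarity.

```python
# Illustrative sketch (not the researchers' actual software) of the
# three-stage earthquake-simulation pipeline: fault rupture, wave
# propagation through the Earth, and structural response at the site.

def simulate_rupture(magnitude, fault="Hayward"):
    """Stage 1: model the fault rupture that generates seismic energy."""
    # Placeholder dict standing in for a rupture source model.
    return {"fault": fault, "magnitude": magnitude}

def propagate_waves(rupture, region="SF Bay Area"):
    """Stage 2: propagate seismic waves from the source through the Earth."""
    # In practice this would be a large 3D wave-equation solve.
    return {"region": region, "source": rupture, "ground_motion": "synthetic"}

def structural_response(ground_motion, structure="example building"):
    """Stage 3: compute how a specific structure responds to the motions."""
    return {"structure": structure, "input": ground_motion}

# Chain the three stages end to end for one rupture scenario.
rupture = simulate_rupture(magnitude=7.0)
motion = propagate_waves(rupture)
response = structural_response(motion)
```

The key point of the chained design is that the output of each stage (a rupture model, then regional ground motions) becomes the input to the next, so different rupture scenarios can be swapped in at the first stage and carried through to structural damage estimates.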
“There’s a lot of uncertainty in predicting future earthquake motions and what particular facilities would be subjected to,” he said.
“And you really need to understand those motions because if you understand the input to the structure, you can then model the structural response and understand the potential for damage.”
The advanced machines will let the research move more quickly because of their heightened ability to resolve ground motions at higher frequencies, measured in hertz.
“Because of computer limitations we could previously resolve ground motions to only about one or two hertz: ground motions that vibrate back and forth about one or two times per second,” McCallen said.
“To do accurate engineering evaluations, we need to get all the way up to eight to 10 hertz. We’ve been able to do five- to 10-hertz simulations with the highest-speed computers now, but those take a long time, like 20 to 30 hours.”
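A back-of-the-envelope calculation, not from the article, shows why higher-frequency simulations are so much more expensive. A standard rule of thumb for finite-difference seismic wave solvers is that the grid spacing needed to resolve a frequency f shrinks like 1/f, so the number of grid points grows like f³ and the stable timestep shrinks like 1/f, giving a total cost that scales roughly as f⁴.

```python
# Rough f**4 cost-scaling sketch for seismic wave simulation
# (a common rule of thumb, not a figure quoted by McCallen).

def relative_cost(f_target_hz, f_baseline_hz):
    """Cost of resolving f_target relative to f_baseline, assuming ~f^4 scaling:
    grid points grow like f**3 and the timestep count grows like f."""
    return (f_target_hz / f_baseline_hz) ** 4

# Moving from the old 2 Hz limit to the 10 Hz engineering target
# multiplies the work by about 625x under this scaling assumption.
print(relative_cost(10, 2))  # → 625.0
```

Under this rough scaling, a machine with about 50 times more throughput absorbs a large share of that extra cost, which is consistent with the hoped-for drop from 20–30 hours to a few hours per simulation.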
McCallen says that the new system will hopefully knock that timeframe down to just three to five hours.