


Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Normally, even a two-fold speedup is a big deal for a large-scale simulation, saving large amounts of time (and energy, and money) on machines that are often booked to capacity. Now, a team of researchers from Stanford University and the University of Oxford has applied deep learning to speed simulations quite a bit more – by up to billions of times – without sacrificing accuracy.

Most simulations start from the ground up, building a system – such as a cell, a climate or a galaxy – one piece at a time and then letting it operate under a set of rules and other inputs to produce an outcome and answer a question. Emulators help accelerate this process: researchers feed a series of inputs and corresponding outputs into a machine learning model, after which the emulator attempts to predict what the output would be for a given set of inputs.

But producing training data and optimized architectures for emulators means running those costly simulations many, many times, diluting the computational benefits. These researchers’ approach takes a different route: a tool called Deep Emulator Network SEarch (DENSE). DENSE is based on an approach co-developed by Melody Guan of Stanford University, who told Science that she was excited to see her work used for scientific discovery. DENSE adds random layers between a model’s inputs and outputs, testing with each iteration whether the added layer improves performance, which allows it to train the model quickly. DENSE emulators can also solve inverse problems, identifying the input parameters that best reproduce a given output.

“The method successfully accelerates simulations by up to 2 billion times in 10 scientific cases,” the researchers wrote in the abstract of their paper, “including astrophysics, climate science, biogeochemistry, high energy density physics, fusion energy, and seismology, using the same super-architecture, algorithm, and hyperparameters.” More impressively still, the resulting emulators – which ran fastest on GPUs – achieved extremely high levels of accuracy, with outputs reaching 99.9% identical in the case of the astronomy simulation.

Simulation outputs compared to emulator outputs.

“Compared with other non-deep learning techniques usually employed in building emulators,” the researchers wrote, “the models found and trained by DENSE achieved the best results in all tested cases, and in most cases by a significant margin.” “It’s a really cool result,” said Laurence Perreault-Levasseur, an astrophysicist at the University of Montreal, in Science.
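To make the emulator concept described above concrete – run the expensive simulation to collect input/output pairs, fit a cheap model to them, then query the model instead of the simulation – here is a minimal sketch. Everything in it (the stand-in "simulation" and the linear surrogate) is invented for illustration; the paper's actual emulators are deep neural networks, not linear fits.

```python
import math
import random

def expensive_simulation(x):
    # Stand-in for a costly physics code: here just a smooth 1-D function.
    return 3.0 * x + 2.0 + 0.1 * math.sin(5 * x)

# 1) Run the real simulation to collect (input, output) training pairs.
random.seed(0)
xs = [random.uniform(0.0, 1.0) for _ in range(200)]
ys = [expensive_simulation(x) for x in xs]

# 2) Fit a cheap surrogate (here: a linear model via gradient descent).
w, b = 0.0, 0.0
lr = 0.1
for _ in range(2000):
    gw = sum((w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    gb = sum((w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * gw
    b -= lr * gb

def emulator(x):
    # 3) Predict outputs directly, without re-running the simulation.
    return w * x + b

err = max(abs(emulator(x) - expensive_simulation(x)) for x in xs)
print(f"max training error: {err:.3f}")
```

The payoff is in step 3: once trained, the emulator answers queries at the cost of a few arithmetic operations, however expensive the original simulation was.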

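The article's one-line description of DENSE – repeatedly adding random layers and keeping an addition only when it improves performance – amounts to a greedy search over model structure. The toy sketch below mimics that keep-it-if-it-helps loop using basis functions in place of real neural-network layers; it is a loose analogy, not the authors' algorithm, and every name in it is hypothetical.

```python
import math
import random

random.seed(1)

# Target: stand-in for an expensive simulation's input -> output map.
xs = [i / 50 for i in range(51)]
ys = [math.sin(3 * x) + 0.5 * x for x in xs]

# Candidate "layers" the search may insert (hypothetical basis functions).
candidates = [("x", lambda x: x),
              ("x^2", lambda x: x * x),
              ("sin(3x)", lambda x: math.sin(3 * x)),
              ("cos(3x)", lambda x: math.cos(3 * x))]

def rms(v):
    return math.sqrt(sum(e * e for e in v) / len(v))

residual = ys[:]   # the part of the target the current model cannot explain
model = []         # accepted (name, coefficient) terms

for _ in range(20):
    name, g = random.choice(candidates)          # propose a random addition
    gx = [g(x) for x in xs]
    coef = sum(r * v for r, v in zip(residual, gx)) / sum(v * v for v in gx)
    new_res = [r - coef * v for r, v in zip(residual, gx)]
    if rms(new_res) < rms(residual) - 1e-6:      # keep it only if it helps
        model.append((name, coef))
        residual = new_res

print("accepted terms:", [n for n, _ in model])
print("remaining error (rms):", round(rms(residual), 4))
```

The design point the analogy preserves is that each candidate addition is evaluated empirically against held data, and unhelpful additions are simply discarded, so the search never commits to structure that does not pay for itself.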
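On the inverse-problem point: because an emulator is cheap to evaluate, one can afford to scan many candidate inputs until the predicted output matches an observed target. A minimal sketch, assuming a hypothetical already-trained emulator (the analytic stand-in below replaces a trained network):

```python
# Hypothetical trained emulator (a fixed analytic stand-in for illustration).
def emulator(x):
    return 3.0 * x + 2.0

target_output = 3.5  # the observed output we want to explain

# Cheap emulator evaluations make brute-force input search affordable.
best_x = min((i / 1000 for i in range(1001)),
             key=lambda x: abs(emulator(x) - target_output))
print(best_x)  # -> 0.5, since 3 * 0.5 + 2 = 3.5
```

Real inverse problems over many parameters would use gradient-based or Bayesian search rather than a grid, but the enabling ingredient is the same: a surrogate fast enough to be queried thousands of times.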