(C) Idaho Capital Sun
This story was originally published by Idaho Capital Sun and is unaltered.



How AI helps to solve a big problem with small earthquakes in Yellowstone [1]

By Alysha Armstrong, Michael Poland, Sylvia Nicovich, and Shaul Hurwitz

Date: 2025-07-28

Although they mostly go unnoticed by humans, small earthquakes occur much more frequently than large earthquakes, and knowing more about these tiny seismic events can help us better understand the earthquake hazard and geological processes occurring in a region. Using conventional methods to measure the magnitude of small earthquakes in Yellowstone, however, can be challenging. But perhaps artificial intelligence (AI) approaches offer a solution. After all, AI is already helping to refine earthquake location procedures in Yellowstone.

Earthquake magnitudes are calculated from the energy released by the earthquake as recorded by a seismometer. In Yellowstone, the University of Utah Seismograph Stations (UUSS) operates a network of seismometers to monitor earthquakes in the area. Generally, magnitude measurements for a single earthquake are made at several stations in the network independently, and the estimates are then averaged into the final magnitude that is reported. Accurately computing magnitude values for small earthquakes becomes challenging when 1) there are not enough measurements, or 2) earthquakes are happening close together. This is a particular problem during swarms of small earthquakes because the signals from individual seismic events can overlap. Usually, this can be fixed by a seismic analyst after they locate the earthquake, but not always! About 2% of the earthquakes in the UUSS catalog do not have a magnitude computed, likely because of a combination of these issues, so the value is reported as -9.99. To solve this problem, let’s reach into our AI toolkit!
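To make the bookkeeping concrete, here is a minimal sketch of how per-station estimates might be averaged into a single reported magnitude, with the -9.99 placeholder used when the value cannot be computed. Python is used here purely for illustration, and the three-measurement minimum is an assumption, not a documented UUSS threshold.

```python
# Minimal sketch: average independent station magnitudes into one
# reported network magnitude. The -9.99 placeholder follows the text;
# MIN_MEASUREMENTS is an assumed threshold for illustration only.
MISSING = -9.99        # value reported when no magnitude can be computed
MIN_MEASUREMENTS = 3   # assumed minimum number of usable station estimates

def network_magnitude(station_magnitudes):
    """Average per-station magnitudes; None marks an unusable measurement."""
    usable = [m for m in station_magnitudes if m is not None]
    if len(usable) < MIN_MEASUREMENTS:
        return MISSING                    # too few measurements
    return sum(usable) / len(usable)      # simple average across stations

# An overlapping swarm event masked the signal at some stations:
print(network_magnitude([1.2, 1.4, None, 1.3]))    # 1.3
print(network_magnitude([0.8, None, None, None]))  # -9.99
```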

Most people are likely familiar with complicated “deep learning” models, like ChatGPT, that accept and output complex data like long text sequences and images. The deep learning models we use in processing small earthquakes are similar, and they take ground motion data from seismometers as the input. Deep learning models like these are a type of machine learning, which describes algorithms that learn patterns in a dataset to estimate values of interest for new data. The models learn the patterns during a training phase, in which the model is provided with examples — sort of like a test with an answer key. After training, the model can accept inputs it has never seen before and estimate the output, given what it learned from the training data.
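The train-then-predict workflow described above can be sketched in just a few lines. The random data and the scikit-learn model below are stand-ins chosen purely for illustration; the article does not say which software UUSS uses.

```python
# Sketch of supervised learning: fit on labeled examples (the "answer
# key"), then predict for inputs the model has never seen before.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))                   # training inputs
y_train = X_train @ np.array([0.5, -1.0, 0.2, 0.0])   # training "answers"

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)        # training phase: learn the pattern

X_new = rng.normal(size=(3, 4))    # unseen inputs
print(model.predict(X_new))        # estimated outputs
```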

There is also a somewhat simpler, though still powerful, type of machine learning that makes predictions from human-defined features describing the data rather than from the more complicated raw data itself. In a recent study, UUSS scientists used this method to train models that calculate earthquake magnitudes from short windows of data, so closely spaced earthquakes generally do not pose a problem.
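As a hypothetical illustration of human-defined features computed from a short window of ground motion, consider the sketch below. The specific features and the 3-second window length are assumptions for illustration, not those used in the UUSS study.

```python
# Hypothetical features describing a short window of seismometer data.
# Because the window is short, a later overlapping event does not
# contaminate the measurements.
import numpy as np

def window_features(velocity, dt, t_pick, window_s=3.0):
    """Simple descriptive features from a short window after a wave arrival.

    velocity : ground-motion samples; dt : sample spacing in seconds;
    t_pick   : arrival time of the wave, in seconds from trace start.
    """
    i0 = int(t_pick / dt)
    w = velocity[i0 : i0 + int(window_s / dt)]
    return {
        "peak_amplitude": float(np.max(np.abs(w))),           # largest amplitude
        "log_energy": float(np.log10(np.sum(w**2) + 1e-20)),  # energy proxy
    }
```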

In the new approach, the UUSS scientists trained one machine learning model for each station in the Yellowstone region using data from the UUSS earthquake catalog. Each model uses features describing the earthquake signal, such as the amplitude, along with the location of the earthquake to estimate a magnitude value. The new method makes better use of the available data by accounting for multiple types of seismic waves, and it can also take advantage of data from more seismic stations because of the rigorous training step. The net result is up to four times as many measurements available to calculate a magnitude. As in the conventional approach, these measurements are combined to determine a final magnitude.
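A rough sketch of that per-station scheme might look like the following. The gradient-boosting regressor and the exact feature inputs are illustrative guesses, since the article does not name the algorithm; what matters is the structure: one trained model per station, and an average over the per-station predictions.

```python
# One regression model per station, trained on signal features plus the
# earthquake location; per-station predictions are then averaged.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def train_station_models(catalog):
    """catalog: {station_name: (X, y)} with X of shape (n_events, n_features)
    holding features such as amplitude, latitude, longitude, and depth."""
    models = {}
    for station, (X, y) in catalog.items():
        m = GradientBoostingRegressor(random_state=0)
        m.fit(X, y)            # per-station training step
        models[station] = m
    return models

def estimate_magnitude(models, features_by_station):
    """Average per-station estimates into a final magnitude."""
    preds = [models[s].predict(x.reshape(1, -1))[0]
             for s, x in features_by_station.items() if s in models]
    return float(np.mean(preds)) if preds else None
```

Because each seismic wave type at each station can contribute its own measurement under this scheme, a single event can accumulate several times more usable estimates than under the conventional approach.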

The new method will ultimately complement, not replace, the traditional approach to magnitude calculation. This is because traditional methods work very well most of the time (except for these small, closely spaced events), and because the machine learning approach has its own limitations: the models will only work well for earthquakes that resemble those in the training dataset. For example, a model may fail to estimate the magnitude of an earthquake occurring near Hebgen Lake if it saw very few training examples from that area. Similarly, if most training examples were greater than M0.5, the model may perform poorly when applied to earthquakes smaller than M0.5. Combining predictions from multiple station models can help identify and remove poor magnitude estimates, but it can be challenging to know when the models are uncertain. In the future, UUSS scientists plan to expand the approach to provide not only a magnitude but also an assessment of the confidence in that magnitude.
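One simple, hypothetical way to identify and discard poor per-station estimates is an outlier-resistant average, with the spread of the surviving predictions serving as a rough confidence measure. The median-absolute-deviation threshold below is an assumption, not the confidence method UUSS plans to develop.

```python
# Combine per-station predictions robustly: discard outliers relative to
# the median, then report the mean and spread of what remains.
import numpy as np

def robust_combine(preds, k=2.0):
    preds = np.asarray(preds, dtype=float)
    med = np.median(preds)
    mad = np.median(np.abs(preds - med)) + 1e-6   # median absolute deviation
    kept = preds[np.abs(preds - med) <= k * mad]  # drop outlying stations
    return kept.mean(), kept.std()                # magnitude and rough spread

mag, spread = robust_combine([1.10, 1.20, 1.15, 2.90])
print(f"M{mag:.2f} +/- {spread:.2f}")  # the 2.90 outlier is discarded
```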

These machine learning methods are at the current cutting edge of seismology, and Yellowstone provides the perfect location to train and test the new approaches!

Yellowstone Caldera Chronicles is a weekly column written by scientists and collaborators of the Yellowstone Volcano Observatory.

---
[1] Url: https://idahocapitalsun.com/2025/07/28/how-ai-helps-to-solve-a-big-problem-with-small-earthquakes-in-yellowstone/

Published and (C) by Idaho Capital Sun under Creative Commons BY-NC-ND 4.0.
