The last decade has seen a surge in the number of great earthquakes (magnitude M ≥ 8), including three of the six largest events on record in the past century. These events have prompted speculation that large earthquakes cluster in time on a global scale, which would imply that global seismic hazard is currently elevated. Recent studies have addressed this question by applying several statistical tests that compare the earthquake catalog to a process that is random in time (i.e., event times are uncorrelated). These studies find that the earthquake record does not deviate significantly from a random process and therefore does not support global clustering. Here we study the statistics of recurrence times between earthquakes, using the standard measure of fluctuations, the variance. At most magnitude thresholds, the data lie within the variability expected for a random process. However, we find evidence for a deviation from randomness among earthquakes above magnitude 8.4-8.5 after removing aftershocks, which are known to cluster in space and time near a mainshock. If we consider only data since 1950, when instrumentation worldwide improved significantly and event magnitudes became better constrained, the likelihood that the earthquake catalog is random becomes remarkably small (~1/1000). We attribute this nonrandom behavior to clustering of large earthquakes: the catalog contains two clusters of events (one in the 1950s-1960s and one from 2004 to the present) separated by a long period with no events. Our results have implications for how seismic hazard is estimated worldwide, suggesting the need to incorporate both local strain accumulation and interactions between large events.

Host: Kipton Barros, T-4 and CNLS
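The abstract describes testing whether recurrence times between great earthquakes deviate from a temporally random process, using the variance of inter-event times as the test statistic (clustered catalogs produce many short intervals and a few long ones, inflating the variance). The sketch below is a minimal, hypothetical illustration of such a test, not the authors' code: it compares the observed recurrence-time variance against Monte Carlo surrogate catalogs with uniformly random event times. The event years shown are placeholder values for illustration only.

```python
import numpy as np

def variance_test(event_times, span_years, n_surrogates=10000, seed=0):
    """Compare the variance of observed recurrence times against surrogate
    catalogs whose event times are drawn uniformly at random in the span.

    Returns the fraction of surrogates with recurrence-time variance at
    least as large as the observed one (a one-sided p-value; small values
    suggest temporal clustering)."""
    rng = np.random.default_rng(seed)
    event_times = np.sort(np.asarray(event_times, dtype=float))
    n = len(event_times)
    observed_var = np.var(np.diff(event_times))

    exceed = 0
    for _ in range(n_surrogates):
        # Surrogate catalog: same number of events, random times in the span.
        surrogate = np.sort(rng.uniform(0.0, span_years, size=n))
        if np.var(np.diff(surrogate)) >= observed_var:
            exceed += 1
    return exceed / n_surrogates

# Hypothetical example: years of M >= 8.4 events in a declustered catalog
# (illustrative values only, not the actual catalog discussed in the talk).
example_times = [1950.3, 1952.8, 1957.2, 1960.4, 1964.2,
                 2004.9, 2005.2, 2007.7, 2010.1, 2011.2]
p = variance_test(example_times, span_years=65.0)
print(f"Fraction of random surrogates with variance >= observed: {p:.4f}")
```

A small returned fraction would indicate that the observed recurrence-time variance is unlikely under a temporally random process, which is the kind of evidence for clustering described in the abstract.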