Publications List
Working together to assess risk from global to local: lessons from the Global Earthquake Model
Type:
Peer-reviewed
Reliable, high-quality risk assessment is the basis for an objective understanding of risk, Priority for Action 1 of the Sendai Framework for Disaster Risk Reduction 2015-2030. It is the foundation of decisions and actions that effectively build resilience. Earthquake risk continues to rise, yet reliable data, risk information, and assessment tools remain out of reach or under-utilised in many parts of the world. The Global Earthquake Model (GEM Foundation) was created to bridge these critical gaps. Through authentic collaboration across public and private stakeholders, the GEM community supports risk management and awareness by developing and implementing open risk assessment tools and by compiling and generating risk information. GEM influences risk reduction by promoting technology transfer and developing risk assessment capacity. All GEM risk assessment resources are made freely available through its web-based OpenQuake platform. As input to the Sendai Framework, this paper provides an overview of GEM's achievements to date and lessons learnt, emphasizing effective modes of collaboration and capacity development, and presents opportunities and challenges going forward.
This paper is from the Global Risk Forum Davos Planet@Risk Journal, which is no longer available.
Ranking and developing ground-motion models for Southeastern Africa
Type:
Peer-reviewed
The southern East African Rift System (EARS) is an early-stage continental rift with a deep seismogenic zone. It is associated with a low-to-moderate seismic hazard, but due to its short and sparse instrumental record, there is a lack of ground-motion studies in the region. Instead, seismic hazard assessments have commonly relied on a combination of active crustal and stable continental ground-motion models (GMMs) from other regions, without accounting for the unusual geological setting of this region or evaluating their suitability. Here, we use a newly compiled southern EARS ground-motion database to compare six active crustal GMMs and four stable continental GMMs. We find that the active crustal GMMs tend to underestimate the observed ground-motion intensities, while the stable continental GMMs overestimate them. This is particularly pronounced in the high-frequency intensity measures (>5 Hz). We also use the referenced empirical approach to develop a new region-specific GMM for southern EARS. Both the ranked GMMs and our new GMM result in large residual variabilities, highlighting the need for local geotechnical information to better constrain site conditions.
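The core of the referenced empirical approach mentioned above can be sketched briefly: regional observations are compared against a host GMM, and the mean log residual becomes an adjustment to the host model. The function name and all data values below are hypothetical, purely illustrative of the idea rather than the paper's actual model:

```python
import numpy as np

def referenced_empirical_adjustment(ln_obs, ln_host_pred):
    """Mean log residual between regional observations and a host
    ground-motion model, used as an additive adjustment in log space.
    Illustrative sketch only; not the paper's fitted model."""
    return np.mean(ln_obs - ln_host_pred)

# Hypothetical observed and host-model intensities (g) at comparable records
ln_obs = np.log(np.array([0.08, 0.05, 0.12, 0.07]))
ln_host = np.log(np.array([0.10, 0.06, 0.15, 0.09]))

delta = referenced_empirical_adjustment(ln_obs, ln_host)
# A new regional prediction would then be: ln_host_new_site + delta
```

Here the negative adjustment reflects the abstract's finding that some host models overestimate the observed motions.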
A Database and Empirical Model for Earthquake Post Loss Amplification
Type:
Peer-reviewed
The impact of destructive earthquakes may exceed the local capacity to cope with disasters and lead to an increase in reconstruction costs. This phenomenon is commonly termed post-loss amplification, and its main causes include increases in the cost of construction materials and labor due to sudden demand, the need to rebuild to higher standards, and other unexpected costs. We reviewed 70 past earthquakes to identify events where post-loss amplification was observed, and collected a set of seismogenic, socio-economic, geographical, and impact variables for those events. Using this database, we developed two models to predict post-loss amplification: one using a composite indicator that reflects the level of destruction in the region, and one using a parameter that characterizes the frequency of the event. This study indicates increased costs (>10%) for events whose economic losses exceed 1% of the regional gross domestic product, or for events with an estimated return period of at least 10 years. These models can be applied directly in the amplification of economic losses in earthquake scenarios or in probabilistic seismic risk assessment.
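The abstract's headline threshold (losses above ~1% of regional GDP show amplified costs of at least 10%) can be sketched as a simple step function. The functional form and the fixed 10% factor are illustrative assumptions; the paper's actual models are continuous and use additional variables:

```python
def post_loss_amplification(loss_to_gdp_ratio):
    """Hypothetical step sketch of the paper's finding: events whose direct
    economic loss exceeds ~1% of regional GDP exhibit amplified (>10%)
    reconstruction costs. Values are illustrative, not the fitted model."""
    if loss_to_gdp_ratio > 0.01:
        return 1.10  # at least 10% amplification
    return 1.0

# Hypothetical event: 500M USD direct loss in a 20B USD regional economy
direct_loss = 500e6
ratio = direct_loss / 20e9          # 2.5% of regional GDP
amplified_loss = direct_loss * post_loss_amplification(ratio)
```

In a scenario or probabilistic calculation, such a factor would simply scale the modelled direct economic losses.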
Earthquake scenarios for building portfolios using artificial neural networks: Part I – Ground motion modelling
Type:
Peer-reviewed
The calculation of the spatial distribution of ground motion is one of the most important steps in earthquake scenarios. The advancements in machine learning algorithms and the release of new ground motion data can enable improvements in the reliability and accuracy of this component. We developed an artificial neural network (ANN) ground motion model using a compiled database from two subsets with measured Vs30 from the Pan-European strong motion database and the NGA-West2 database. The ANN model employs five input parameters: moment magnitude (Mw), hypocentral depth, Joyner–Boore distance (Rjb), shear wave velocity in the top 30 m (Vs30), and faulting type. The outputs of the ANN are the RotD50 horizontal components of common intensity measures used in seismic risk assessment: PGA, PGD, PGV, Arias Intensity, and 5% damped spectral acceleration at 27 periods from 0.01 to 4.0 s. A mixed-effects modelling approach was followed to train the ANN and partition the ground motion variability into between-event and between-site terms. The input parameter scaling relationships were studied and demonstrated physically sound trends of the ground motion with respect to Mw scaling, distance attenuation, and site amplification. The predicted median response spectra for several combinations of Mw and Rjb were compared to common ground motion models for Europe, and the results are discussed. The developed ANN is used in a companion study to calculate the hazard footprints for several historical earthquakes in the Balkan region.
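The structure of such an ANN ground-motion model can be sketched as a single-hidden-layer forward pass over the five inputs named above. The weights below are random placeholders (the trained model, its architecture, and its input normalization are not reproduced here), so the outputs are meaningless; only the shape of the computation is illustrated:

```python
import numpy as np

def ann_gmm_forward(x, W1, b1, W2, b2):
    """Minimal forward pass of a one-hidden-layer ANN ground-motion model.
    Real models normalize inputs and use trained weights; these are
    random placeholders for illustration."""
    h = np.tanh(x @ W1 + b1)   # hidden layer
    return h @ W2 + b2         # log intensity measures

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 5, 16, 31   # 31 = PGA, PGD, PGV, Arias + 27 SA periods
W1 = rng.normal(size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, n_out)); b2 = np.zeros(n_out)

# Mw=6.5, depth=10 km, Rjb=20 km, Vs30=400 m/s, faulting type encoded as 0
x = np.array([6.5, 10.0, 20.0, 400.0, 0.0])
y = ann_gmm_forward(x, W1, b1, W2, b2)
```

The mixed-effects training that partitions variability into between-event and between-site terms operates on the residuals of such a forward pass and is not shown.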
Earthquake scenarios for building portfolios using artificial neural networks: Part II – Damage and loss assessment
Type:
Peer-reviewed
Seismic risk assessment of building portfolios has been traditionally performed using empirical ground motion models, and scalar fragility and vulnerability functions. The advent of machine learning algorithms in earthquake engineering and ground motion modelling has demonstrated promising advantages. The aim of the present study is to explore the benefits of employing artificial neural networks (ANNs) in earthquake scenarios for spatially-distributed building portfolios. To this end, several recent major seismic events in the Balkan region were selected to assess damage and economic losses, considering different modelling approaches. For the assessment of the seismic demand, the ANN developed in the companion study and common ground motion models for Europe were adopted. For the vulnerability component, recent ANN models and existing scalar fragility and vulnerability functions for the Balkans were used. The estimates of all modelling cases were compared against the aggregated damage and economic loss data observed in the aftermath of these events. The findings of this study suggest that overall, the ANNs led to damage and economic loss estimates closer to the observations.
Advancing nearshore and onshore tsunami hazard approximation with machine learning surrogates
Type:
Peer-reviewed
Probabilistic tsunami hazard and risk assessment (PTHA and PTRA) are vital methodologies for quantifying tsunami risk and prompting measures to mitigate impacts. At large regional scales, their use and scope are currently limited by the computational cost of the numerically intensive simulations behind them, which may be feasible only with advanced computational resources such as high-performance computing (HPC) and may still require reductions in resolution, reductions in the number of scenarios modelled, or the use of simpler approximation schemes. To conduct PTHA and PTRA for large proportions of the coast, we therefore need to develop concepts and algorithms that reduce the number of events simulated and approximate the needed simulation results more efficiently. This case study for a coastal region of Tohoku, Japan, utilises a limited number of tsunami simulations from submarine earthquakes along the subduction interface to build a wave propagation and inundation database, and fits these simulation results with a machine learning-based variational encoder-decoder model. This model is used as a surrogate to predict the tsunami waveform at the coast and the maximum inundation depths onshore at the different test sites. The performance of the surrogate models was assessed using 5-fold cross-validation across the simulation events. Further, to test the generalisability of the model and understand its real-world performance, we used five very different tsunami source models from the literature for historic events to benchmark the model and identify its current deficiencies.
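The leave-events-out 5-fold cross-validation mentioned above splits the simulated source events so each is held out exactly once. A minimal sketch of such a split (event count and seed are arbitrary):

```python
import numpy as np

def k_fold_indices(n_events, k=5, seed=0):
    """Shuffle event indices and split them into k disjoint folds.
    Each fold serves once as the validation set while the surrogate
    is trained on the remaining k-1 folds."""
    idx = np.random.default_rng(seed).permutation(n_events)
    return [np.sort(fold) for fold in np.array_split(idx, k)]

folds = k_fold_indices(n_events=100, k=5)
# Train on 4 folds, validate on the held-out fold, rotate 5 times
```

Splitting by whole source events (rather than by individual grid points) is what makes the assessment test generalisation to unseen earthquakes.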
A Building Imagery Database for the Calibration of Machine Learning Algorithms
Type:
Peer-reviewed
In recent decades, most efforts to catalog and characterize the built environment for multi-hazard risk assessment have focused on the exploration of census data, cadastral data sets, and local surveys. Typically, these sources of information are not updated regularly and lack sufficient information to characterize the seismic vulnerability of the building stock. Some recent efforts have demonstrated how machine learning algorithms can be used to automatically recognize specific architectural and structural features of buildings. However, such methods require large sets of labeled images to train, verify, and test the algorithms. This article presents a database of 5276 building images from a parish in Lisbon (Alvalade), whose buildings have been classified according to a uniform taxonomy. This database can be used for the testing and calibration of machine learning algorithms, as well as for the direct assessment of earthquake risk in Alvalade. The data are accessible through an open GitHub repository (DOI: 10.5281/zenodo.7625940).
Fault Source Models Show Slip Rates Measured across the Width of the Entire Fault Zone Best Represent the Observed Seismicity of the Pallatanga–Puna Fault, Ecuador
Type:
Peer-reviewed
We explore how variation of slip rates in fault source models affects computed earthquake rates of the Pallatanga–Puna fault system in Ecuador. Determining which slip rates best represent fault‐zone seismicity is vital for use in probabilistic seismic hazard assessment (PSHA). However, given the variable spatial and temporal scales over which slip rates are measured, significantly different rates can be observed along the same fault. The Pallatanga–Puna fault in southern Ecuador exemplifies a fault where different slip rates have been measured using methods spanning different spatial and temporal scales, and in which historical data and paleoseismic studies provide a record of large earthquakes over a relatively long time span. We use fault source models to calculate earthquake rates using different slip rates and geometries for the Pallatanga–Puna fault, and compare the computed magnitude–frequency distributions (MFDs) to earthquake catalog MFDs from the fault zone. We show that slip rates measured across the entire width of the fault zone, whether based on geodesy or long‐term geomorphic offsets, produce computed MFDs that compare more favorably with the catalog data. Moreover, we show that the computed MFDs fit the earthquake catalog data best when they follow a hybrid‐characteristic MFD shape. These results support the hypothesis that slip rates derived from a single strand of a fault system do not represent the seismicity produced by the entire fault zone.
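The link between a slip rate and computed earthquake rates rests on moment balancing: the slip rate sets a seismic moment rate, which is distributed over a magnitude–frequency distribution. The sketch below uses a plain truncated Gutenberg–Richter MFD with illustrative parameter values; it is a generic moment-balancing scheme, not the paper's exact workflow or its hybrid-characteristic MFD:

```python
import numpy as np

MU = 3.2e10  # assumed crustal shear modulus, Pa

def gr_rates_from_slip(slip_rate_mm_yr, fault_area_m2, b=1.0,
                       m_min=5.0, m_max=7.5, dm=0.1):
    """Convert a fault slip rate into a moment rate and distribute it
    over a truncated Gutenberg-Richter MFD (illustrative sketch only)."""
    moment_rate = MU * fault_area_m2 * slip_rate_mm_yr * 1e-3   # N*m/yr
    mags = np.arange(m_min, m_max + dm / 2, dm)
    rel = 10.0 ** (-b * mags)                # relative GR occurrence rates
    moments = 10.0 ** (1.5 * mags + 9.05)    # moment per event (N*m)
    scale = moment_rate / np.sum(rel * moments)
    return mags, scale * rel                 # annual rate per magnitude bin

# Hypothetical fault: 5 mm/yr slip on a 100 km x 15 km rupture plane
mags, rates = gr_rates_from_slip(5.0, 100e3 * 15e3)
```

Doubling the slip rate doubles every bin's rate, which is why choosing a whole-fault-zone rate versus a single-strand rate shifts the computed MFD relative to the catalog.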
A relocated earthquake catalog and ground motion database for the southern East African rift system
Type:
Peer-reviewed
The southern East African rift system (EARS) is geologically rare, combining an early-stage continental rift setting with a deep seismogenic zone. Several seismically vulnerable communities are located within this tectonically active region, resulting in a significant seismic risk. However, the ground motion and seismic hazard analyses necessary to improve earthquake preparedness in the region have been limited by the relatively short instrumentation history and the scarce ground motion data available. Here, we present a newly compiled ground motion database for the southern EARS, which has been critically lacking and has prevented local ground motion studies. This database includes a regional catalog of 882 earthquakes spanning 1994–2022 (magnitudes 3–6.5) with available waveform records within epicentral distances of 300 km. Three different velocity models were used to relocate 256, 255, and 252 events, respectively, to quantify depth sensitivity, relocating events down to depths of 35–40 km. The final database contains 10,725 time-series records from 353 stations along with P- and S-wave phase arrivals for each record. The ground motion database contains peak ground acceleration and velocity and 5% damped pseudo-spectral acceleration for 291 frequencies from 1.0 to 30 Hz for the horizontal components. In addition, a Fourier amplitude spectrum table for 212 frequencies from 0.1 to 30 Hz is included. The database is accessible through the ISC repository (https://doi.org/10.31905/4GGVBFBE).
Calculation of National Seismic Hazard Models with Large Logic Trees: Application to the NZ NSHM 2022
Type:
Peer-reviewed
National‐scale seismic hazard models with large logic trees can be difficult to calculate using traditional seismic hazard software. To calculate the complete 2022 revision of the New Zealand National Seismic Hazard Model—Te Tauira Matapae Pūmate Rū i Aotearoa, including epistemic uncertainty, we have developed a method in which the calculation is broken into two separate stages. This method takes advantage of logic tree structures that comprise multiple, independent logic trees from which complete realizations are formed by combination. In the first stage, we precalculate the independent realizations of the logic trees. In the second stage, we assemble the full ensemble of logic tree realizations by combining components from the first stage. Once all realizations of the full logic tree have been calculated, we can compute aggregate statistics for the model. This method benefits both from the reduction in the amount of computation necessary and its parallelism. In addition to facilitating the computation of a large seismic hazard model, the method described can also be used for sensitivity testing of model components and to speed up experimentation with logic tree structure and weights.
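The combination step described above (precompute realizations of independent component logic trees, then form full realizations as products) can be sketched with two toy trees. All branch names, weights, and hazard values below are hypothetical, and the real calculation operates on full hazard curves rather than scalars:

```python
from itertools import product

# Stage 1: precomputed realizations of two independent component trees.
# Tuples are (branch name, weight, precomputed scalar "hazard") - all values
# hypothetical; in practice each realization carries full hazard curves.
source_branches = [("src_A", 0.6, 0.30), ("src_B", 0.4, 0.35)]
gmm_branches = [("gmm_X", 0.5, 1.0), ("gmm_Y", 0.5, 1.2)]

# Stage 2: assemble full realizations by combination; weights multiply
# because the component trees are independent.
full = []
for (s, ws, hz), (g, wg, f) in product(source_branches, gmm_branches):
    full.append((f"{s}|{g}", ws * wg, hz * f))

# Aggregate statistics over the full ensemble of realizations
total_weight = sum(w for _, w, _ in full)
mean_hazard = sum(w * h for _, w, h in full)
```

The computational saving comes from stage 1: each component realization is calculated once and reused across every full realization it appears in, and the combinations in stage 2 are cheap and trivially parallel.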