2021 – EDITH kickoff meeting – 14-16 April

Here we are! The (virtual) rooms are booked and we are now opening registration for the upcoming EDITH kickoff meeting. You can register HERE!

A preliminary Meeting Program can be found here.

We have invited eight keynote speakers who will introduce the main themes of EDITH and provide some really interesting case studies. See the list below!

We also asked them to give their feedback on some of our questions. Let's start the discussion now! In the registration form you'll be able to pose up to three questions specifically addressed to each of the speakers. Get inspired by the abstracts and texts below! 😉

We'll collect your inquiries and forward a summary of them to the speakers.

Our key questions:

UNCERTAINTY – What is the major limitation that your approach has to face? At present, do you see major sources of epistemic uncertainty in your approach?

HORIZONS – What would be a major step forward in your approach to earthquake geology? If resources did not matter, which approach would you choose to decipher the seismic cycle? Which novel methodology could be applied?

TIME SCALES – Do you think it is more promising to decipher one or two seismic cycles in as much detail as possible (instrumentally), or to obtain statistically robust recurrence intervals for longer timescales (via paleoseismology)? Why? Are we close to the level at which we can provide reliable datasets for the probabilistic prediction of earthquakes? Could you envisage new paleoseismological methods to find long-term records? Which target faults would you go to?

COLLABORATION – In EDITH, we identified four major approaches (i.e., paleoseismology, archeoseismology, geodesy and tectonic geomorphology): which of the other approaches could yours benefit from? Could this help in resolving the uncertainties you described above?



Jim McCalpin

GEO-HAZ Consulting, P.O. Box 837, Crestone, CO 81131 USA; mccalpin@geohaz.com

“The nature of distributed faulting, as illuminated by sackungen and fracture network analysis”

At present, future distributed faulting (DF) can only be predicted probabilistically based on empirical datasets. The algorithms predict the probability and displacement of DF as it decreases away from the principal fault, but do not consider along-strike variability or the length/continuity/pattern of DF faults. That is because there is no agreement on the physical mechanism that creates DF. Is DF created by coseismic rebound of the strained crustal blocks on either side of the fault? Or by the impact of coseismic P and S waves on the surface, as in an explosion? During a study of coseismic sackungen (a type of DF) created during the 2002 Denali, Alaska earthquake (M7.9), I measured numerous 2D spatial parameters of the swarms of sackung scarps, looking for a single numerical parameter that would describe the geometry of a swarm. I did not find one, but did discover the field of Fracture Network Analysis, which contains a host of 2D spatial metrics for describing sets of faults/fractures. For example, the length-frequency and length-aperture relationships for rock fractures (joints) are fractal. Semi-variograms describe the spatial clustering of fractures. Can these be used to describe DF?
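To give a flavour of what one of these fracture network metrics looks like in practice, here is a minimal sketch (our illustration, not Jim's code) that estimates a fractal length-frequency exponent from a hypothetical sample of fracture or scarp lengths via log-log regression of the cumulative length distribution:

```python
import numpy as np

def length_frequency_exponent(lengths):
    """Estimate a power-law (fractal) exponent for a set of fracture
    lengths via log-log regression of the complementary cumulative
    frequency -- one of the 2D metrics used in fracture network analysis."""
    lengths = np.sort(np.asarray(lengths, dtype=float))
    ccdf = np.arange(len(lengths), 0, -1)  # number of fractures >= each length
    slope, _ = np.polyfit(np.log10(lengths), np.log10(ccdf), 1)
    return -slope

# Synthetic (hypothetical) Pareto-distributed lengths with true exponent 1.5
rng = np.random.default_rng(0)
sample = 10.0 * (1.0 - rng.random(5000)) ** (-1.0 / 1.5)
exponent = length_frequency_exponent(sample)
```

In a real application one would fit only above a completeness cutoff and could compare exponents between principal-fault and DF arrays.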

Jim's thoughts on our questions…

  • UNCERTAINTY – In PFDHA, the current approach for both principal and distributed faulting (DF) is strictly empirical; it has no physical basis. The current uncertainty is very high, but the new very detailed rupture datasets (from lidar and drones) are driving down the aleatory uncertainty. In contrast, the model-driven (epistemic) uncertainty has not been addressed at all for DF; we have not developed any physical models. This must change.
  • HORIZONS – The 3D spatial pattern of surface ruptures must hold clues to the physical mechanism(s) that produced them. Once we understand the physical mechanisms, we will be able to eliminate most of the prediction uncertainty that currently exists in PFDHA. Other research groups have studied the 3D spatial patterns of fractures in rock, and have developed the field of Fracture Network Analysis. We should test that against PF and DF fault arrays.
  • TIME SCALES – I find longer timescales more promising because after each characteristic earthquake, the constellation of locked asperities across the fault plane shifts. This changes the locations of all asperities and their failure strengths, which controls the initiation point of fault slip and the cascading failure of all slightly-stronger asperities that leads to a large earthquake rupture. We cannot image the new constellation of asperities from surface geophysics, nor measure the stress state on all of them. So we will never be able to predict the time of failure of a single seismic cycle. All we can do is assess the total variability of the renewal process, without knowing the variability of its components. The easiest long-term records to obtain are records of off-fault deformation events in environments of continuous deposition, such as lakes and oceans. The Dead Sea Rift contains a dated record of 50 consecutive "seismites" created by strong earthquake shaking. But with the multiple faults in the rift, it is unknown which fault caused each seismite. So the timing, recurrence, and shaking strength cannot be assigned to a specific fault, and thus do not assist seismic source characterization in PSHA. However, the shaking record is still valid for the site of the seismite outcrop. It can then be used as a reality check on ground motions predicted for that site from a rift-wide PSHA.
  • COLLABORATION – I find the studies on fracture network analysis the most promising.
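Jim's point about assessing "the total variability of the renewal process" can be illustrated with a toy renewal calculation. The sketch below (our example; all parameter values are invented) computes the conditional probability of rupture in the next time window under a lognormal recurrence model:

```python
import math

def conditional_probability(elapsed, window, mean_ri, cov):
    """Conditional probability of rupture in the next `window` years,
    given `elapsed` years since the last event, for a lognormal renewal
    model with mean recurrence `mean_ri` and coefficient of variation `cov`."""
    sigma = math.sqrt(math.log(1.0 + cov**2))
    mu = math.log(mean_ri) - 0.5 * sigma**2

    def cdf(t):  # lognormal CDF
        return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

    # P(event in (elapsed, elapsed + window] | no event up to elapsed)
    return (cdf(elapsed + window) - cdf(elapsed)) / (1.0 - cdf(elapsed))

# Illustrative numbers only: 800 yr elapsed, 1000 yr mean recurrence
p = conditional_probability(elapsed=800.0, window=50.0, mean_ri=1000.0, cov=0.5)
```

The point of the exercise: the answer depends entirely on the assumed variability (`cov`), which is exactly the component Jim argues we cannot resolve for a single cycle.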

Your questions to Jim:

Is it possible to use the amount of crustal deformation during earthquakes as the basis for a new magnitude scale? (Leonello Serva)

Given that the recurrence interval of earthquakes depends on the tectonic regime, based on your experience, is it sufficient to consider only the last 40 thousand years to understand if a fault is active and capable in a country like Italy? (Valerio Comerci)

From my experience I see that a rupturing process can produce either a single fault with a large displacement or distributed faulting, where we see many sub-parallel fractures with small slips. Which factors do you think influence this during rupture? (Oksana Lunina)

Do you think we need very detailed paleo-rupture datasets (from lidar and drones)? I suppose this will have little effect on the already estimated magnitude of the earthquake. But do you think it might be important for predicting the width of the rupture zone, or the area covered by surface rupturing during an earthquake? Maybe for that we need a relationship between M and the width of the surface rupture zone (here I mean "width" in plan view). (Oksana Lunina)

I like the approach of using fracture network analysis in SHA and wanted to ask whether you believe fractal distributions will always be the most appropriate to describe off-fault damage, or whether network properties may instead have 'characteristic' values based on external factors (e.g. lithology, mechanical boundaries etc.). (Billy Andrews)

As with the fractal pattern of fractures and faults in DF, would you expect to find a similar or other numerical relationship between the different colluvial wedges? (Ambrosio Vega)

To what extent can DF alter the output of a paleoseismological investigation in terms of, e.g., recurrence interval, slip per event, and elapsed time? (Maria Francesca Ferrario)

Do you think distributed faulting is primarily a reflection of how lithospheric plates interact and create a family of fractures that interact and cause distributed faulting? Genetic defects in rocks, along with lithological contacts, enhance the nucleation of fractures, which facilitates distributed faults. (Afroz Shah)

What are the possible mechanisms of fault reactivation in DF? (Pierfrancesco Burrato)

Can the paleoseismological record of a fault be characterised by a single trench study? What ratio of trenches to fault length would be needed for an accurate paleoseismological analysis of one fault? (Alba Peiro)

Tamarah King


How can insights from Australia’s record of reverse faulting shape the future of SHA?

Australia, a low-strain stable continental region, has a surprisingly rich record of well-preserved historic and paleoseismic reverse faulting. This record holds important insights into distributed deformation, bedrock controls on faulting, environmental earthquake effects, strong ground motions, and methods of paleoseismology. While high-strain regions are necessarily areas of great interest for earthquake and fault studies, low-strain regions also have insights to offer regarding the interaction between strain, seismicity, geomorphology, and time. Therefore, lessons from Australia can (and should) feed into next-generation seismic hazard models, particularly for low and moderate strain-rate regions.

Tamarah’s thoughts on our questions…

  • UNCERTAINTY – Reliable long-term paleoseismic slip histories require expensive and lengthy field programs, trenches, and dating. Questions of uncertainty and fault behavior will resolve once more faults are comprehensively studied. The major limit of paleoseismology is therefore the availability of time and money to conduct comprehensive studies. If only one prior earthquake can be identified/dated, the slip rate derived from that data has large epistemic uncertainties because the recurrence interval does not 'end' at the modern day. Similarly, the assumption of episodic rupture is itself largely uncertain, so even if two events can be dated, this interval is not necessarily representative of true recurrence.
  • HORIZONS – Identifying and dating multiple slip events on large and small faults globally, to understand how reliable the ‘episodic recurrence’ assumption is outside of high-strain regions. In higher strain regions where seismic cycles are shorter, strain-rate maps from InSAR provide a great opportunity to image the potential seismic cycle. In lower strain regions, I think traditional paleoseismology is the best way to decipher the full seismic cycle. Particularly if novel methods were developed to accurately (and cheaply) date more trench sediments, and to dig trenches to greater depths in many more locations.
  • TIME SCALES – This is dependent on the location. If you're sitting on a plate boundary, sure, instrumental records might capture recurrence. But for most other regions, paleoseismology and/or tectonic geomorphology are the best way to accurately capture recurrence (or, at least, for the next 1 – 100 ka, until instrumental seismology catches up!). I personally think 'probabilistic prediction' is a contradiction in terms. However, I believe we are getting ever closer to more accurate probabilistic forecasts of earthquake hazard through space and time. In Australia we have applied erosion rate estimates to determine the residence time of surface offsets as an estimate of slip rate. However, this requires a steady-state assumption of erosion and is only really applicable to bedrock scarps. This method may still prove useful in other low-strain regions though, particularly if more robust erosion rate methods could be applied.
  • COLLABORATION – The regions I work in (Australia and Central Asia) have long human-occupancy histories that are underexplored. These areas could benefit from archeological or oral-history approaches to understanding earthquake cycles. In the Australian context, oral histories may help us narrow down regions of higher seismicity over longer time-periods than the instrumental record. However, it seems unlikely that these histories could uncover specific fault activity at specific times.
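The erosion-rate approach Tamarah mentions for bedrock scarps reduces to a back-of-the-envelope calculation. The sketch below (our illustration with made-up numbers, under the steady-state erosion assumption she flags) turns a scarp height and an erosion rate into a residence time and an implied slip rate:

```python
def scarp_residence_time_kyr(scarp_height_m, erosion_rate_m_per_myr):
    """Upper bound on how long a bedrock scarp of this height can survive
    at a steady-state erosion rate; a proxy for the age of the offset."""
    return scarp_height_m / erosion_rate_m_per_myr * 1000.0  # kyr

def slip_rate_mm_per_yr(offset_m, residence_time_kyr):
    """Minimum slip rate if the offset accrued within the residence time.
    Note: m per kyr is numerically equal to mm per yr."""
    return offset_m / residence_time_kyr

# Hypothetical values: a 2 m scarp eroding at 5 m/Myr
t = scarp_residence_time_kyr(scarp_height_m=2.0, erosion_rate_m_per_myr=5.0)
rate = slip_rate_mm_per_yr(offset_m=2.0, residence_time_kyr=t)
```

The fragility of the method is visible in the code: both outputs scale directly with the assumed erosion rate, which is why more robust erosion-rate estimates would make it more widely usable.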

Your questions to Tamarah

You say Australia is stable, but could the occurrence of earthquakes mean that it is not? What is 'stable' in tectonic terms when we know Australia is moving north, and there is a good possibility that old fractures can reactivate and cause damaging earthquakes? (Afroz Shah)

What approaches do you think we should use to identify which pre-existing structures may pose a seismic hazard? (Paula Marques Figueiredo)

Is there any indigenous knowledge in Australia on seismic events? (Anika Braun)


Paolo Forlin


Exploring interdisciplinary dialogues: the archaeology of medieval earthquakes and the excavation of a lost seismic disaster at El Castillejo (Granada, Spain)

My paper introduces the research activities undertaken by two research projects developed at the Department of Archaeology of Durham University (UK) since 2014, namely the ArMedEa project (Archaeology of Medieval Earthquakes in Europe) and the RiskRes project (Risk and Resilience: Exploring responses to historic earthquakes in Europe, AD 1200-1755). In particular, I will discuss the case study of El Castillejo, an Islamic fortified settlement located in the Betic Cordilleras some 40 km south of Granada. Here, a combination of building archaeology, targeted excavation and robust radiocarbon dating led to the identification of an uncatalogued seismic event which caused, in the later medieval period, a prolonged abandonment of the settlement. My talk will therefore discuss the potential of collaboration between archaeologists and palaeoseismologists, investigating challenges and opportunities for the employment of such collaborative results for the implementation of historic earthquake catalogues, assessment of the local seismic hazard and seismic risk communication.

Paolo’s thoughts on our questions…

  • UNCERTAINTY – The major uncertainties I have faced in my research relate to (i) the assessment of earthquake damage in the archaeological record (here including buried stratigraphy, standing buildings, and the palaeoenvironment) and (ii) the absolute dating of this evidence. In particular, when working on historic monuments or buildings, I am very often dealing with the 'silencing' of earthquake damage in the form of invasive restorations, substantial reconstructions, and the application of obliterating plaster surfaces. When the damage is still visible, its seismic origin has to be carefully evaluated and assessed in order to rule out any other possible natural or man-made cause. When identified, the chronology of the feature is very often challenging to establish, especially in areas where the recurrence of earthquakes generates a palimpsest of damage (and post-earthquake responses) within a single building. In my experience, only a careful stratigraphic analysis of the structure (i.e. building archaeology) can help solve these methodological issues, and previous research convinces me that only the combination of standing-structure analysis with the excavation of their buried deposits can provide a comprehensive picture of the 'seismic history' of the site under investigation. At the same time, in order to tackle the aforementioned uncertainties, I am exploring the contribution of quantitative spatial analyses such as those provided by GIS-based seismic models (e.g. GEM) in order to better characterise the 'expected' level of damage at a given location (or site). I also think that interdisciplinary collaboration with palaeoseismologists and historic seismologists is crucial.
  • TIME SCALES – Putting this question in archaeological terms, I am interested in investigating both specific seismic events (e.g. the Carinthia and Friuli 1348 earthquake) and recurrent sequences of earthquakes in a given geographical area (e.g. historic earthquakes in the Spanish Pyrenees). On the other hand, I think that the possibility to archaeologically identify, assess and date previously unknown events (such as the 'lost' earthquake which affected El Castillejo) can support seismological research into seismic cycles and/or recurrence intervals. In terms of target faults (or areas, in my case), I will soon start a new project on the Carinthia and Friuli 1348 earthquake, so I will be collaborating with palaeoseismologists as part of the AlpShape4 project (PI: Christoph Grützner, Jena University) around a number of faults (i.e. areas) capable of having generated this and other severe seismic events in the eastern and southern Alps.
  • COLLABORATION – I am very keen on establishing a stronger and more regular collaboration with palaeoseismologists 'in the field'. This is something I have already started doing in Spain (with Klaus Reicherter, Aachen University; see our joint work on El Castillejo) and hope to continue soon in Italy and Austria (with Christoph Grützner, Jena University). From an archaeological perspective, it is very clear that this collaboration can help us better characterise past seismic events, thus contributing together to the impact of our research in terms of seismic risk assessment. What I strongly hope for is collaborative work and joint fieldwork around targeted seismic events or seismically hazardous key regions. I am deeply convinced that our shared stratigraphic approach facilitates interdisciplinary dialogues and joint projects, but also stresses the significance of searching for physical evidence of past seismic events 'in the landscape', also as a tool for the communication of seismic risk and seismic awareness.

Your questions to Paolo

In archaeological sites that show a record of recurring earthquakes, would it be useful to analyze the different layers of paint or repair material and their composition to get an approximation of the age of these events? In the sense that these materials, especially different colors of paints, have different provenances that could indicate commercial relations between communities of approximately known age. (Ambrosio Vega)

Once an earthquake is identified with the archeoseismological approach, how do you address the problem of finding the source (seismogenic fault) and characterizing the magnitude of the seismic event? (Pierfrancesco Burrato)

Julian Lozos

Modeling the Physics of Historic Pre-Instrumental Earthquakes: Examples from California’s San Andreas Fault System

The historic record of damaging earthquakes extends far further back in time than the instrumental record. While not having seismic waveforms or geodetic data for these older historic events means it is not possible to invert for their rupture processes, that does not mean there is no way to look into the physics behind these ruptures. Eyewitness accounts of shaking, as well as descriptions and remaining evidence of damage, can provide useful constraints on physics-based computer simulations of historic rupture processes – and all the more so when they are used in combination with paleoseismic data. In this talk, I will describe how I have used historical accounts and paleoseismic data to set up and constrain dynamic rupture simulations of several pre-instrumental California earthquakes.

Julian’s thoughts on our questions…

  • UNCERTAINTY – As a computer modeler, I'm fully aware that my models are nonunique, and that there may be several completely different physics-based ways to match any given observational dataset. This is a large part of why I try to constrain my models with several different types of datasets wherever possible. I feel like this makes them more robust. Another major limitation of modeling pre-instrumental earthquakes is that I only have observations from the ground surface (rupture length, slip, sometimes ground motion) to match. So, no matter how good the surface datasets are, the question of constraining what is happening on the fault at depth is still a mystery.
  • TIME SCALES – I think these are equally useful, though for different purposes. Recurrence interval gives information on which places should expect a large earthquake sooner, but detailed investigations of one or two events can tell us a lot about how faults behave once the earthquake begins. You can get probabilities from the long-term work, but things like shaking hazard from the short-term work.
  • HORIZONS – I would love to be able to conduct fully physics-based models of the entire seismic cycle. Right now, my models only capture the coseismic period, and I have to use other methods or approaches if I want to consider the rest of the cycle. I know there are tools being developed for integrated full-cycle modeling, and I'm excited to try using them in the future. I certainly don't think we are close to probabilistic predictions. Even places with very good multi-cycle datasets (Parkfield, most notably) have defied past prediction experiments. Not to mention, all the data in the world on known faults cannot account for the behaviors and recurrence intervals of unknown ones! I'm not a paleoseismologist, so I can't speak to methods, but I think it would be useful to target junctions between major faults around the world. Thinking of earthquakes as being confined to one fault just isn't borne out by many recent and historic events (Ridgecrest and Kaikoura come to mind immediately), so having long paleoseismic records near junctions would help piece together the long-term pattern of whether ruptures stop at or go through the junction point.
  • COLLABORATION – My work is already pretty interdisciplinary. As I mentioned earlier, models are nonunique and can give all sorts of results, so constraining them with paleoseismic, historic, and geodetic data is very important to me. I also think my method can be useful for constraining the meaning of those observations in turn. I would love to collaborate with more paleoseismologists and geodesists on future projects! I am also hoping that this workshop will give me ideas on how to collaborate with tectonic geomorphologists – whether I’m using that data to constrain models, or whether I’m making models to help constrain geomorphological observations.

Your questions to Julian:

When investigating large historical earthquakes in regions with multiple source faults, how can eyewitness testimonies be confidently linked to an individual fault if only a few records exist? (Sarah Boulton)

For pre-instrumental earthquakes, how can you disentangle between a single multi-fault rupture and a sequence of smaller events close in time? (Maria Francesca Ferrario)


Sabrina Metzger

“Tectonic Geodesy – Crustal Deformation Observed From Space”

Modern geodetic methods, that is Global Navigation Satellite Systems (GNSS) and satellite radar interferometry (InSAR), can be used to observe ongoing crustal kinematics with high spatial and temporal resolution. The presentation covers both theory and application, benefits and limits of this discipline, and how it benefits from geological records.

Sabrina‘s thoughts on our questions…

  • UNCERTAINTY – Active tectonics often occurs in high-topography, high-precipitation areas. Here, satellite radar interferometry (InSAR) performs most poorly. Atmospheric signal delay, near-surface processes, vegetation growth, snowfall, epistemic long-wavelength patterns and poorly resolved elevation models disguise the tectonic footprint in radar observations. Thus, interferometric interpretation is nearly as difficult as reading tea leaves. Modern geodetic data archives stretch 20 years at maximum, which – in geologic terms – is just a blink of an eye.
  • HORIZONS – Modern geodesy is a young and quickly evolving methodology, with major advances being implemented every few years. A recent example is the European Sentinel-1 radar satellite mission, which samples every ~10 meters of the ground every 3-14 days. But the archive is rather short, and we have to wait a few years to measure a full seismic cycle. Data archives are growing exponentially. They can only be fully explored if space agencies offer data processing services, so that scientists can focus on data interpretation. Large-scale tectonics can only be correctly interpreted with the help of dense GNSS networks. We also need more accurate elevation and weather models. I expect that in the near future InSAR time series will be analysed like GNSS time series, where the observed signals are decomposed into linear, seasonal and transient components. But unlike with relatively sparse GNSS data networks, one must consider the spatial data correlation.
  • TIME SCALES – If I were the Goddess of Geoscience, I would give people a detailed seismic catalog of the last 10 million years. As President of the World's Research Grants, I would invest all the research money in early warning systems and infrastructure, and not in fundamental geoscience research! As myself, I expect that we will first obtain a complete record of 1-2 seismic cycles rather than closing historical gaps. But even then we cannot extrapolate potential future slip activity, as so many faults are still hidden. Maybe we should first invest in seismic imaging to get a full fault catalogue, before we can discuss kinematics?
  • COLLABORATION – You missed the (instrumental) resolution of the crust, which helps to identify hidden faults! Apart from that, I benefit from all approaches, as they either fill the gaps in the temporal archive or offer the same spatial resolution as InSAR (e.g. LiDAR). None, however, solves the atmosphere problem or the InSAR "blindness" to N-S kinematics due to the satellite's acquisition geometry.
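To make the GNSS-style decomposition Sabrina describes concrete, here is a minimal least-squares sketch (our example on synthetic data) that splits a displacement time series into a linear trend and an annual seasonal cycle; transient terms and the spatial correlation she highlights for InSAR are left out:

```python
import numpy as np

def decompose(t_years, disp_mm):
    """Least-squares fit of offset + linear velocity + annual sinusoid,
    the GNSS-style time-series decomposition described above."""
    w = 2.0 * np.pi  # annual angular frequency for time in years
    G = np.column_stack([
        np.ones_like(t_years),  # constant offset
        t_years,                # linear trend (velocity)
        np.sin(w * t_years),    # seasonal terms
        np.cos(w * t_years),
    ])
    m, *_ = np.linalg.lstsq(G, disp_mm, rcond=None)
    return m  # [offset, velocity, sin_amplitude, cos_amplitude]

# Synthetic (hypothetical) series: 3 mm/yr trend + 2 mm annual cycle + noise
rng = np.random.default_rng(1)
t = np.linspace(0.0, 6.0, 150)
d = 3.0 * t + 2.0 * np.sin(2.0 * np.pi * t) + 0.3 * rng.standard_normal(t.size)
params = decompose(t, d)
```

For real InSAR stacks the same design matrix would be applied per pixel, with the spatially correlated atmospheric noise Sabrina mentions handled separately.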

Your questions to Sabrina

Based on your perception, which disciplines, used together, will allow us to predict earthquakes in the future? (Valerio Comerci)

Using InSAR and ultra-high-precision orthophotos or DEMs, we can observe ongoing crustal deformation at a place over time. Do you think it would be interesting to compare results for the same place from both methods? (Oksana Lunina)

How do you see springs as a strong geomorphic signature for normal faults and strike-slip faults? And what about their alignment?

What features generally do we expect to emerge from strike-slip faults crossing a mountain ridge? Other than deflection. (Bashir Ahmad)

Yariv Hamiel

Probing crustal deformation along the Dead Sea Fault with GPS measurements

The interseismic deformation along the Dead Sea Fault (DSF) is analyzed using 23 years of GPS measurements obtained from 270 stations. This GPS dataset is probably the longest record and the densest dataset for the DSF and the Levant region. We use inversion models to infer the spatial variations of slip and creep rates as well as locking depths along the DSF. A combined analysis of our results together with historical and paleo-seismic catalogs allowed us to estimate the average recurrence interval for large earthquakes and the amount of seismic moment accumulated since the last large earthquake along different sections of the DSF.

Yariv‘s thoughts on our questions…

  • UNCERTAINTY – The major limitation of GPS observations is that they capture only a small portion of the seismic cycle. It is like taking a few snapshots and trying to understand the entire scenario from them. Obviously, this limitation is also a major source of the epistemic uncertainties in the obtained slip rate and locking depth of a fault.
  • HORIZONS – In the field of Seismo-Geodesy, I would love to add two new approaches: a) Install as many seismo-geodetic stations as possible – integrated stations that include GPS, a seismometer and an accelerometer. This way we will have a good indication of the displacement, velocity and acceleration fields. b) Collect new data from an array of InSAR satellites with high repeat rates and different lines-of-sight. This is an important point for the Dead Sea Fault (DSF), because the orientation of the DSF is nearly parallel to the current SAR data tracks (i.e., perpendicular to the radar line-of-sight). Therefore, at present, GPS remains the only efficient tool for directly measuring the ongoing horizontal deformation along the DSF, as well as for monitoring the DSF-parallel velocities and strain accumulation rate. A combination of the above two points would be a great step forward for understanding crustal deformation.
  • TIME SCALES – I think that detailed study of one or two seismic cycles and statistical analysis of recurrence intervals for longer timescales are complementary to each other. Therefore, I think that we can all benefit from an integration of the two approaches in order to understand earthquakes and the spatial and temporal variations in crustal deformation. In my studies, I use a combination of slip rates obtained from GPS measurements together with paleoseismic data to estimate the average recurrence interval for large earthquakes and the amount of seismic moment accumulated since the last large earthquake along different sections of the Dead Sea Fault.
  • COLLABORATION – Although my main fields are geodesy and tectonics, I am trying to integrate between all four approaches, i.e., paleoseismology, archeoseismology, geodesy and tectonic geomorphology. Therefore, my studies can benefit from all four approaches.
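The moment-deficit bookkeeping Yariv describes can be sketched in a few lines. The example below (our illustration; the parameter values are invented and are not the DSF inversion results) combines a geodetic slip rate, the time since the last large earthquake, and a locked fault area into an accumulated seismic moment and its equivalent magnitude:

```python
import math

def accumulated_moment_Nm(slip_rate_mm_yr, years_since, fault_length_km,
                          locking_depth_km, mu_Pa=3.0e10):
    """Seismic moment deficit M0 = mu * area * slip deficit, where the
    slip deficit is the slip rate times the time since the last event."""
    slip_deficit_m = slip_rate_mm_yr * 1e-3 * years_since
    area_m2 = fault_length_km * 1e3 * locking_depth_km * 1e3
    return mu_Pa * area_m2 * slip_deficit_m

def moment_magnitude(m0_Nm):
    """Hanks & Kanamori (1979) moment magnitude, M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_Nm) - 9.1)

# Hypothetical fault section: 4.5 mm/yr, locked for 800 yr, 100 x 15 km
m0 = accumulated_moment_Nm(slip_rate_mm_yr=4.5, years_since=800,
                           fault_length_km=100, locking_depth_km=15)
mw = moment_magnitude(m0)
```

This is the kind of calculation where the GPS-derived slip rate and locking depth, plus a paleoseismic date for the last event, come together into a hazard-relevant number.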

Your questions to Yariv

Which fault parameters (e.g., geometry, length) are necessary to calculate a fault’s geodetic slip rate from a GPS velocity field? (Sara Pena Castellnou)

If much of the DSFZ is ‘locked’ how representative is the short term GPS record of long-term seismicity for this fault? (Sarah Boulton)


Joanna Faure Walker

“From tectonic geomorphology to SHA – details, data and deductions”

Tectonic geomorphology can provide long-term, multi-seismic-cycle deformation rates and the locations of "active" faults, both of which are key inputs for fault-based seismic hazard assessments. Field studies show that surface slip-rates on normal faults vary along strike. Therefore, we need to record such changes along faults and include this variability when calculating seismic hazard. To achieve this, the data need to be presented in such a manner that they can be appropriately incorporated into models. As models progress to include more detailed data, we can identify which faults drive risk and which are the optimal sites for further study in terms of maximum influence on risk calculations.
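As a small illustration of why along-strike variability matters for fault-based SHA, the sketch below (our example, with a hypothetical slip-rate profile) integrates a variable slip-rate profile into a fault moment rate instead of multiplying a single averaged rate by the fault area:

```python
import numpy as np

def fault_moment_rate(x_km, slip_rates_mm_yr, seis_depth_km, mu_Pa=3.0e10):
    """Moment rate (N*m/yr) from an along-strike slip-rate profile:
    trapezoidal integration of slip rate along strike, multiplied by
    seismogenic depth and rigidity."""
    x_m = np.asarray(x_km, dtype=float) * 1e3
    r = np.asarray(slip_rates_mm_yr, dtype=float) * 1e-3  # mm/yr -> m/yr
    integral = np.sum(0.5 * (r[1:] + r[:-1]) * np.diff(x_m))  # m^2/yr
    return mu_Pa * seis_depth_km * 1e3 * integral

# Hypothetical profile: slip rate tapering toward the fault tips
x = [0.0, 5.0, 10.0, 15.0, 20.0]   # km along strike
rates = [0.2, 0.8, 1.0, 0.7, 0.1]  # mm/yr
mdot = fault_moment_rate(x, rates, seis_depth_km=12.0)
```

Presenting field measurements as such a profile, rather than a single rate, is one way the data can be "appropriately incorporated into models" as the abstract argues.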

Joanna’s thoughts on our questions…

  • UNCERTAINTY – There are challenges in using surface data to model displacement and geometry at depth: as we cannot directly measure the displacement at depth, we cannot know the certainty of these assumptions. Modelling the mechanics of deformation can help guide whether assumptions are likely valid or invalid.
  • HORIZONS – A major step forward would be to know the detailed slip history of faults; having a detailed palaeoseismic record at multiple sites along each fault would give us a chance to study the variability in the magnitude and inter-event timing of earthquakes and the relationships between the timing of earthquakes on different faults. Such records need a long-term marker covering multiple seismic cycles to benchmark against.
  • TIME SCALES – We need both short-term and long-term timescales: short-term, single seismic cycles to understand which indicators could track where in a cycle we are and to identify behaviours common to imminent ruptures, and long-term, multi-seismic-cycle records to understand variability in the seismic cycle and to identify which durations may be most indicative of current hazard.
  • COLLABORATION – I would add present-day seismology and statistical modelling.

Your questions to Joanna

From seismic hazard modelers we often get complaints that the slip-rates derived from tectonic geomorphology are way higher than the ones that were obtained by modelling. It seems that geomorphological slip rates are higher because they average through long time windows (multi-seismic cycle) whereas modeled slip-rates are derived from GPS data (recent state within seismic cycle). When providing the data for SHA, how to account for the issue that long-term slip-rates may not be representative for current slip-rates and therefore for the current seismic hazard assessment? (Petra Jamšek Rupnik)

How to estimate the aseismic part of the slip-rate derived from tectonic geomorphology? (Petra Jamšek Rupnik)

Are longer palaeoseismic records realistic, or should we be focusing on other geomorphic markers to refine the long-term slip history of faults? (Sarah Boulton)

What, in your opinion, is the greatest challenge in passing field data on to modellers in a useful format without losing too much of the spatial heterogeneity observed in the field? How can uncertainties (e.g. outcrop bias) also be communicated and incorporated into these datasets? (Billy Andrews)

Some recent seismic sequences showed widespread distributed faulting; others showed the repeated rupture of the same fault strand (like Mt. Vettore fault in 2016). Is this information implemented into fault-based SHA? If not, will it be possible in the future? (Maria Francesca Ferrario)

Do you think we need to remap or recalibrate the global fault zones and create a new database, maintained by a professional team of global experts? I am saying this because I come across a lot of work where basic fault mapping is very crude and often misleading! This would create the standard global earthquake hazard maps that we need the most. (Afroz Shah)

How representative are the slip rates calculated at the surface with respect to those expected at depth, and what is the influence of the 3D geometry of the fault on what we see at the surface? (Pierfrancesco Burrato)

How could (paleo) landslides or other mass movements help to improve the seismic record? (Anika Braun)

Ramon Arrowsmith

” Sharpening our view of active faults “

Advances in our understanding of active faulting come from the continued application of existing and new tools to our investigations. These include careful field observation, detailed analysis of remotely sensed data, application of geochronology and paleoclimate constraints, and simulation of the interacting processes. We have recently been working on several projects relevant to SHA and PFDHA. The first is a “pre-rupture mapping initiative”, in which we have worked to standardize fault zone mapping and to train students to competently produce unbiased fault maps from data predating recent historic earthquakes. The second area is the application of robotics and machine learning (object detection) to produce rich maps of rock traits in 2D, and their application in 3D, including rigid-body physics engines applied to fragile geologic features. Topographic differencing of data collected before and after significant fault slip provides a rich dataset for directly testing PFDHA models. Finally, simulating the detailed interaction of surface processes, as modulated by annual to centennial climate drivers, with earthquake rupture enhances our ability to confidently read fault zone landscapes for evidence of past earthquakes.

Ramon’s thoughts on our questions…

  • UNCERTAINTY – My interest is in understanding and simulating the interacting processes to address epistemic uncertainty. But it is clear that managing bias is central to much of what we do. There is a tension between competent interpretation and convergence to accepted models and understanding. We still need to go together into the field and into the computer to discuss and understand what each of us is seeing, and to continue to standardize our approaches and our reporting.
  • HORIZONS – We can continue to make progress with improved technology for remote sensing and interpretation of the geomorphology of active fault zones, and with improved access to, and application of, multiple chronometers for dating past events (sedimentation, erosion, ground rupture, soil formation, etc.). I think we can also better exploit synchronous weather and climate events to better correlate earthquakes across regions and potential stress modulators.
  • TIME SCALES – I think we have to continue to push for single long-record sites to help build the datasets for reference analysis. However, I still think that the substitution of space for time has value, and that we can develop understanding with ensemble datasets.
  • COLLABORATION – Along with our core colleagues here in EDITH, I have recently found good success and stimulating efforts in collaborations with geochronologists, and with roboticists. The latter includes both enhanced data collection but also improved data analysis and interpretation.

Your questions to Ramon

Based on your perception, which disciplines, used together, will allow us to predict earthquakes in the future? (Valerio Comerci)

What does it mean that you have standardized fault zone mapping? (Rosa Nappi)

As you point out, subjective bias can have a large impact on various geological data sources (my work is mainly on fracture analysis). As a community, how do we go about addressing this variability in the large, community-driven datasets that are integral if we are to build up a significant record in a particular geological area? (Billy Andrews)

How could landslides be used as a proxy for past earthquakes, also considering the increasing availability of landslide catalogues for recent earthquake events, as well as HR remote-sensing products for analysing geomorphology? (Anika Braun)