Location quality of synthetic events
Hello Anthony,
I am evaluating the quality of some synthetic event locations computed with NonLinLoc. The quality measure is based on several event parameters and metrics, such as the location uncertainties (the error ellipsoid's semi-axes), the station network configuration (e.g. azimuthal coverage), and the RMS residual.
The results are shown in the figure below:
In the first row I varied the distance to the nearest station, keeping the travel-time errors constant within each column, and I tested this configuration for several event depths: 0, 5, 10, 20 and 30 km. In the second row I varied the (primary) azimuthal gap instead.
A general trend I observe is that the location quality is optimal for the synthetic events between 5 and 20 km depth, compared with the rest, and it worsens considerably at 30 km depth. Why does this happen even when the RMS values are fixed and the station coverage is identical (so only the uncertainties vary)?
Best regards :)
Does your velocity model have a variation of velocity with depth, or is it homogeneous?
Anthony
This is the 1D velocity model used for the location:
LAYER 0.0 5.50 0.0 3.16 0.0 2.7 0.0
LAYER 1.0 5.75 0.0 3.30 0.0 2.7 0.0
LAYER 2.0 5.98 0.0 3.44 0.0 2.7 0.0
LAYER 4.0 6.11 0.0 3.51 0.0 2.7 0.0
LAYER 10.0 6.30 0.0 3.62 0.0 2.7 0.0
LAYER 18.0 6.36 0.0 3.66 0.0 2.7 0.0
LAYER 26.0 7.35 0.0 4.22 0.0 2.7 0.0
LAYER 30.0 8.05 0.0 4.63 0.0 2.7 0.0
Greetings!
There is a large increase in velocity at 26 and 30 km depth, so the 30 km deep locations should have very different (more horizontal) ray take-off angles toward the more distant stations than shallower events. Location constraint depends on the ray coverage and ray directions at the hypocenter, so a difference in the take-off angle distribution for the 30 km deep events may explain the difference in quality.
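As a rough back-of-the-envelope illustration (a minimal sketch based on your layered model, not a full ray trace): in a 1D model the ray parameter p = sin(i)/v(z) is conserved along a ray, and a ray refracting along the top of the 8.05 km/s layer has p = 1/8.05 s/km, so the take-off angle i at the source satisfies sin(i) = v_source/8.05:

source at 20 km, v = 6.36 km/s  ->  i = arcsin(6.36/8.05) ≈ 52 deg
source at 30 km, v = 8.05 km/s  ->  i = arcsin(8.05/8.05) = 90 deg (horizontal)

Nearly horizontal take-off angles toward the more distant stations leave the event depth poorly resolved.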
What are the columns in your plot - what are the RMS differences?
Anthony
I see, thanks for the explanation. But the increase in velocity between 18 and 26 km depth is even larger, and yet I do not observe the same effect for the synthetic events at 20 km depth...
The columns represent the variation of the synthetic timing errors at each station, following this syntax: EQSTA <station> <phase> GAU <error_value> GAU <error_value> 1.0
The aim was to assess the effect of travel-time picking errors on the location quality.
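For example, one column might be generated with lines like these (the station code and error values here are only illustrative):

EQSTA ST01 P GAU 0.10 GAU 0.10 1.0
EQSTA ST01 S GAU 0.20 GAU 0.20 1.0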
Given your LAYER specifications, with NLL Vel2Grid the P velocity will be 6.36 down to 26 km, then jump to 7.35 at 26 km depth. If you want a continuous (linear-gradient) increase in velocity, you would need to specify a non-zero gradient in the column after the P velocity and after the S velocity, as in the sketch below. You also need to specify a VGGRID that is fine enough to represent this gradient well. Alternatively, you can generate a smooth model externally and use a large number of LAYER lines to pass it to Vel2Grid.
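For example (a minimal sketch; the gradients are computed from your layer velocities, and the grid dimensions are only illustrative), the lower part of your model could become:

LAYER 18.0 6.36 0.124 3.66 0.070 2.7 0.0
LAYER 26.0 7.35 0.175 4.22 0.103 2.7 0.0
LAYER 30.0 8.05 0.0 4.63 0.0 2.7 0.0

together with a finer vertical grid spacing, such as:

VGGRID 2 601 81 0.0 0.0 -3.0 0.5 0.5 0.5 SLOW_LEN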
> The columns represent the variation of the synthetic timing errors at each station
OK. Depending on the details of your quality measure, one reason for decreasing quality with increasing timing errors is that the extent of the pdf (and thus the ellipsoid semi-axes) increases with the nominal pick uncertainty, and will also increase, in general, with the random pick error.
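As a rough linearized sketch (assuming Gaussian picking errors with standard deviation sigma and a fixed matrix G of travel-time partial derivatives, rather than the full non-linear pdf), the hypocenter covariance scales as

C ≈ sigma^2 (G^T G)^-1

so the ellipsoid semi-axes, proportional to the square roots of the eigenvalues of C, grow roughly linearly with sigma.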
Anthony