Y. Radzyner1, Y. Ben Horin2, D.M. Steinberg3
1National Data Centre of Israel, Soreq Nuclear Research Center, Israel
2Soreq Nuclear Research Center, Israel
3Tel Aviv University, Raymond and Beverly Sackler Faculty of Exact Sciences, Department of Statistics and Operations Research, Tel Aviv, Israel
Magnitude, a concept first introduced by Gutenberg and Richter, is the standard measure of the strength of an earthquake. The International Data Centre (IDC) defines the body-wave magnitude of event i at station j as m(sta) = log(A/T) + VC(∆, h), where A is the maximum amplitude, T is the corresponding period, and VC is the Veith-Clawson correction, which compensates for the epicentral distance (∆) of the station and the depth (h) of the source. The network magnitude (m(net)) is calculated as the average of the station magnitudes and should therefore be close in value to each of them. In practice, however, the residuals m(sta) − m(net) are observed to range between -1 and 1 magnitude units, or roughly ±25% of a typical m(net) value. We show that the residual depends linearly on log(A/T), and we propose a method to correct for this using station-specific correction terms. The procedure was applied to roughly four million station-event pairs, representing over 400,000 events in the Reviewed Event Bulletin (REB), and we find that it reduces the residuals by roughly a third. We show that this reduction is not an artifact of the averaging process. We also conducted two sets of simulations designed to differentiate between the underlying models for station magnitude.
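The correction procedure can be sketched on synthetic data. Everything below is illustrative, not the authors' implementation: the station count, noise levels, and the stand-in VC values are assumptions, and the fit is an ordinary least-squares line of the residual against log(A/T) at each station, as the abstract's linear dependence suggests.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: n_events events recorded at n_sta stations.
n_events, n_sta = 500, 6
true_mag = rng.uniform(3.5, 5.5, n_events)

# Station magnitude: m(sta) = log10(A/T) + VC(delta, h).
vc = rng.uniform(3.0, 3.6, n_sta)        # stand-in for the VC correction
slope = rng.normal(0.0, 0.15, n_sta)     # station-specific linear bias (assumed)

# Simulate log10(A/T) with a per-station trend, mimicking the observed
# linear dependence of the residual on log10(A/T).
log_a_t = true_mag[:, None] - vc[None, :]
log_a_t += slope[None, :] * (log_a_t - log_a_t.mean())
log_a_t += rng.normal(0.0, 0.1, (n_events, n_sta))

m_sta = log_a_t + vc[None, :]            # station magnitudes
m_net = m_sta.mean(axis=1)               # network magnitude (plain average)
resid = m_sta - m_net[:, None]           # station residuals

# Station-specific correction: fit resid ~ a_j + b_j * log10(A/T) at each
# station by least squares, then subtract the fitted trend.
corrected = np.empty_like(m_sta)
for j in range(n_sta):
    b, a = np.polyfit(log_a_t[:, j], resid[:, j], 1)
    corrected[:, j] = m_sta[:, j] - (a + b * log_a_t[:, j])

resid_corr = corrected - corrected.mean(axis=1, keepdims=True)
print(np.std(resid), np.std(resid_corr))  # spread shrinks after correction
```

In this toy setup the corrected residual spread is smaller than the raw one, which is the qualitative behavior the abstract reports; the "roughly a third" reduction depends on the real REB data, not on this sketch.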