No matching response information found

Hello,
I am working on a script to remove the instrument response for a large number of earthquakes in SAC format.
Here’s the script:

# Import third-party libraries
from pathlib import Path
from obspy import read_events, read, read_inventory, UTCDateTime
from obspy.geodetics.base import gps2dist_azimuth
from obspy.core.util.attribdict import AttribDict
import matplotlib.pyplot as plt
# Read the evids/directories
infile = "quakes.csv"
with open(infile,"r") as f:
    next(f) # skip header
    evids = [line.split(",")[0].strip() for line in f]
print(evids)
# Pre-filter corners in the frequency domain,
# to prevent overamplification while convolving the inverted response spectrum
pre_filt = (0.01, 0.02, 15, 20)
for evid in evids: # to process all events
    event_dir = evid
    # Directory of processed waveforms
    outdir = "Processed" # Output directory
    Path(outdir).mkdir(parents=True,exist_ok=True)
    # Event hypocenter
    st = read(f"{event_dir}/*.sac")
    header = st[0].stats.sac
    origin_time = f"{header.nzyear:04}-{header.nzjday:03}T{header.nzhour:02}:{header.nzmin:02}:{header.nzsec:02}.{header.nzmsec:03}"
    depth = header.evdp
    evlo = header.evlo
    evla = header.evla
    print(origin_time, depth, evlo, evla)
    print(st)
    # Read response files
    inv = read_inventory(f"{event_dir}/stations/*", format="STATIONXML")
    _ = inv.plot(projection="local") # plot stations
    # Detrend and remove instrument response
    st.detrend(type="linear") # equivalent to rtr in SAC
    st.remove_response(inventory=inv, pre_filt=pre_filt, output="DISP", zero_mean=True) # correct to displacement
    st.detrend(type="linear")
    st.detrend(type="demean") # remove mean

Then I get the error: No matching response information found
Although the information does match.
I have attached one of the SAC files and the instrument response for it.

2015.01.01-08.45.44.DHK1n.sac (391.2 KB)
MP.DHK1.xml (352.0 KB)

Thank you,

Just as the error stated, there is no matching response information in your metadata. The waveforms are for 2015 and the metadata is for 2017 with open end.
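
To verify this yourself, here is a minimal sketch (using the attached file names) that prints the metadata epochs next to the trace start time:

from obspy import read, read_inventory

st = read("2015.01.01-08.45.44.DHK1n.sac")
inv = read_inventory("MP.DHK1.xml")
print(st[0].stats.starttime)

# list every channel epoch contained in the StationXML
for net in inv:
    for sta in net:
        for cha in sta:
            print(cha.code, cha.start_date, "--", cha.end_date)

# selecting by the trace start time returns an empty inventory when no
# epoch covers the waveform -- which is exactly what triggers the error
print(inv.select(time=st[0].stats.starttime))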

You were right; it is working now after modifying the XML file.
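
For reference, the epochs can also be widened programmatically instead of editing the XML by hand. A rough sketch, assuming the same instrument really was installed earlier (the 2014 date and the output file name are hypothetical):

from obspy import read_inventory, UTCDateTime

inv = read_inventory("MP.DHK1.xml")
new_start = UTCDateTime(2014, 1, 1)  # hypothetical earlier installation date
for net in inv:
    for sta in net:
        for cha in sta:
            # widen any channel epoch that starts too late
            if cha.start_date is not None and cha.start_date > new_start:
                cha.start_date = new_start
        if sta.start_date is not None and sta.start_date > new_start:
            sta.start_date = new_start
inv.write("MP.DHK1_fixed.xml", format="STATIONXML")
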
I have one more question.
I used:

    st.remove_response(inventory=inv, pre_filt=pre_filt, output="VEL", zero_mean=True)

My raw data was velocity in counts/sec.
What will be the unit after removing the instrument response?
How can I convert it to velocity in nm/sec?
Here is the header of the data after removing the instrument response:

Thank you for your help,

Are you sure it is the same instrument before that time, though? Depending on where this metadata file comes from, that start date might have been correct and you’re looking for a different instrument’s metadata. It is not at all uncommon for one site to see instrumentation changes over time.

Technically, your raw data is just counts, and it might happen to be proportional to velocity in some parts of the spectrum if it is a seismometer (as opposed to an accelerometer, etc.).

Output is in SI units, so if you chose "VEL" it will be “m/s”. So… just multiply by 1e9 to get “nm/s”.
Also, the output units are detailed in the documentation, so please check the docs first; we put a lot of time and effort into them. :wink:

https://docs.obspy.org/master/packages/autogen/obspy.core.trace.Trace.remove_response.html#obspy.core.trace.Trace.remove_response
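
If you want to double check in code, the response itself records the sensor’s input units. A small sketch (the SEED id and time are placeholders; use your actual channel code and a time your metadata covers):

from obspy import read_inventory, UTCDateTime

inv = read_inventory("MP.DHK1.xml")
# placeholder SEED id and time -- adapt to your data
resp = inv.get_response("MP.DHK1..HHN", UTCDateTime(2017, 6, 1))
print(resp.instrument_sensitivity.input_units)  # e.g. "M/S" for a velocity sensor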

Yes, I am sure of that. The XML file is from IRIS, and they only have the data from 2017, which is why their XML file starts in 2017. However, I have more data that is not stored at IRIS.

I appreciate your help; that was really helpful.
So, I assume that I need to add the command line:

conversion_factor = 1e9
for trace in st:
    trace.data *= conversion_factor

Is that right?

Yes, that’s correct :+1:

@megies
Sorry but I have to ask again :sweat_smile:
Sometimes there is an invalid value in my data that shows up as -nan in the header, and it stops the script. What can I do in this case? I cannot find the invalid data by hand because I have about 900 events.

You can find the problematic data using the debugger:

There are several ways you can go after it; the simplest might be to start your script with pdb enabled for post-mortem debugging on the command line:

$ python -m pdb my_script.py
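
Once it crashes, type c at the pdb prompt to run until the exception, and you will be dropped into post-mortem debugging at the offending line, where you can inspect the trace that caused it.

Alternatively, since the bad samples show up as -nan, a short pre-check can flag and drop the offending traces before remove_response so the whole run does not stop. A sketch along those lines:

import numpy as np

# after st = read(f"{event_dir}/*.sac") in your script
for tr in list(st):
    if not np.all(np.isfinite(tr.data)):
        print("non-finite samples in", tr.id, tr.stats.starttime)
        st.remove(tr)  # drop the trace so processing can continue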