How to combine 3-component mseed files?

I am using the mass downloader to download daily 3-component .mseed files from the GEOFON (GFZ) network. Instead of downloading all components into a single file, the script downloads each component into an individual file. I need to combine all the components into a single file.
The script below works well for a single case, but I have a bulk of data with different dates and station names.
Could someone suggest how I can generalize this script, so it saves the 3 components in a single file with the same name?

import obspy

input_fnames = [f"6D.ABR8..HH{c}__20041201T000001Z.mseed" for c in "ENZ"]
output_fname = "combined.mseed"

traces = []
for file in input_fnames:
    stream = obspy.read(file)
    traces.append(stream[0])
combined = obspy.Stream(traces)
combined.write(output_fname, format="MSEED")

You could make use of glob expressions; obspy.read accepts wildcard patterns directly. For example, your script can be shortened to:

from obspy import read

stream = read('6D.ABR8..HH?__20041201T000001Z.mseed')
stream.write('combined.mseed', 'MSEED')

Generalized script:

from glob import glob
from obspy import read

out_path = 'combined/'

for fname in glob('*Z__*.mseed'):
    stream = read(fname.replace('Z__', '?__'))
    out_fname = out_path + fname.replace('Z__', '__')
    stream.write(out_fname, 'MSEED')
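The output names in the script above are derived purely by string substitution, so the renaming step can be checked without touching any data. A minimal sketch (the second station/date is a hypothetical example):

```python
# Z-component filenames as produced by the mass downloader;
# the second one is a made-up station/date for illustration.
z_files = [
    "6D.ABR8..HHZ__20041201T000001Z.mseed",
    "6D.XY01..HHZ__20050103T000001Z.mseed",
]

for fname in z_files:
    pattern = fname.replace('Z__', '?__')                  # matches all three components
    out_fname = 'combined/' + fname.replace('Z__', '__')   # component code dropped
    print(pattern, '->', out_fname)
```

One caveat: the combined/ directory has to exist before stream.write is called; os.makedirs('combined', exist_ok=True) takes care of that.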

miniSEED is a very simple format and no Python scripting is required. Simply concatenate the content of all files of interest and form a new one. Example:

cat *.mseed > combined.mseed
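If a shell is not available (e.g. on Windows), the same byte-level concatenation can be done portably in Python with only the standard library. A minimal sketch; the scratch directory and placeholder bytes below stand in for real miniSEED files from the downloader:

```python
import glob
import os
import shutil
import tempfile

# Work in a scratch directory with a few fake "component" files
# (real data would be the three .mseed files from the downloader).
workdir = tempfile.mkdtemp()
for comp in 'ENZ':
    with open(os.path.join(workdir, f'6D.ABR8..HH{comp}__demo.mseed'), 'wb') as fh:
        fh.write(comp.encode() * 4)  # placeholder bytes, not real miniSEED

# miniSEED records are self-contained, so plain byte concatenation is valid.
out_path = os.path.join(workdir, 'combined.mseed')
with open(out_path, 'wb') as out:
    for path in sorted(glob.glob(os.path.join(workdir, '*HH?__demo.mseed'))):
        with open(path, 'rb') as src:
            shutil.copyfileobj(src, out)
```

shutil.copyfileobj streams the data in chunks, so this also works for files too large to fit in memory at once.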

What the others said. :+1:

In general, since miniSEED is a streaming format with self-contained blocks, you want to simply concatenate the raw data without going through ObsPy. It is probably not important to you, but you lose some fine-grained, record-level information (miniSEED flags) if you go through ObsPy (not sure whether the mass downloader goes through a read operation anyway, though). So, as @droessler pointed out, on Linux simply concatenate on the command line, or if that is not possible, you could do the concatenation in Python as well:

import io

data = io.BytesIO()
for path in ...:  # the paths of the input .mseed files
    with open(path, 'rb') as fh:
        data.write(fh.read())
data.seek(0)
with open('...', 'wb') as fh:  # the output filename
    fh.write(data.read())

Edit: Oh and this line from your script is definitely not what you want:

    traces.append(stream[0])

If your data is gappy, you can have more than one trace per file, and you would be missing out on any additional traces… so to catch all data:

    traces.extend(stream.traces)
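The difference matters as soon as a file holds more than one trace. In plain list terms (the strings below stand in for ObsPy Trace objects):

```python
# Two input "streams"; the second is gappy and holds two traces.
stream_a = ['Tr(HHZ part 1)']
stream_b = ['Tr(HHN part 1)', 'Tr(HHN part 2)']

only_first = []
all_traces = []
for stream in (stream_a, stream_b):
    only_first.append(stream[0])   # silently drops 'Tr(HHN part 2)'
    all_traces.extend(stream)      # keeps every trace

print(len(only_first), len(all_traces))  # -> 2 3
```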