I ask because it does not appear to do this.
I tested on macOS with 1.4.0dev and on Linux with 1.3.
I tried trimming a stream, writing it to disk, and then reading it back in. The processing attribute is present after the trim but is missing when I read the trimmed stream back in from disk.
This seems like a bug to me, but maybe it is intended?
MiniSEED 2, which is still by far the most prevalent version, does not support custom headers like that. MiniSEED version 3 does, so it would be possible to do that (even though file size increases significantly when writing a lot of custom headers), but we still need to finalize support for MiniSEED 3, which will hopefully happen in the next few weeks.
EDIT: If the (pre-)processing is cheap, just do it on the fly and only keep the raw data. If the processing workflow is heavy, you'll need to store that processing information yourself alongside the data; for example, you could serialize the dictionary-like Stats object with the json module, e.g. like:
import json

from obspy import read, UTCDateTime

class MyJSONEncoder(json.JSONEncoder):
    def default(self, o):
        # represent UTCDateTime objects as ISO-style strings
        if isinstance(o, UTCDateTime):
            return str(o)
        return super().default(o)

st = read()
tr = st[0]
# an attached response object is not JSON serializable, so drop it if present
tr.stats.pop('response', None)
with open('/tmp/info.json', 'wt') as fh:
    json.dump(tr.stats.__dict__, fh, indent=2, cls=MyJSONEncoder)
Just getting back to this (slow day…) and a couple of thoughts:

Just in case anyone wants to go this route, note that AttribDict (like UTCDateTime) is also not JSON serializable and so must also be converted to a plain dict in default().

My question was more one of philosophy: people and data centers exchange processed data pretty regularly (I think), so it seems like the mseed header itself should log all of the processing, not a separate file that could easily become detached from the mseed file. Maybe implementing mseed version 3 will handle this, as you suggest.