Imagine one has two miniSEED records. The timestamp of the first record's last sample is the start time given in its header plus (number of samples − 1) times its sampling period. If the second record's first sample is timestamped exactly one sampling period after the first record's last sample, the two records are combined into one trace (I think).
In real data, however, the stated start time of the second record may deviate from the first record's last sample time plus one sampling period by a small fraction of a sample. I wonder what internal condition ObsPy uses to decide that two such records are still combined into one trace rather than kept as two separate traces. I suppose there is something like an epsilon condition. Is that true, and if so, can I access it somehow when reading a stream object?
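To make clear what kind of condition I mean, here is a minimal sketch in plain Python (the names and the half-sample default are my own guesses, not ObsPy internals):

```python
# Toy sketch of an "epsilon" contiguity check between two records.
# All names here are hypothetical, not taken from ObsPy's code.
def is_contiguous(last_sample_time, next_start_time, delta, epsilon=0.5):
    """Return True if the second record starts within `epsilon` sampling
    intervals of the expected time (last sample + one sampling period)."""
    expected = last_sample_time + delta
    return abs(next_start_time - expected) <= epsilon * delta

delta = 0.01  # 100 Hz sampling
# Second record starts 1 ms late: within half a sample interval -> contiguous
print(is_contiguous(10.00, 10.011, delta))  # True
# Second record starts a full sample late -> treated as a gap
print(is_contiguous(10.00, 10.02, delta))   # False
```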
I dug through the code but could not find anything in that regard. Thanks in advance for any pointers.
There are some tolerances used in the MiniSEED reading code. Lion and Chad will know more about this, but if you want to dig into the code, you can start here:
Hi John, maybe what you are looking for can be found under stream._cleanup:
The misalignment_threshold can, I believe, be set in a call to stream.merge, since any merge operation performs a cleanup merge as a first step and passes **kwargs downstream.
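The kwargs plumbing I have in mind would look roughly like the following toy model (this is a sketch of the mechanism, not ObsPy's actual code; `used_threshold` is my own name for illustration):

```python
# Toy model of merge() forwarding extra keyword arguments to the
# cleanup step, as described above. Not ObsPy's implementation.
class ToyStream:
    def _cleanup(self, misalignment_threshold=0.0):
        # In the real _cleanup, sub-sample misalignments up to this
        # fraction of the sampling interval would be tolerated; here we
        # only record the value to show it arrived.
        self.used_threshold = misalignment_threshold
        return self

    def merge(self, method=0, **kwargs):
        # The cleanup merge runs first and receives any extra kwargs.
        self._cleanup(**kwargs)
        return self

st = ToyStream().merge(misalignment_threshold=0.1)
print(st.used_threshold)  # 0.1
```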
Yes and no. It's correct that you can control this when merging Traces, but in the MiniSEED reading routines specifically, some similar merging is already done in C code before Trace objects are created and presented to the user. This is mostly due to the per-record granularity of MiniSEED: without that step, a potentially very large number of Trace objects would have to be created, which would be slow and could carry a huge memory overhead.
we already discussed this a while ago here:
There is even a partial fix for this issue. Back then we decided against merging it directly, as it changes behavior relative to previous ObsPy versions. Maybe it's time to discuss this again. Best to move the rest of the discussion to the GitHub issue.