Parsing dataless files

Hi All,

I am new to ObsPy and I have run into a problem with the getPAZ method: it throws an IndexError when trying to set the abbreviation value. I looked at the value of blockette.stages and it contains a single integer in a list within a list, but the code expects a second value in that inner list. I changed the code to look up the single value instead (matching line 485) and then I get: AttributeError: 'Blockette030' object has no attribute 'sensitivity_gain'

I am wondering whether this is a problem with our dataless files or a bug in the code.

Any help on this would be greatly appreciated. Below is the Python output.

-Jacob

Python 2.6.6 (r266:84292, Aug 28 2012, 10:55:56)
[GCC 4.4.6 20120305 (Red Hat 4.4.6-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from obspy.xseed import Parser
>>> from obspy.core import read
>>> st = read("./DATA/20120413000000.CI.DEV.BHZ.mseed")
>>> tr = st.select(id="CI.DEV..BHZ")[0]
>>> parser = Parser("./Responses/dataless.CI.DEV")
>>> from obspy import UTCDateTime
>>> t = UTCDateTime("2012-04-13")
>>> paz = parser.getPAZ(tr.id,t)
Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "/usr/lib/python2.6/site-packages/obspy-0.8.3-py2.6-linux-x86_64.egg/obspy/core/util/decorator.py", line 67, in echo_func
     return func(*args, **kwargs)
   File "/usr/lib/python2.6/site-packages/obspy-0.8.3-py2.6-linux-x86_64.egg/obspy/xseed/parser.py", line 480, in getPAZ
     abbreviation = blockette.stages[0][1]
IndexError: list index out of range
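
For anyone who wants to poke at the parsed blockettes directly, here is a minimal diagnostic sketch that should show what blockette 60 actually contains (untested; it assumes the Parser keeps the per-station blockette lists in parser.stations and that every blockette exposes an integer id attribute):

>>> from obspy.xseed import Parser
>>> parser = Parser("./Responses/dataless.CI.DEV")
>>> # Walk every parsed station record and print the stage lookup keys
>>> # stored in blockette 60. If the inner lists hold only one entry,
>>> # stages[0][1] in getPAZ raises the IndexError seen above.
>>> for station in parser.stations:
...     for blockette in station:
...         if blockette.id == 60:
...             print blockette.stages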

Hi Jacob,

a copy of your dataless would be very helpful to debug this issue -
either send it to me directly or provide some download link.

Best regards,
Robert



Thanks for the quick reply, Robert. Here is a link to our dataless files: http://www.data.scec.org/ftp/stations/seed/

-Jacob

Dear Jacob,

a short update on this issue - I haven't had much time yet to look into it - but it seems we have an issue with correctly parsing/interpreting blockette 060.

For now I have opened a new ticket at https://github.com/obspy/obspy/issues/511 and will have a deeper look into it this weekend.

Robert

PS: besides, I spent a lot of time yesterday trying to find a proper way to split the dataless file into channel-specific dataless files containing only the entry for this one failing channel at the given timestamp - however, due to the huge station-wide abbreviations dictionary, my split files always ended up larger than 50 kB - quite big for a test file :confused:

Does anyone know of a good tool which is up to the task described above?
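
In case it helps as a starting point, below is a rough, untested sketch of doing the split directly with the xseed Parser. The attribute and method names (parser.stations, channel_identifier, location_identifier, writeSEED) are assumptions based on the SEED field names, and the shared abbreviation dictionary is left untouched, so the output may still end up large:

from obspy.xseed import Parser

parser = Parser("./Responses/dataless.CI.DEV")

# Keep blockette 50 (station header) and only those blockette 52
# (channel header) groups that describe the channel of interest;
# every other channel and its response blockettes are dropped.
# Attribute names below are assumptions based on the SEED field names.
channel, location = "BHZ", ""
filtered = []
for station in parser.stations:
    record = []
    keep = True
    for blockette in station:
        if blockette.id == 52:
            keep = (blockette.channel_identifier == channel and
                    blockette.location_identifier == location)
        if blockette.id == 50 or keep:
            record.append(blockette)
    filtered.append(record)
parser.stations = filtered

# Write the reduced volume back out as dataless SEED.
parser.writeSEED("dataless.CI.DEV.BHZ")

This does not yet restrict the remaining channel to the single epoch covering the failing timestamp, and I have not run it against a real volume, so treat it as an idea rather than a working tool.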