If I understand correctly, you think the anomalous packet carries a modified sampling rate as a way to “correct” for a ~1 sample time drift, and you plan to compensate by adjusting the data timing to agree with the anomalous packet while keeping the sampling rate constant.
If that assumption about what is happening is correct, then I think your solution is a good way to go: have some logic that defines the normal sampling rate and ignores anomalous sample rates, and always (or only when an anomalous sample rate appears?) re-adjust the data timing to reflect the start time of the latest packet. The downside is that the data timing shifts, but this may still be the least bad solution, especially if the data windows are not too long…
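To make the idea concrete, here is a minimal sketch of that logic. Everything here is hypothetical: the `Packet` fields, the nominal rate, and the tolerance are assumptions about your data format, not a known implementation. The idea is simply to keep the sampling rate fixed at the nominal value, flag and ignore an anomalous header rate, and re-anchor the sample times to each packet's start time.

```python
from dataclasses import dataclass

NOMINAL_RATE = 100.0    # assumed nominal sampling rate in Hz (placeholder value)
RATE_TOLERANCE = 0.01   # relative deviation beyond which a header rate is treated as anomalous

@dataclass
class Packet:
    start_time: float    # packet start time in seconds (assumed header field)
    sample_rate: float   # sampling rate reported in the packet header, Hz
    samples: list        # payload samples

def timestamps(packet: Packet) -> list:
    """Assign per-sample times anchored to the packet's own start time,
    but always using the nominal rate, so anomalous header rates are ignored."""
    rate = packet.sample_rate
    if abs(rate - NOMINAL_RATE) / NOMINAL_RATE > RATE_TOLERANCE:
        rate = NOMINAL_RATE  # ignore the anomalous rate, keep sampling rate constant
    return [packet.start_time + i / rate for i in range(len(packet.samples))]
```

With this scheme each packet's first sample lands exactly on the packet's reported start time, so the small timing discontinuity shows up only at packet boundaries rather than accumulating as drift.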