Posted by Moe Trin on 06/26/07 20:05
On Mon, 25 Jun 2007, in the Usenet newsgroup alt.comp.linux, in article
<1182837968.071609.156750@e16g2000pri.googlegroups.com>, sla29970@gmail.com
wrote:
>Say somehow the clock is more than three seconds fast, so the system
>is out of the range which is tolerated according to SEC guidelines for
>trading. But the clock also runs a real-time process control system
>where changing the frequency by 1 PPM will throw off synchronization.
Poorly contrived example. If someone was so inept as to be running a
system used for SEC regulated trading, _AND_ using the same system as a
process controller, the problem is not going to be solved by somehow
offsetting the date/time. The problem will be solved by getting the
process controller off the system with this gross oscillator.
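For what it's worth, the usual answer on Linux is to slew the clock
rather than step it (the adjtime()/ntpd approach), but a back-of-the-envelope
run shows why that is no help in the scenario above: held to a 1 PPM slew,
a 3 second error takes about 35 days to work off. A minimal sketch, using
the 3 s / 1 PPM figures from the quoted scenario rather than anything
measured:

    /* slew.c - how long does a 1 PPM slew take to absorb a 3 s offset? */
    #include <stdio.h>

    int main(void)
    {
        double offset_s = 3.0;               /* clock error to remove  */
        double slew_ppm = 1.0;               /* tolerable rate change  */
        double seconds  = offset_s / (slew_ppm * 1e-6);

        printf("%.1f s at %.1f PPM takes %.0f s (about %.1f days)\n",
               offset_s, slew_ppm, seconds, seconds / 86400.0);
        return 0;
    }

Which is just another way of saying those two requirements belong on two
separate clocks.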
>What to do?
Shoot the designer - he has no concept of error budgets. A process
that demands better than 1 PPM stability should never have been
designed around the kind of crap oscillator found in computers. I used
to have a copy of a document from an oscillator manufacturer (possibly
Greenray, Bliley, or Vectron) that went into stability problems. The
predominant error mechanism in crystal oscillators is temperature,
followed by electrical supply voltage, load, and physical vibration.
Then you have "aging" where the crystal gradually changes frequency
(drifting slowly higher, at a rate dependent on the orientation of the
cut as well as temperature), as well as the accuracy of setting the
frequency in the first place. There are additional error mechanisms,
such as thermal shock, and even the physical orientation of the crystal.
If you need a _stability_ of better than 1 PPM over all (or most)
operating conditions, you are paying a large number of bucks to get a
very specialized oscillator (probably an order of magnitude over the
cost of that PC). Does 'stability' imply timing accuracy? Absolutely
not, although an accurate frequency DOES imply stability.
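To put PPM numbers in wall-clock terms: 1 PPM is about 86 ms of drift per
day, and a run-of-the-mill PC crystal (somewhere in the tens of PPM - a
ballpark guess, not a spec for any particular board) drifts several
seconds per day. A quick sketch of the arithmetic:

    /* drift.c - wall-clock error per day for a given oscillator error.
     * The 50 and 100 PPM rows are ballpark values for an ordinary PC
     * crystal, not a figure for anyone's motherboard in particular.   */
    #include <stdio.h>

    int main(void)
    {
        double ppm[] = { 1.0, 50.0, 100.0 };
        int i;

        for (i = 0; i < 3; i++)
            printf("%6.1f PPM  ->  %6.3f s/day\n",
                   ppm[i], ppm[i] * 1e-6 * 86400.0);
        return 0;
    }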
>200 years ago navigators understood that the ship's chronometer would
>not stay in agreement with the earth, but resetting the chronometer
>made it unstable,
Agree - except that the absolute measure of time back then was by
today's standards "gross". However, _PRACTICAL_ celestial navigation
involved errors on the order of a significant fraction of a mile, so
absolute accuracies of +/- a second of time are lost in the noise of
the sextant angle measurement errors. If you are talking about
the time error in surveying, you also have to look up the term "circle
of uncertainty", and note that with the advent of satellite based
navigation systems starting with "Transit", the coordinates of most
places in the world had to be restated to correct errors in surveyed
values.
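The arithmetic behind "lost in the noise": the earth turns 360 degrees in
86400 seconds, so one second of chronometer error works out to 15
arc-seconds of longitude - roughly a quarter of a nautical mile at the
equator, and less as you move toward the poles. A rough sketch of the
conversion:

    /* lon_err.c - position error from a chronometer error, at the
     * equator.  One nautical mile per arc-minute only holds on the
     * equator; scale by the cosine of the latitude elsewhere.        */
    #include <stdio.h>

    int main(void)
    {
        double err_s   = 1.0;                              /* chronometer error */
        double arc_sec = err_s * 360.0 * 3600.0 / 86400.0; /* 15" per second    */
        double nmi     = arc_sec / 60.0;                    /* nmi at equator    */

        printf("%.0f s of time = %.0f\" of longitude = %.2f nmi at the equator\n",
               err_s, arc_sec, nmi);
        return 0;
    }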
>so they kept logs of the difference between chronometer and clock.
I suspect the last word should be something else - perhaps "time
obtained from a recognized standard". Now whether that is the time from
a government operated "Standard Time Station" such as WWV*, CHU, MSF,
JJY, or RWM, or a celestial observation of the transit of moons around
Jupiter is irrelevant.
>50 years ago there was still no such thing as a clock which did not
>need to be reset, and the concept was unthinkable. At some level of
>precision (which depends on how much money was spent on the clock and
>engineering the systems around it) that remains true.
That's also because the absolute definition of time keeps changing,
hence the use of leap seconds, which is what started this whole thread.
Old guy