Posted by Roy Harvey on 01/18/07 20:58
This is a documented behavior of the datetime datatype. As Books
Online says: "Values are rounded to increments of .000, .003, or .007
seconds". It applies to all versions, and I would not expect it to
ever change.
If you must keep it to the exact millisecond then you cannot use
datetime.
You could split the information into two columns, say one part for the
date (which could use smalldatetime) and the other for milliseconds
since midnight. Or, since smalldatetime is accurate only to the
minute, the second column could hold just the seconds and milliseconds
past the minute. There are countless variations possible; none will
make processing simple.
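As one sketch of the second variation (the table and column names here
are only hypothetical, and the incoming string is assumed to look like
'2007-01-18 11:15:14.123'):

    -- hypothetical table: time to the minute, plus exact
    -- milliseconds past the minute
    CREATE TABLE Readings
    (
        reading_minute  smalldatetime NOT NULL,  -- to the minute
        ms_past_minute  int           NOT NULL   -- 0 through 59999
    )

    -- split the incoming varchar without ever touching datetime
    DECLARE @ts varchar(23)
    SET @ts = '2007-01-18 11:15:14.123'

    INSERT INTO Readings (reading_minute, ms_past_minute)
    SELECT CAST(LEFT(@ts, 16) AS smalldatetime),      -- '2007-01-18 11:15'
           CAST(SUBSTRING(@ts, 18, 2) AS int) * 1000  -- seconds
         + CAST(SUBSTRING(@ts, 21, 3) AS int)         -- milliseconds

Reporting queries would then compare on reading_minute first and
ms_past_minute second, at the cost of reassembling the two pieces
whenever a single timestamp is needed.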
Roy Harvey
Beacon Falls, CT
On 18 Jan 2007 11:15:14 -0800, vikram.mankar@gmail.com wrote:
>I'm running into a constant issue of SQL Server modifying the
>millisecond part of timestamps inserted from another application. The
>application inserts the timestamp, which includes a millisecond
>portion, as a string (varchar). But when SQL Server moves this data to
>another table (for reporting) and the string is inserted into a
>datetime field, the millisecond value invariably changes by 1-2
>milliseconds for every single data point inserted. Given the
>time-critical nature of this data (to the millisecond), it's almost
>impossible to avoid this other than to leave the data as a string
>type. But this drives the analytical reporting folks wild, as report
>queries based on time criteria are getting messed up. Any ideas how to
>force SQL Server not to mess around with the millisecond value? Does
>this problem exist with SQL Server 2005 as well?