Posted by vikram.mankar on 01/18/07 19:15
I'm running into a constant issue with SQL Server modifying the
millisecond part of a timestamp inserted from another application. The
application inserts timestamps, which include a millisecond portion, as
strings (varchar). But when SQL Server moves this data to another table
(for reporting) and the string is inserted into a datetime field, the
millisecond value invariably changes by 1-2 milliseconds for every
single data point inserted. Given the time-critical nature of this data
(to the millisecond), it's almost impossible to avoid this other than
to leave the data as a string type. But that drives the analytical
reporting folks wild, as report queries based on time criteria get
messed up. Any ideas on how to force SQL Server not to mess with the
millisecond value? Does this problem exist in SQL Server 2005 as well?
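
For illustration, here's roughly what I'm seeing as a self-contained
repro (the literal timestamp values are made up; any varchar-to-datetime
conversion shows the same shift, which matches datetime's documented
1/300-second, roughly 3.33 ms, precision):

-- Convert a varchar timestamp to datetime and watch the milliseconds move.
-- datetime rounds the millisecond part to .000, .003, or .007 increments.
DECLARE @ts varchar(23);

SET @ts = '2007-01-18 19:15:00.001';
SELECT CAST(@ts AS datetime);   -- comes back as ...:00.000 (shifted by 1 ms)

SET @ts = '2007-01-18 19:15:00.005';
SELECT CAST(@ts AS datetime);   -- comes back as ...:00.007 (shifted by 2 ms)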