Posted by Erland Sommarskog on 10/01/28 11:18
New MSSQL DBA (boscong88@gmail.com) writes:
> Hi all, we are now planning to upgrade our application from a
> non-unicode version to a unicode version. The application's backend is
> a SQL Server 2000 SP3.
>
> The concern is, existing business data are stored using collation
> "Chinese_PRC_CI_AS", i.e. Simplified Chinese. So I thought we need to
> extract this data out to the new SQL Server which is using Unicode (I
> assume it means converting them to nchar, nvarchar type fields, as I
> don't have enough information from the application side; or is there a
> general Unicode collation that will make even char and varchar types
> store data as Unicode?).
You will have to move to nchar/nvarchar.
> The problem is what's the best and most efficient way to do this data
> conversion?
> bcp? DTS? or others?
One idea would be to create a new database on the same server, with
the (var)char columns changed to n(var)char columns, and then insert
data over. In this case you will get a conversion from the multi-byte
character set you use today. You would then move that database to the
new server with detach/attach or backup/restore.
You would not create indexes, constraints and triggers in the new
database, until you have copied the data.
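As a sketch of that idea (the database and table names here, OldDb,
NewDb and dbo.Customers, are made up for illustration; substitute your
own schema):

```sql
-- OldDb.dbo.Customers has a varchar(100) Name column stored under
-- Chinese_PRC_CI_AS; NewDb.dbo.Customers has the same table, but with
-- Name declared as nvarchar(100).
-- The implicit conversion in the INSERT translates the data from the
-- double-byte code page (936) into Unicode.
INSERT NewDb.dbo.Customers (CustomerID, Name)
SELECT CustomerID, Name
FROM   OldDb.dbo.Customers

-- Create indexes, constraints and triggers only after the copy,
-- so the load itself runs faster:
CREATE INDEX ix_name ON NewDb.dbo.Customers (Name)
```

After the copy, you would detach NewDb (sp_detach_db) and attach it on
the new server, or use BACKUP/RESTORE.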
Using BCP means that you have to bounce the data over disk. Then again,
bulk load is faster, so the BCP route could still come out quicker
overall. Here I cannot say for certain that you will get a conversion,
although I believe that you would.
(I have never converted Chinese text from double-byte to Unicode, so
I don't really know what works and what does not.)
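If you try the BCP route, I believe the -w option is the one to look
at, since it writes the data file in Unicode and should convert char
data from the code page on the way out. A sketch (server and table
names are again made up):

```
rem Export with -w so the data file is Unicode; -T uses a trusted
rem connection. OLDSERVER/NEWSERVER and the table are placeholders.
bcp OldDb.dbo.Customers out Customers.dat -w -T -S OLDSERVER

rem Load into the table with nvarchar columns on the new server.
bcp NewDb.dbo.Customers in Customers.dat -w -T -S NEWSERVER
```

But as I said, I have not done this with Chinese data myself, so test
on a small table first.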
As for DTS, I don't know DTS at all, so I can't say whether it's good or
not.
--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinfo/productdoc/2000/books.asp