Re: Bulk Insert

Posted by Erland Sommarskog on 06/28/05 00:46

pk (philip.kluss@gmail.com) writes:
> Sorry for the piece-by-piece nature of this post, I moved it from a
> dormant group to this one and it was 3 separate posts in the other
> group. Anyway...
>
> I'm trying to bulk insert a text file of 10 columns into a table with
> 12. How can I specify which columns to insert to? I think format
> files are what I'm supposed to use, but I can't figure them out. I've
> also tried using a view, as was suggested on one of the many websites
> I've searched, but I clearly did that incorrectly as well.

Format files are a bit tedious, but for 10 columns it's not that
bad. Here is an example:

8.0
10
1 SQLCHAR 0 0 "\t" 1 X ""
2 SQLCHAR 0 0 "\t" 2 X ""
...
10 SQLCHAR 0 0 "\r\n" 12 X ""

The first row is the version of the file format. The next row gives the number
of fields in the data file. The next ten rows describe the fields, one per line.

The first column is the field number. The second column is the data type.
This is always SQLCHAR for an ANSI file, and SQLNCHAR for a Unicode
file. The other data types apply only to binary data files.

The third column is the prefix length; this is always 0 for a text file. The
fourth column is the field length; use this for fixed-length fields, or else
leave it as 0. The fifth column is the field terminator. In the example I'm
assuming tab, save for the last field, which is terminated by carriage
return + line feed.

The sixth column is the number of the target column in the SQL Server table.
This does not have to follow the field numbers in the file. If the number is 0,
that field in the text file is not imported.

The seventh column is the column name, but this column is informational
only.

The eighth column specifies the collation. This is useful if you need to
convert data between character sets when importing.
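Putting that together for your case, a complete format file might look like
the one below. I'm assuming the first nine fields go to table columns 1-9 and
the last field to column 12, and the column names in the seventh column are
placeholders; map the fields to whichever of your 12 table columns they
actually belong in.

```sql
8.0
10
1   SQLCHAR  0  0  "\t"    1   col01  ""
2   SQLCHAR  0  0  "\t"    2   col02  ""
3   SQLCHAR  0  0  "\t"    3   col03  ""
4   SQLCHAR  0  0  "\t"    4   col04  ""
5   SQLCHAR  0  0  "\t"    5   col05  ""
6   SQLCHAR  0  0  "\t"    6   col06  ""
7   SQLCHAR  0  0  "\t"    7   col07  ""
8   SQLCHAR  0  0  "\t"    8   col08  ""
9   SQLCHAR  0  0  "\t"    9   col09  ""
10  SQLCHAR  0  0  "\r\n"  12  col12  ""
```

Save it to a file and point BULK INSERT at it with the FORMATFILE option
(path is just an example):

```sql
BULK INSERT Test..IV10401
   FROM 'c:\bulkInsertFile2.txt'
   WITH (FORMATFILE = 'c:\bulkInsertFile2.fmt')
```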


> --------------------------------
> Update:
>
> I'm working with the view, and I've got a view that contains the exact
> columns from the table I want. I ran my bulk insert command,
>
> BULK INSERT Test..IV10401 FROM 'c:\bulkInsertFile2.txt'
>
> and it returned the error:
>
> Server: Msg 2601, Level 14, State 3, Line 1
> Cannot insert duplicate key row in object 'IV10401' with unique index
> 'AK2IV10401'.
> Note: Bulk Insert through a view may result in base table default
> values being ignored for NULL columns in the data file.
> The statement has been terminated.
>
> The AK2IV10401 key is comprised of 3 columns and I'm sure that each of
> my rows in the insert file is unique according to those three. What
> should I be checking for?

Maybe the keys are already in the table?

> -----------------------
> Update 2:
>
> I can only successfully insert 1 row. It seems to be treating each row
>
> as an individual primary key when it should be treating them as
> composite keys. I cannot alter the table, since it was created by
> Great Plains Dynamics.

Without access to table definition, data file and the BCP command
it's hard to tell what is going on.

A common technique is to bulk load into a staging table, and then
clean up data there, before moving to the target table.
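As a sketch of that technique (all table and column names below are made up;
substitute the real layout of your file and of IV10401, and in particular the
three columns behind AK2IV10401):

```sql
-- Staging table with the same columns as the data file.
CREATE TABLE dbo.IV10401_staging (
    keycol1  varchar(30)  NOT NULL,   -- placeholder names for the three
    keycol2  varchar(30)  NOT NULL,   -- columns of the AK2IV10401 index
    keycol3  varchar(30)  NOT NULL,
    datacol1 varchar(255) NULL
)

BULK INSERT dbo.IV10401_staging FROM 'c:\bulkInsertFile2.txt'

-- Move only rows whose key is not already in the target table.
INSERT INTO Test..IV10401 (keycol1, keycol2, keycol3, datacol1)
SELECT s.keycol1, s.keycol2, s.keycol3, s.datacol1
FROM   dbo.IV10401_staging s
WHERE  NOT EXISTS (SELECT *
                   FROM   Test..IV10401 t
                   WHERE  t.keycol1 = s.keycol1
                     AND  t.keycol2 = s.keycol2
                     AND  t.keycol3 = s.keycol3)
```

This also lets you find duplicates within the file itself with a
GROUP BY ... HAVING COUNT(*) > 1 query on the staging table before you
move anything.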

> Is there some sort of switch that I'm missing
> in my bulk insert statement or can I suppress the errors?

Well, with BULK INSERT you can use the BATCHSIZE and MAXERRORS options (the
-b and -m switches with bcp) to set the batch size and increase the number
of errors permitted. See Books Online for further details.
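For instance (the values here are illustrative, not recommendations):

```sql
BULK INSERT Test..IV10401
   FROM 'c:\bulkInsertFile2.txt'
   WITH (BATCHSIZE = 1000, MAXERRORS = 100)
```

Note that MAXERRORS is about errors in the data itself, such as conversion
failures; a duplicate-key violation on a unique index can still terminate
the statement, which is another reason to go via a staging table.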

--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinfo/productdoc/2000/books.asp
