Posted by Erland Sommarskog on 06/28/05 10:15
pk (philip.kluss@gmail.com) writes:
> It is not clustered. It is based off of 6 columns, PRCSHID, EPITMTYP,
> ITEMNMBR, UOFM, QTYFROM, and QTYTO. The Create UNIQUE checkbox is
> checked and greyed out, so I can't use my previous workaround of
> checking the "Ignore Duplicate Key" box. Index Filegroup is PRIMARY.
> Fill Factor is 90%.
>
> One last thing is that the Table Identity Column for IV10402 is set to
> DEX_ROW_ID, which happens to be the one column that I'm not inserting.
> Is this a problem?
>
> Again, this table is empty when I run this insert. I'm almost positive
> that there aren't actually duplicate primary keys. Did Microsoft
> really offer no way to find out which rows it feels are duplicates?
Either there are duplicates in the file, or the format file is incorrect
somehow, so that data ends up in the wrong columns.
Create a copy of the table, but put no indexes or constraints on that
copy. Bulk load the data into it. Then check for duplicates with

   SELECT col1, col2, ..., COUNT(*)
   FROM   tbl
   GROUP  BY col1, col2, ...
   HAVING COUNT(*) > 1
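
With the key columns you listed, that check would be something like
this (the name of the copy is of course just an example):

   SELECT PRCSHID, EPITMTYP, ITEMNMBR, UOFM, QTYFROM, QTYTO, COUNT(*)
   FROM   IV10402_staging
   GROUP  BY PRCSHID, EPITMTYP, ITEMNMBR, UOFM, QTYFROM, QTYTO
   HAVING COUNT(*) > 1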
Also, do something like "SELECT TOP 100 * FROM tbl" to see whether the
data makes any sense.
You can use the Object Browser in Query Analyzer to create a script
for the table. Find the table; the scripting options are on the context
menu. Create the table in tempdb.
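Just to sketch what I mean (the data types here are guesses on my part,
so go by the script you generate; only leave out the keys, the indexes
and the IDENTITY property on DEX_ROW_ID):

   USE tempdb
   go
   CREATE TABLE IV10402_staging (
      PRCSHID   char(11)      NOT NULL,
      EPITMTYP  smallint      NOT NULL,
      ITEMNMBR  char(31)      NOT NULL,
      UOFM      char(9)       NOT NULL,
      QTYFROM   numeric(19,5) NOT NULL,
      QTYTO     numeric(19,5) NOT NULL
      -- ...any further columns your format file maps, but no keys
      -- or indexes.
   )
   go
   BULK INSERT IV10402_staging
      FROM 'C:\temp\iv10402.txt'
      WITH (FORMATFILE = 'C:\temp\iv10402.fmt')

The file paths are of course whatever you use today.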
As for finding directly which rows are problematic, BULK INSERT
does not seem to offer this option. BCP does, but I think the error
file covers only format errors, not insertion errors.
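
If you want to try that, the error file is the -e option to BCP; the
server name and the paths below are just placeholders:

   bcp YourDB.dbo.IV10402_staging in C:\temp\iv10402.txt -f C:\temp\iv10402.fmt -e C:\temp\iv10402.err -S YOURSERVER -T

But as I said, I believe that file only gets rows that fail conversion,
not rows that violate a key.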
--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinfo/productdoc/2000/books.asp