Friday, March 30, 2012

How to import data from a text file?

Hello group,
Each month we want to import data from an ASCII file.
The records are of a fixed length (some spare space is added to pad them
out).
There are 12 different record types (variants).
Each type has a different layout, but within a type the layout is always
the same.
Each record starts with a type indicator; the record's fields follow.
One type can have 2 fields, another type can have 15 fields.
Records that follow each other can be related, so a type 2 record
belongs to the type 1 record that precedes it.
Example with 3 types
1 AAAAAABB
2 CCCDDDDDDEEEEFFGGHH
2 CCCDDDDDDEEEEFFGGHH
3 JJJJJJKKKKKKKKKLLMMMMMMNOPPPPQQQQ
1 AAAAAABB
2 CCCDDDDDDEEEEFFGGHH
The letters indicate the different fields.
Any suggestions how to solve this?
In our proprietary system this was solved by reading each record
into a Pascal record with all twelve variants and then writing it to the
proprietary database. We could reuse the technique of that program,
but cannot reuse the program itself.
And we still have to split up each record into fields.
Our proprietary database just accepted the 'binary' data as-is.
ben brugman

Ben,
You can reimplement your method of reading a record at a time and inserting
the row into the proper destination by using a DTS package and doing some
scripting for each row.
How I tend to do things like this is:
BULK INSERT the file to a work table
CREATE TABLE BulkInsertWork
(RowID INT IDENTITY,
RowText NVARCHAR(1000))
TRUNCATE TABLE BulkInsertWork
BULK INSERT MyDB.dbo.BulkInsertWork FROM 'c:\MyMonthlyFile.txt' ...
Then I write a stored procedure that parses out the specific rows, such as:
INSERT INTO Table1 (ColumnA, ColumnB)
SELECT SUBSTRING(RowText,3,6), SUBSTRING(RowText,9,2)
FROM BulkInsertWork
etc. for each type
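
For example, to pull out just the type 2 rows (assuming the type indicator
is the first character and the fields start at position 3, as in your
sample; Table2 and its columns are placeholder names):

INSERT INTO Table2 (ColumnC, ColumnD)
SELECT SUBSTRING(RowText,3,3), SUBSTRING(RowText,6,6)
FROM BulkInsertWork
WHERE SUBSTRING(RowText,1,1) = '2'

The RowID identity should also let you tie a type 2 row back to its type 1
parent: the parent is the row with the greatest RowID smaller than the
child's RowID whose first character is '1'.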
FWIW
Russell Fields
http://www.sqlpass.org/events/seattle03
2003 PASS Community Summit - Seattle
- The largest user-event dedicated to SQL Server! Register TODAY!
"ben brugman" <ben@.niethier.nl> wrote in message
news:bob7k7$4dj$1@.reader11.wxs.nl...
> Hello group,
> Each month we want to import data from an ascii file,
> the records are of a fixed length (some spare space is added to fill them
> up)
> there are 12 different types of records (variants)
> Each type has a different (but within the type always the same) build.
> Each record starts with a type indication. Then the records follows.
> One type can have 2 fields the other type can have 15 fields.
> Records following eachother can have relations, so a type two record
> belongs to the type 1 record before the type 2 record.
> Example with 3 types
> 1 AAAAAABB
> 2 CCCDDDDDDEEEEFFGGHH
> 2 CCCDDDDDDEEEEFFGGHH
> 3 JJJJJJKKKKKKKKKLLMMMMMMNOPPPPQQQQ
> 1 AAAAAABB
> 2 CCCDDDDDDEEEEFFGGHH
> The letters are indications for the different fields.
> Any suggestions how to solve this ?
> In our propriety system this was solved by reading the record
> into a pascal record with all twelf variants and then writing it to the
> propriety database. We could reuse the technique of the program,
> but can not reuse the program.
> And we still have to split up each record into fields.
> Our propriety database just accepted the 'binairy' data as is.
> ben brugman

I immediately went ahead with your suggested method.
I created a .txt file from the example in the mail.
I changed the nvarchar in the table to varchar.
When inserting from the .txt file I get the following errors:
Server: Msg 4832, Level 16, State 1, Line 1
Bulk Insert: Unexpected end-of-file (EOF) encountered in data file.
Server: Msg 7399, Level 16, State 1, Line 1
OLE DB provider 'STREAM' reported an error. The provider did not give any
information about the error.
The statement has been terminated.
Added:
WITH (
DATAFILETYPE = 'char',
ROWTERMINATOR = '\n'
)
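
So the full statement is now something like (with the same file name as in
your example):

BULK INSERT MyDB.dbo.BulkInsertWork
FROM 'c:\MyMonthlyFile.txt'
WITH (
DATAFILETYPE = 'char',
ROWTERMINATOR = '\n'
)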
But still the same errors.
(LASTROW and ROWSPERBATCH did not bring a solution either.)
Please advise.
Also played around a bit with DTS, but the problem is that I can do an
import and then a selection, or I can split the line up into different
fields, but I cannot do both.
(Otherwise I would have to split every line up in every possible way and
throw away the unwanted results after selection: make a table for each
type, insert every row into every table, and then throw away the types
which do not belong in that table. I think this is pretty ugly.)
Ben brugman
"Russell Fields" <RussellFields@.NoMailPlease.Com> wrote in message
news:uf4ZQm7oDHA.1072@.TK2MSFTNGP09.phx.gbl...
> Ben,
> You can reimplement your method of reading a record at a time and
inserting
> the row into the proper destination by using a DTS package and doing some
> scripting for each row.
> How I tend to do things like this is:
> BULK INSERT the file to a work table
> CREATE TABLE BulkInsertWork
> (RowID INT IDENTITY,
> RowText NVARCHAR(1000))
> TRUNCATE TABLE BulkInsertWork
> BULK INSERT MyDB.dbo.BulkInsertWork FROM 'c:\MyMonthlyFile.txt' ...
> Then I write a stored procedure that parses out the specific rows, such
as:
> INSERT INTO Table1 (ColumnA, ColumnB)
> SELECT SUBSTRING(RowText,3,6), SUBSTRING(9,2)
> FROM BulkInsertWork
> etc. for each type
> FWIW
> Russell Fields
> http:/www.sqlpass.org/events/seattle03
> 2003 PASS Community Summit - Seattle
> - The largest user-even dedicated to SQL Server! Register TODAY!
>
> "ben brugman" <ben@.niethier.nl> wrote in message
> news:bob7k7$4dj$1@.reader11.wxs.nl...
> > Hello group,
> >
> > Each month we want to import data from an ascii file,
> > the records are of a fixed length (some spare space is added to fill
them
> > up)
> > there are 12 different types of records (variants)
> > Each type has a different (but within the type always the same) build.
> > Each record starts with a type indication. Then the records follows.
> > One type can have 2 fields the other type can have 15 fields.
> >
> > Records following eachother can have relations, so a type two record
> > belongs to the type 1 record before the type 2 record.
> >
> > Example with 3 types
> >
> > 1 AAAAAABB
> > 2 CCCDDDDDDEEEEFFGGHH
> > 2 CCCDDDDDDEEEEFFGGHH
> > 3 JJJJJJKKKKKKKKKLLMMMMMMNOPPPPQQQQ
> > 1 AAAAAABB
> > 2 CCCDDDDDDEEEEFFGGHH
> >
> > The letters are indications for the different fields.
> >
> > Any suggestions how to solve this ?
> >
> > In our propriety system this was solved by reading the record
> > into a pascal record with all twelf variants and then writing it to the
> > propriety database. We could reuse the technique of the program,
> > but can not reuse the program.
> > And we still have to split up each record into fields.
> > Our propriety database just accepted the 'binairy' data as is.
> >
> > ben brugman
> >
> >

Ben,
A guess: the last row of the text file does not have the '\n'. If that is
so, append the character.
Less likely and more obscure: are you on SQL Server 2000? If you are on
7.0, KB article 324122 describes some obscure problems that are unlikely
to apply, but...
http://support.microsoft.com/default.aspx?scid=kb;en-us;324122#appliesto
http://support.microsoft.com/default.aspx?scid=kb;EN-US;272292
http://support.microsoft.com/default.aspx?scid=kb;EN-US;197043
All error numbers that are in the range of 7300 to 7399 indicate a problem
with the provider, in this case STREAM.
Russell Fields
http://www.sqlpass.org/events/seattle03
2003 PASS Community Summit - Seattle
- The largest user-event dedicated to SQL Server! Register TODAY!
"ben brugman" <ben@.niethier.nl> wrote in message
news:eumzJHGpDHA.2188@.TK2MSFTNGP11.phx.gbl...
> I immeadiatly went ahead with your suggested method.
> I created a .txt file from the example in the mail.
> I changed the nvarchar in the table to varchar.
> When inserting from the .txt file I get the following errors :
> Server: Msg 4832, Level 16, State 1, Line 1
> Bulk Insert: Unexpected end-of-file (EOF) encountered in data file.
> Server: Msg 7399, Level 16, State 1, Line 1
> OLE DB provider 'STREAM' reported an error. The provider did not give any
> information about the error.
> The statement has been terminated.
>
> Added :
> WITH (
> DATAFILETYPE = 'char',
> ROWTERMINATOR = '\n'
> )
> But stil the same errors.
> (Lastrow and rowsperbatch did not bring a solution either).
> Please advise.
> Also played around a bit with DTS, but the problem is that I can do an
> import and then
> a selection. Or that I can do a split up of the line into different
fields.
> But I can not do them both.
> (Or I have to split up every line in every possible way and after
selection
> throwing away
> the not desired results. Make a table for each type and insert every row
in
> every table and
> then trowing away the types which do not belong in the table. I think this
> is pretty ugly).
> Ben brugman
>
>
>
> "Russell Fields" <RussellFields@.NoMailPlease.Com> wrote in message
> news:uf4ZQm7oDHA.1072@.TK2MSFTNGP09.phx.gbl...
> > Ben,
> >
> > You can reimplement your method of reading a record at a time and
> inserting
> > the row into the proper destination by using a DTS package and doing
some
> > scripting for each row.
> >
> > How I tend to do things like this is:
> > BULK INSERT the file to a work table
> >
> > CREATE TABLE BulkInsertWork
> > (RowID INT IDENTITY,
> > RowText NVARCHAR(1000))
> >
> > TRUNCATE TABLE BulkInsertWork
> >
> > BULK INSERT MyDB.dbo.BulkInsertWork FROM 'c:\MyMonthlyFile.txt' ...
> >
> > Then I write a stored procedure that parses out the specific rows, such
> as:
> >
> > INSERT INTO Table1 (ColumnA, ColumnB)
> > SELECT SUBSTRING(RowText,3,6), SUBSTRING(9,2)
> > FROM BulkInsertWork
> >
> > etc. for each type
> >
> > FWIW
> > Russell Fields
> > http:/www.sqlpass.org/events/seattle03
> > 2003 PASS Community Summit - Seattle
> > - The largest user-even dedicated to SQL Server! Register TODAY!
> >
> >
> >
> > "ben brugman" <ben@.niethier.nl> wrote in message
> > news:bob7k7$4dj$1@.reader11.wxs.nl...
> > > Hello group,
> > >
> > > Each month we want to import data from an ascii file,
> > > the records are of a fixed length (some spare space is added to fill
> them
> > > up)
> > > there are 12 different types of records (variants)
> > > Each type has a different (but within the type always the same) build.
> > > Each record starts with a type indication. Then the records follows.
> > > One type can have 2 fields the other type can have 15 fields.
> > >
> > > Records following eachother can have relations, so a type two record
> > > belongs to the type 1 record before the type 2 record.
> > >
> > > Example with 3 types
> > >
> > > 1 AAAAAABB
> > > 2 CCCDDDDDDEEEEFFGGHH
> > > 2 CCCDDDDDDEEEEFFGGHH
> > > 3 JJJJJJKKKKKKKKKLLMMMMMMNOPPPPQQQQ
> > > 1 AAAAAABB
> > > 2 CCCDDDDDDEEEEFFGGHH
> > >
> > > The letters are indications for the different fields.
> > >
> > > Any suggestions how to solve this ?
> > >
> > > In our propriety system this was solved by reading the record
> > > into a pascal record with all twelf variants and then writing it to
the
> > > propriety database. We could reuse the technique of the program,
> > > but can not reuse the program.
> > > And we still have to split up each record into fields.
> > > Our propriety database just accepted the 'binairy' data as is.
> > >
> > > ben brugman
> > >
> > >
> >
> >

After loads of trying and reading the KB articles, I am still getting the
same error message.
In the end we deleted the identity column from "BulkInsertWork" and now the
insert works, but of course we have lost the order of the rows.
Also did try some DTS work, but DTS is not very flexible when it comes to
variant records in a text file.
Thanks for your time,
and if you have suggestions to bring the identity column back,
I would appreciate that.
ben brugman
"Russell Fields" <RussellFields@.NoMailPlease.Com> wrote in message
news:uqOboZIpDHA.2404@.TK2MSFTNGP12.phx.gbl...
> Ben,
> A guess: The last row of the text file does not have the '\n'. If that
is
> so, append the character.
> Less likely and more obscure: Are you on SQL Server 2000? If you are on
> 7.0, KB article 324122 has obscure problems that it is unlikely apply,
> but...
> http://support.microsoft.com/default.aspx?scid=kb;en-us;324122#appliesto
> http://support.microsoft.com/default.aspx?scid=kb;EN-US;272292
> http://support.microsoft.com/default.aspx?scid=kb;EN-US;197043
> All error numbers that are in the range of 7300 to 7399 indicate a problem
> with the provider, in this case STREAM.
> Russell Fields
> http://www.sqlpass.org/events/seattle03
> 2003 PASS Community Summit - Seattle
> - The largest user-event dedicated to SQL Server! Register TODAY!
> "ben brugman" <ben@.niethier.nl> wrote in message
> news:eumzJHGpDHA.2188@.TK2MSFTNGP11.phx.gbl...
> > I immeadiatly went ahead with your suggested method.
> > I created a .txt file from the example in the mail.
> > I changed the nvarchar in the table to varchar.
> >
> > When inserting from the .txt file I get the following errors :
> >
> > Server: Msg 4832, Level 16, State 1, Line 1
> > Bulk Insert: Unexpected end-of-file (EOF) encountered in data file.
> > Server: Msg 7399, Level 16, State 1, Line 1
> > OLE DB provider 'STREAM' reported an error. The provider did not give
any
> > information about the error.
> > The statement has been terminated.
> >
> >
> > Added :
> > WITH (
> > DATAFILETYPE = 'char',
> > ROWTERMINATOR = '\n'
> > )
> >
> > But stil the same errors.
> > (Lastrow and rowsperbatch did not bring a solution either).
> >
> > Please advise.
> >
> > Also played around a bit with DTS, but the problem is that I can do an
> > import and then
> > a selection. Or that I can do a split up of the line into different
> fields.
> > But I can not do them both.
> > (Or I have to split up every line in every possible way and after
> selection
> > throwing away
> > the not desired results. Make a table for each type and insert every row
> in
> > every table and
> > then trowing away the types which do not belong in the table. I think
this
> > is pretty ugly).
> >
> > Ben brugman
> >
> >
> >
> >
> >
> >
> >
> > "Russell Fields" <RussellFields@.NoMailPlease.Com> wrote in message
> > news:uf4ZQm7oDHA.1072@.TK2MSFTNGP09.phx.gbl...
> > > Ben,
> > >
> > > You can reimplement your method of reading a record at a time and
> > inserting
> > > the row into the proper destination by using a DTS package and doing
> some
> > > scripting for each row.
> > >
> > > How I tend to do things like this is:
> > > BULK INSERT the file to a work table
> > >
> > > CREATE TABLE BulkInsertWork
> > > (RowID INT IDENTITY,
> > > RowText NVARCHAR(1000))
> > >
> > > TRUNCATE TABLE BulkInsertWork
> > >
> > > BULK INSERT MyDB.dbo.BulkInsertWork FROM 'c:\MyMonthlyFile.txt' ...
> > >
> > > Then I write a stored procedure that parses out the specific rows,
such
> > as:
> > >
> > > INSERT INTO Table1 (ColumnA, ColumnB)
> > > SELECT SUBSTRING(RowText,3,6), SUBSTRING(9,2)
> > > FROM BulkInsertWork
> > >
> > > etc. for each type
> > >
> > > FWIW
> > > Russell Fields
> > > http:/www.sqlpass.org/events/seattle03
> > > 2003 PASS Community Summit - Seattle
> > > - The largest user-even dedicated to SQL Server! Register TODAY!
> > >
> > >
> > >
> > > "ben brugman" <ben@.niethier.nl> wrote in message
> > > news:bob7k7$4dj$1@.reader11.wxs.nl...
> > > > Hello group,
> > > >
> > > > Each month we want to import data from an ascii file,
> > > > the records are of a fixed length (some spare space is added to fill
> > them
> > > > up)
> > > > there are 12 different types of records (variants)
> > > > Each type has a different (but within the type always the same)
build.
> > > > Each record starts with a type indication. Then the records follows.
> > > > One type can have 2 fields the other type can have 15 fields.
> > > >
> > > > Records following eachother can have relations, so a type two record
> > > > belongs to the type 1 record before the type 2 record.
> > > >
> > > > Example with 3 types
> > > >
> > > > 1 AAAAAABB
> > > > 2 CCCDDDDDDEEEEFFGGHH
> > > > 2 CCCDDDDDDEEEEFFGGHH
> > > > 3 JJJJJJKKKKKKKKKLLMMMMMMNOPPPPQQQQ
> > > > 1 AAAAAABB
> > > > 2 CCCDDDDDDEEEEFFGGHH
> > > >
> > > > The letters are indications for the different fields.
> > > >
> > > > Any suggestions how to solve this ?
> > > >
> > > > In our propriety system this was solved by reading the record
> > > > into a pascal record with all twelf variants and then writing it to
> the
> > > > propriety database. We could reuse the technique of the program,
> > > > but can not reuse the program.
> > > > And we still have to split up each record into fields.
> > > > Our propriety database just accepted the 'binairy' data as is.
> > > >
> > > > ben brugman
> > > >
> > > >
> > >
> > >
> >
> >

Ben,
Yes, I am sorry that I glossed over this in my earlier quick explanation. (I
thought about it during the night, don't you know.)
You need a format file to tell BULK INSERT to skip over the identity column.
Something like:
8.0
2
1  SQLCHAR  0  0     ""      1  ID    SQL_Latin1_General_Cp437_BIN
2  SQLCHAR  0  1000  "\r\n"  2  Text  SQL_Latin1_General_Cp437_BIN
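
Then point BULK INSERT at the format file, something like (assuming you
save it as c:\MyMonthlyFile.fmt):

BULK INSERT MyDB.dbo.BulkInsertWork
FROM 'c:\MyMonthlyFile.txt'
WITH (FORMATFILE = 'c:\MyMonthlyFile.fmt')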
Russell Fields
http://www.sqlpass.org/events/seattle03
2003 PASS Community Summit - Seattle
- The largest user-event dedicated to SQL Server! Register TODAY!
"ben brugman" <ben@.niethier.nl> wrote in message
news:OiAr$FTpDHA.3844@.tk2msftngp13.phx.gbl...
> After loads of trying, reading the KB file, still getting the same Error
> message.
> In the end we deleted the identity column from "BulkInsertWork" and now
the
> insert works, but offcourse we have lost the order of the table.
> Also did try some DTS work, but DTS is not very flexible when it comes to
> variant records in a text file.
> Thanks for your time,
> and if you have suggestions to bring the identity column back,
> I would appreciate that.
> ben brugman
>
> "Russell Fields" <RussellFields@.NoMailPlease.Com> wrote in message
> news:uqOboZIpDHA.2404@.TK2MSFTNGP12.phx.gbl...
> > Ben,
> >
> > A guess: The last row of the text file does not have the '\n'. If that
> is
> > so, append the character.
> >
> > Less likely and more obscure: Are you on SQL Server 2000? If you are
on
> > 7.0, KB article 324122 has obscure problems that it is unlikely apply,
> > but...
> > http://support.microsoft.com/default.aspx?scid=kb;en-us;324122#appliesto
> > http://support.microsoft.com/default.aspx?scid=kb;EN-US;272292
> > http://support.microsoft.com/default.aspx?scid=kb;EN-US;197043
> >
> > All error numbers that are in the range of 7300 to 7399 indicate a
problem
> > with the provider, in this case STREAM.
> >
> > Russell Fields
> > http://www.sqlpass.org/events/seattle03
> > 2003 PASS Community Summit - Seattle
> > - The largest user-event dedicated to SQL Server! Register TODAY!
> >
> > "ben brugman" <ben@.niethier.nl> wrote in message
> > news:eumzJHGpDHA.2188@.TK2MSFTNGP11.phx.gbl...
> > > I immeadiatly went ahead with your suggested method.
> > > I created a .txt file from the example in the mail.
> > > I changed the nvarchar in the table to varchar.
> > >
> > > When inserting from the .txt file I get the following errors :
> > >
> > > Server: Msg 4832, Level 16, State 1, Line 1
> > > Bulk Insert: Unexpected end-of-file (EOF) encountered in data file.
> > > Server: Msg 7399, Level 16, State 1, Line 1
> > > OLE DB provider 'STREAM' reported an error. The provider did not give
> any
> > > information about the error.
> > > The statement has been terminated.
> > >
> > >
> > > Added :
> > > WITH (
> > > DATAFILETYPE = 'char',
> > > ROWTERMINATOR = '\n'
> > > )
> > >
> > > But stil the same errors.
> > > (Lastrow and rowsperbatch did not bring a solution either).
> > >
> > > Please advise.
> > >
> > > Also played around a bit with DTS, but the problem is that I can do an
> > > import and then
> > > a selection. Or that I can do a split up of the line into different
> > fields.
> > > But I can not do them both.
> > > (Or I have to split up every line in every possible way and after
> > selection
> > > throwing away
> > > the not desired results. Make a table for each type and insert every
row
> > in
> > > every table and
> > > then trowing away the types which do not belong in the table. I think
> this
> > > is pretty ugly).
> > >
> > > Ben brugman
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > "Russell Fields" <RussellFields@.NoMailPlease.Com> wrote in message
> > > news:uf4ZQm7oDHA.1072@.TK2MSFTNGP09.phx.gbl...
> > > > Ben,
> > > >
> > > > You can reimplement your method of reading a record at a time and
> > > inserting
> > > > the row into the proper destination by using a DTS package and doing
> > some
> > > > scripting for each row.
> > > >
> > > > How I tend to do things like this is:
> > > > BULK INSERT the file to a work table
> > > >
> > > > CREATE TABLE BulkInsertWork
> > > > (RowID INT IDENTITY,
> > > > RowText NVARCHAR(1000))
> > > >
> > > > TRUNCATE TABLE BulkInsertWork
> > > >
> > > > BULK INSERT MyDB.dbo.BulkInsertWork FROM 'c:\MyMonthlyFile.txt' ...
> > > >
> > > > Then I write a stored procedure that parses out the specific rows,
> such
> > > as:
> > > >
> > > > INSERT INTO Table1 (ColumnA, ColumnB)
> > > > SELECT SUBSTRING(RowText,3,6), SUBSTRING(9,2)
> > > > FROM BulkInsertWork
> > > >
> > > > etc. for each type
> > > >
> > > > FWIW
> > > > Russell Fields
> > > > http:/www.sqlpass.org/events/seattle03
> > > > 2003 PASS Community Summit - Seattle
> > > > - The largest user-even dedicated to SQL Server! Register TODAY!
> > > >
> > > >
> > > >
> > > > "ben brugman" <ben@.niethier.nl> wrote in message
> > > > news:bob7k7$4dj$1@.reader11.wxs.nl...
> > > > > Hello group,
> > > > >
> > > > > Each month we want to import data from an ascii file,
> > > > > the records are of a fixed length (some spare space is added to
fill
> > > them
> > > > > up)
> > > > > there are 12 different types of records (variants)
> > > > > Each type has a different (but within the type always the same)
> build.
> > > > > Each record starts with a type indication. Then the records
follows.
> > > > > One type can have 2 fields the other type can have 15 fields.
> > > > >
> > > > > Records following eachother can have relations, so a type two
record
> > > > > belongs to the type 1 record before the type 2 record.
> > > > >
> > > > > Example with 3 types
> > > > >
> > > > > 1 AAAAAABB
> > > > > 2 CCCDDDDDDEEEEFFGGHH
> > > > > 2 CCCDDDDDDEEEEFFGGHH
> > > > > 3 JJJJJJKKKKKKKKKLLMMMMMMNOPPPPQQQQ
> > > > > 1 AAAAAABB
> > > > > 2 CCCDDDDDDEEEEFFGGHH
> > > > >
> > > > > The letters are indications for the different fields.
> > > > >
> > > > > Any suggestions how to solve this ?
> > > > >
> > > > > In our propriety system this was solved by reading the record
> > > > > into a pascal record with all twelf variants and then writing it
to
> > the
> > > > > propriety database. We could reuse the technique of the program,
> > > > > but can not reuse the program.
> > > > > And we still have to split up each record into fields.
> > > > > Our propriety database just accepted the 'binairy' data as is.
> > > > >
> > > > > ben brugman
> > > > >
> > > > >
> > > >
> > > >
> > >
> > >
> >
> >

I'll try that on Monday.
At the moment I have done away with the identity column.
I am not sure if I will need it, because it seems that the only order
dependency between the records is the dependency on the very first record,
which occurs only once.
I am using your method of
"SUBSTRING(RowText,3,6), SUBSTRING(RowText,9,2)"
but because there are 153 columns in total, I do not want to type this out,
so I am in the process of writing scripts to generate this code.
Up to now I have managed to get three tables done, but the script
still needs some hand editing for each table.
Using constructs like:

SELECT 'INSERT INTO ' + @tablename + ' ('
SELECT 'SUBSTRING(RowText,' + CONVERT(varchar(3), offset) + ','
    + CONVERT(varchar(3), length) + '),'
FROM metatable WHERE table_name = @tablename AND length <> 0
SELECT 'FROM BulkInsertWork'

Or something similar; then I cut and paste the output into another QA
window, clean it up, and execute it.
Some things still to be solved:
the extra comma behind the last SUBSTRING line, and
the extra lines in the generated output (the 'rows affected' counts and the
column headers).
So I might build the statement in a string variable and try to get the last
comma out.
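
Something like this, I think (just a sketch; it assumes the same metatable
layout as above and that @tablename is set beforehand):

DECLARE @sql varchar(8000)
SELECT @sql = 'INSERT INTO ' + @tablename + ' SELECT '
-- concatenate one SUBSTRING per field row in the metatable
SELECT @sql = @sql + 'SUBSTRING(RowText,'
    + CONVERT(varchar(3), offset) + ','
    + CONVERT(varchar(3), length) + '),'
FROM metatable
WHERE table_name = @tablename AND length <> 0
-- trim the trailing comma, then append the source table
SELECT @sql = LEFT(@sql, LEN(@sql) - 1) + ' FROM BulkInsertWork'
EXEC (@sql)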
I haven't decided yet whether I am going to deliver the end script, or
enough tools to build the script.
Still some problems to solve with referential constraints, and with spaces
which have to be converted to NULLs (after, or maybe during, inserting?).
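
For the spaces, maybe NULLIF during the insert, something like:

SELECT NULLIF(RTRIM(SUBSTRING(RowText,3,6)), '')
FROM BulkInsertWork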
Chopping the problems up and going from intermediate result to intermediate
result makes the code less complex.
And some problems with the content of columns coming from other lines.
(The first line contains a sort of number which should be included in
almost all records.)
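
Since that number is on the first line only, I could probably pick it up
once into a variable and reuse it in every INSERT (once the RowID column is
back; the offsets here are a guess):

DECLARE @batchno varchar(6)
SELECT @batchno = SUBSTRING(RowText, 3, 6)
FROM BulkInsertWork
WHERE RowID = (SELECT MIN(RowID) FROM BulkInsertWork)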
I will probably use temporary tables to resolve all the problems and then
copy the content to the production tables.
Thanks for your help.
ben brugman
"Russell Fields" <RussellFields@.NoMailPlease.Com> wrote in message
news:ucvlkTTpDHA.1632@.TK2MSFTNGP10.phx.gbl...
> Ben,
> Yes, I am sorry that I glossed this in my earlier quick explanation. (I
> thought about it during the night, don't you know.)
> You need a format file to tell bulk insert to skip over the Identity
column.
> Something like:
> 8.0
> 2
> 1 SQLCHAR 0 0 "" 1 ID SQL_Latin1_General_Cp437_BIN
> 2 SQLCHAR 0 1000 "\r\n" 2 Text SQL_Latin1_General_Cp437_BIN
>
> Russell Fields
> http://www.sqlpass.org/events/seattle03
> 2003 PASS Community Summit - Seattle
> - The largest user-event dedicated to SQL Server! Register TODAY!
> "ben brugman" <ben@.niethier.nl> wrote in message
> news:OiAr$FTpDHA.3844@.tk2msftngp13.phx.gbl...
> > After loads of trying, reading the KB file, still getting the same Error
> > message.
> >
> > In the end we deleted the identity column from "BulkInsertWork" and now
> the
> > insert works, but offcourse we have lost the order of the table.
> >
> > Also did try some DTS work, but DTS is not very flexible when it comes
to
> > variant records in a text file.
> >
> > Thanks for your time,
> > and if you have suggestions to bring the identity column back,
> > I would appreciate that.
> >
> > ben brugman
> >
> >
> > "Russell Fields" <RussellFields@.NoMailPlease.Com> wrote in message
> > news:uqOboZIpDHA.2404@.TK2MSFTNGP12.phx.gbl...
> > > Ben,
> > >
> > > A guess: The last row of the text file does not have the '\n'. If
that
> > is
> > > so, append the character.
> > >
> > > Less likely and more obscure: Are you on SQL Server 2000? If you are
> on
> > > 7.0, KB article 324122 has obscure problems that it is unlikely apply,
> > > but...
> > >
http://support.microsoft.com/default.aspx?scid=kb;en-us;324122#appliesto
> > > http://support.microsoft.com/default.aspx?scid=kb;EN-US;272292
> > > http://support.microsoft.com/default.aspx?scid=kb;EN-US;197043
> > >
> > > All error numbers that are in the range of 7300 to 7399 indicate a
> problem
> > > with the provider, in this case STREAM.
> > >
> > > Russell Fields
> > > http://www.sqlpass.org/events/seattle03
> > > 2003 PASS Community Summit - Seattle
> > > - The largest user-event dedicated to SQL Server! Register TODAY!
> > >
> > > "ben brugman" <ben@.niethier.nl> wrote in message
> > > news:eumzJHGpDHA.2188@.TK2MSFTNGP11.phx.gbl...
> > > > I immeadiatly went ahead with your suggested method.
> > > > I created a .txt file from the example in the mail.
> > > > I changed the nvarchar in the table to varchar.
> > > >
> > > > When inserting from the .txt file I get the following errors :
> > > >
> > > > Server: Msg 4832, Level 16, State 1, Line 1
> > > > Bulk Insert: Unexpected end-of-file (EOF) encountered in data file.
> > > > Server: Msg 7399, Level 16, State 1, Line 1
> > > > OLE DB provider 'STREAM' reported an error. The provider did not
give
> > any
> > > > information about the error.
> > > > The statement has been terminated.
> > > >
> > > >
> > > > Added :
> > > > WITH (
> > > > DATAFILETYPE = 'char',
> > > > ROWTERMINATOR = '\n'
> > > > )
> > > >
> > > > But stil the same errors.
> > > > (Lastrow and rowsperbatch did not bring a solution either).
> > > >
> > > > Please advise.
> > > >
> > > > Also played around a bit with DTS, but the problem is that I can do
an
> > > > import and then
> > > > a selection. Or that I can do a split up of the line into different
> > > fields.
> > > > But I can not do them both.
> > > > (Or I have to split up every line in every possible way and after
> > > selection
> > > > throwing away
> > > > the not desired results. Make a table for each type and insert every
> row
> > > in
> > > > every table and
> > > > then trowing away the types which do not belong in the table. I
think
> > this
> > > > is pretty ugly).
> > > >
> > > > Ben brugman
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > "Russell Fields" <RussellFields@.NoMailPlease.Com> wrote in message
> > > > news:uf4ZQm7oDHA.1072@.TK2MSFTNGP09.phx.gbl...
> > > > > Ben,
> > > > >
> > > > > You can reimplement your method of reading a record at a time and
> > > > inserting
> > > > > the row into the proper destination by using a DTS package and
doing
> > > some
> > > > > scripting for each row.
> > > > >
> > > > > How I tend to do things like this is:
> > > > > BULK INSERT the file to a work table
> > > > >
> > > > > CREATE TABLE BulkInsertWork
> > > > > (RowID INT IDENTITY,
> > > > > RowText NVARCHAR(1000))
> > > > >
> > > > > TRUNCATE TABLE BulkInsertWork
> > > > >
> > > > > BULK INSERT MyDB.dbo.BulkInsertWork FROM 'c:\MyMonthlyFile.txt'
...
> > > > >
> > > > > Then I write a stored procedure that parses out the specific rows,
> > such
> > > > as:
> > > > >
> > > > > INSERT INTO Table1 (ColumnA, ColumnB)
> > > > > SELECT SUBSTRING(RowText,3,6), SUBSTRING(9,2)
> > > > > FROM BulkInsertWork
> > > > >
> > > > > etc. for each type
> > > > >
> > > > > FWIW
> > > > > Russell Fields
> > > > > http:/www.sqlpass.org/events/seattle03
> > > > > 2003 PASS Community Summit - Seattle
> > > > > - The largest user-even dedicated to SQL Server! Register TODAY!
> > > > >
> > > > >
> > > > >
> > > > > "ben brugman" <ben@.niethier.nl> wrote in message
> > > > > news:bob7k7$4dj$1@.reader11.wxs.nl...
> > > > > > Hello group,
> > > > > >
> > > > > > Each month we want to import data from an ascii file,
> > > > > > the records are of a fixed length (some spare space is added to
> fill
> > > > them
> > > > > > up)
> > > > > > there are 12 different types of records (variants)
> > > > > > Each type has a different (but within the type always the same)
> > build.
> > > > > > Each record starts with a type indication. Then the records
> follows.
> > > > > > One type can have 2 fields the other type can have 15 fields.
> > > > > >
> > > > > > Records following eachother can have relations, so a type two
> record
> > > > > > belongs to the type 1 record before the type 2 record.
> > > > > >
> > > > > > Example with 3 types
> > > > > >
> > > > > > 1 AAAAAABB
> > > > > > 2 CCCDDDDDDEEEEFFGGHH
> > > > > > 2 CCCDDDDDDEEEEFFGGHH
> > > > > > 3 JJJJJJKKKKKKKKKLLMMMMMMNOPPPPQQQQ
> > > > > > 1 AAAAAABB
> > > > > > 2 CCCDDDDDDEEEEFFGGHH
> > > > > >
> > > > > > The letters are indications for the different fields.
> > > > > >
> > > > > > Any suggestions how to solve this ?
> > > > > >
> > > > > > In our propriety system this was solved by reading the record
> > > > > > into a pascal record with all twelf variants and then writing it
> to
> > > the
> > > > > > propriety database. We could reuse the technique of the program,
> > > > > > but can not reuse the program.
> > > > > > And we still have to split up each record into fields.
> > > > > > Our propriety database just accepted the 'binairy' data as is.
> > > > > >
> > > > > > ben brugman
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > >
> > > >
> > >
> > >
> >
> >
>
