Friday, March 30, 2012
Import script
I need to import some data from one db on one server to another db on another
server. I can do it with the Import/Export Wizard, but I need to do it by script
at deployment time. Do you know how to do that?
Thanks,
HU|||which database server are you using? 2000 or 2005|||2000
"Ajit - The Scorpio" <ajitscorpio@.gmail.com> wrote in message
news:1176215374.790446.12340@.p77g2000hsh.googlegroups.com...
> which database server are you using? 2000 or 2005
>|||Hello,
Vyas has some good utility procedures to generate INSERT statements; it's
all pretty handy.
http://vyaskn.tripod.com/code.htm#inserts
Thanks
Hari
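For reference, another scriptable route is an INSERT...SELECT across a linked server; a minimal sketch, where SOURCESRV is a hypothetical linked server (set up via sp_addlinkedserver) and the table names are placeholders:
INSERT INTO TargetDb.dbo.Customers (CustomerId, CustomerName)
SELECT CustomerId, CustomerName
FROM SOURCESRV.SourceDb.dbo.Customers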
"italic" <hugur@.hotmail.com> wrote in message
news:%23eFv013eHHA.4136@.TK2MSFTNGP02.phx.gbl...
> 2000
> "Ajit - The Scorpio" <ajitscorpio@.gmail.com> wrote in message
> news:1176215374.790446.12340@.p77g2000hsh.googlegroups.com...
>
Import schema into SqlExpress
Hi all,
I've got a PostgreSQL schema; how can I import it into SqlExpress?
What's the best tool for administering Sql Express?
Thanks,
Lorenzo|||Hi
I am not a Postgres expert, but if there is a way to generate the DDL
(CREATE TABLE scripts) you could use that and modify any syntax that is not
SQLExpress compliant. Alternatively you may be able to reverse engineer the
database with a CASE/design tool such as Visio/Erwin/ER_studio etc... and
use that.
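For what it's worth, PostgreSQL's own pg_dump tool can emit schema-only DDL to start from; a minimal sketch, assuming a database named mydb:
pg_dump --schema-only mydb > schema.sql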
For specific questions on SQL Express you should ask in the express
newsgroup
http://communities.microsoft.com/ne...ver2005.express
John
<lbolognini@.gmail.com> wrote in message
news:1117710946.423920.164950@.g44g2000cwa.googlegr oups.com...
> Hi all,
> I've got a PostgreSQL schema, how can I import it in SqlExpress?
> What's the best tool for administering Sql Express.
> Thanks,
> Lorenzo|||John Bell wrote:
> Hi
> I am not a Postgres expert, but if there is a way to generate the DDL
Hi John,
Yeah, I got that. I found out there are some command-line tools that
should do the job just fine; then I could maintain it all right from
Visual Web Developer.
> For specific questions on SQL Express you should ask in the express
> newsgroup
Thanks, didn't know about that, I'll post there from now on
Lorenzo
import rpt file to SQL Server 2005
Hi,
I know how to import the .rpt file into an Access database using import.
Can somebody tell me how to import it into a SQL Server 2005 database, with both a front-end (using some utility or interface) and a back-end (directly accessing the server) solution?
Thanks,
Fahim.
hi,
I'm not sure I understood your question...
.rpt could stand for a "report file", but SQL Server has nothing to do with client issues like reports.. so, the question is, where do you want to import those .rpt files? Do you want to store them in a database table? And if so, what do you want to do with those files, as SQL Server cannot use them to render reports?..
a "branch" of SQL Server, available in the SQLExpress edition as well, is devoted to reporting, using Reporting Services, an additional "engine" responsible for that purpose..
regards
|||In addition to Andrewa's reply: if you want to convert your rpt file into a report in Reporting Services you will have to do manual work, as the queries within the report will have to be transferred to SQL Server syntax. In addition, the migration assistant for reports won't transfer the underlying tables to SQL Server; you will have to do that on your own.
Jens K. Suessmeyer
http://www.sqlserver2005.de
|||Thanks for your reply,
I am currently doing it like this:
-- take a comma-delimited file.
-- convert it into an rpt file using Excel
-- manually insert it into an Access database table.
Now I want to upgrade this process to
-- SQL Server 2005 (I know how to upgrade the database to this engine)
-- insert the rpt file into a SQL table (from the front end).
I hope my question is clear.
Thanks for your help.
|||hi,
FahimatMicrosoftForum wrote:
Thanks for your reply,
I am currently doing it like this:
-- take a comma-delimited file.
-- convert it into an rpt file using Excel
-- manually insert it into an Access database table.
Now I want to upgrade this process to
-- SQL Server 2005 (I know how to upgrade the database to this engine)
-- insert the rpt file into a SQL table (from the front end),
if you want to insert "files" into a database table, you can perhaps bulk insert them... or you can handle the task via client code, reading the files into memory and passing the resulting byte array to the Transact-SQL INSERT INTO command..
i hope my question is clear.
absolutely not... at least not to me
regards
|||
FahimatMicrosoftForum wrote:
Hi,
I know how to import the .rpt file into an Access database using import.
Can somebody tell me how to import it into a SQL Server 2005 database, with both a front-end (using some utility or interface) and a back-end (directly accessing the server) solution?
Thanks,
Fahim.
Hi,
I am also looking for this solution. I am importing Excel, txt and pdf files to SQL 2005. Have you found the solution?
|||Search Books Online for the bcp utility, it should allow you to import various types of files.
Mike
|||Excel and text files are supported by bcp; bcp does not support importing pdf files, as their file structure is not bulk-importable.
Jens K. Suessmeyer
http://www.sqlserver2005.de
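To make the bcp suggestion concrete, a minimal sketch for loading a comma-delimited text file with a trusted connection; the database, table and file names here are hypothetical:
bcp MyDb.dbo.ImportTable in C:\data.rpt -c -t, -S MYSERVER -T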
Import RIS format to SQL Server?
Hello!
I have data exported from Reference Manager 11, and need to import it
into a SQL database.
Each record has a different number of fields. It is used to cite journal
articles.
(more about the format at
http://www.adeptscience.co.uk/kb/article/A626)
The format is very strange:
TY - RPRT
A1 - Esparza,J.
T1 - Report of a WHO workshop on the measurement and significance of
neutralizing antibody to HIV and SIV, London, 3-5 October 1988
Y1 - 1990
VL - 4
SP - 269
EP - 275
RP - Not In File
CY - San Francisco CA
PB - UC Berkeley
KW - HIV
KW - SIV
KW - AIDS
T3 - World Health Organisation Global Programme on AIDS
ER -
TY - CHAP
A1 - Franks,L.M.
T1 - Preface by an AIDS Victim
Y1 - 1991
VL - 10
SP - vii
EP - viii
RP - Not In File
T2 - Cancer, HIV and AIDS.
CY - Berkeley CA
PB - Berkeley Press
KW - HIV
KW - AIDS
M1 - 1
M2 - 1
SN - 0-679-40110-5
ER -
TY - CASE
A1 - Cary,A.
A1 - Friedenrich,W.
T1 - Redman v. State of California
Y1 - 1988/10/7
VL - 201
IS - 32
SP - 220
EP - 240
RP - Not In File
CY - ATLA Law Reporter
PB - San Diego County 45th Judicial District, California
KW - AIDS
KW - litigation
KW - AIDS litigation
KW - rape
U1 - ISSN 0456-8125
N1 - Raped inmate can press case against officials for contracting
AIDS
ER -
It looks like some of the columns are separated by CR but are part of the
same column. For instance 'KW' is seen multiple times per record but
should be one field called 'KW'.
Any idea how I would import this to SQL Server with DTS?|||"Tmuld" <tmuldoon@.spliced.com> wrote in message
news:1113921439.658252.91360@.g14g2000cwa.googlegro ups.com...
> Hello!
> I have data exported from a Reference Manager 11, and need to import it
> into and SQL database.
> Each record has different number of fields. It is used to cite journal
> articles.
> (more about the format at
> http://www.adeptscience.co.uk/kb/article/A626)
> The format is very strange:
<snip
> It looks like some of the columns are separated by CR but are part of the
> same column. For instance 'KW' is seen multiple times per record but
> should be one field called 'KW'.
> Any idea how I would import this to SQL Server with the DTS?
The format looks rather awkward to handle in DTS transformations, so you'll
probably need some other solution. One would be to load the data into a
staging table, then use stored procedures to clean it up and INSERT it into
the destination tables. A better option is probably to parse the data
outside the database, using C#/Perl or whatever your preferred language is,
then load it. You might be able to transform it into XML, for example, then
use OPENXML or SQLXML Bulk Load to import it:
http://www.sqlxml.org/faqs.aspx?faq=13
Whatever solution you decide on, you could still use a DTS package to manage
all the steps, whether they are stored procedures or external programs.
Simon|||I think that your best bet would be to loop through the file in a VB
application which handles the logic. Usually, files that hold data in a
"record" format have the columns going across the file and rows going
down. They also usually have a set number of columns. While this format
isn't unheard of (I've worked with similar formats in the past) most of
the standard tools for importing files (BCP, DTS, BULK INSERT) are not
designed to deal with it. BCP might be able to get around it if you had
a fixed number of columns, but the variable number of columns would
present a problem there.
If you have no skill in a programming language like VB and you don't
have access to a resource who has those skills then you could BCP the
file into a staging table with an IDENTITY column. Make sure that the
ordering of the records in the file is preserved in the rows of the
table. From there you can loop through the rows sequentially with a
cursor. This is one of those VERY rare instances where I think a cursor
is called for. As I said though, I think you're better off using VB or
something similar if you can.
By looping through the records you can create a new row in your base
table every time that you hit a new record tag in the file then you can
create rows in children tables for each repeatable column type.
Good luck,
-Tom.
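A minimal sketch of Tom's staging-table approach, assuming the file has been bulk-loaded one line per row and that every record opens with a TY tag; all object names here are hypothetical:
CREATE TABLE RisStaging
(
RowId INT IDENTITY(1,1) PRIMARY KEY, -- preserves file order
Line VARCHAR(1000) NULL
)
-- BULK INSERT RisStaging FROM 'C:\refs.ris' WITH (ROWTERMINATOR = '\n')
-- Tag each line with a record number: a running count of 'TY  - ' lines
-- up to the current row groups every line under its record.
SELECT s.RowId,
LEFT(s.Line, 2) AS Tag,
SUBSTRING(s.Line, 7, 994) AS Value,
(SELECT COUNT(*) FROM RisStaging t
WHERE t.RowId <= s.RowId AND t.Line LIKE 'TY  -%') AS RecordId
FROM RisStaging s
WHERE LEN(s.Line) >= 6
-- From here, single-valued tags (TY, T1, Y1, ...) pivot into the base table
-- with MAX(CASE WHEN Tag = 'T1' THEN Value END) ... GROUP BY RecordId, and
-- repeatable tags such as KW go to a child table keyed on RecordId.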
Import question with imbedded tags
Hello,
I'm having a heck of a time trying to make this work, and not being an
expert on XML I'm not even sure what to look for. Can someone point me
in the right direction to import an XML file that contains this:
--snip--
<client>
<Updated>7/1/2006</Updated>
<Business Number="11223">
<Name>Smith Glass</Name>
<Added>4/2/2005</Added>
<PrimaryContact>
<Name>John Smith</Name>
<Phone>301-222-4433</Phone>
</PrimaryContact>
<SecondaryContact>
<Name>Jack Jones</Name>
<Phone>301-222-4322</Phone>
</SecondaryContact>
</Business>
<Business Number="44332">
<Name>Anderson Drywall</Name>
<Added>5/1/2005</Added>
<PrimaryContact>
<Name>Mike Anderson</Name>
<Phone>301-223-6689</Phone>
</PrimaryContact>
<SecondaryContact>
<Name>Nancy Taylor</Name>
<Phone>301-542-6643</Phone>
</SecondaryContact>
</Business>
</client>
--snip--
Into a SQL table like this:
ID Name Added Person1 Person2
11223 Smith Glass 4/2/2005 John Smith Jack Jones
44332 Anderson Drywall 5/1/2005 Mike Anderson Nancy Taylor
The spaces probably skewed a little, but hopefully it comes over okay.
Thanks :)
Alex|||You can use the nodes() function with CROSS APPLY to shred your XML. Here
is an example cut down from your example:
Shred the input file into this table:
CREATE TABLE ClientExm
(
ID INT,
Name NVARCHAR(100),
Added DateTime,
Person1 NVARCHAR(100),
Person2 NVARCHAR(100)
)
INSERT INTO ClientExm
SELECT
X.C.value('@Number', 'INT'),
X.C.value('name[1]', 'NVARCHAR(100)'),
X.C.value('added[1]', 'DATETIME'),
X.C.value('primaryContact[1]/name[1]', 'NVARCHAR(100)'),
X.C.value('secondaryContact[1]/name[1]', 'NVARCHAR(100)')
FROM (SELECT CAST(BulkColumn AS XML) AS B FROM OPENROWSET(
BULK 'C:\Clientexm.xml', SINGLE_BLOB) AS X) AS S
CROSS APPLY
S.B.nodes('/client/business') AS X(C)
SELECT * from ClientExm
ID     Name               Added                     Person1         Person2
-----  -----------------  ------------------------  --------------  --------------
11223  Smith Glass        2005-04-02 00:00:00.000   John Smith      Jack Jones
44332  Anderson Drywall   2005-05-01 00:00:00.000   Mike Anderson   Nancy Taylor
11223  Smith Glass        2005-04-02 00:00:00.000   John Smith      Jack Jones
44332  Anderson Drywall   2005-05-01 00:00:00.000   Mike Anderson   Nancy Taylor
the file c:\clientexm.xml contains
<client>
<updated>7/1/2006</updated>
<business Number="11223">
<name>Smith Glass</name>
<added>4/2/2005</added>
<primaryContact>
<name>John Smith</name>
<phone>301-222-4433</phone>
</primaryContact>
<secondaryContact>
<name>Jack Jones</name>
<phone>301-222-4322</phone>
</secondaryContact>
</business>
<business Number="44332">
<name>Anderson Drywall</name>
<added>5/1/2005</added>
<primaryContact>
<name>Mike Anderson</name>
<phone>301-223-6689</phone>
</primaryContact>
<secondaryContact>
<name>Nancy Taylor</name>
<phone>301-542-6643</phone>
</secondaryContact>
</business>
<business Number="11223">
<name>Smith Glass</name>
<added>4/2/2005</added>
<primaryContact>
<name>John Smith</name>
<phone>301-222-4433</phone>
</primaryContact>
<secondaryContact>
<name>Jack Jones</name>
<phone>301-222-4322</phone>
</secondaryContact>
</business>
<business Number="44332">
<name>Anderson Drywall</name>
<added>5/1/2005</added>
<primaryContact>
<name>Mike Anderson</name>
<phone>301-223-6689</phone>
</primaryContact>
<secondaryContact>
<name>Nancy Taylor</name>
<phone>301-542-6643</phone>
</secondaryContact>
</business>
</client>
Dan
> Hello,
> I'm having a heck of a time trying to make this work, and not being an
> expert on XML I'm not even sure what to look for. Can someone point
> me in the right direction to import an XML file that contains this:
> --snip--
> <client>
> <Updated>7/1/2006</Updated>
> <Business Number="11223">
> <Name>Smith Glass</Name>
> <Added>4/2/2005</Added>
> <PrimaryContact>
> <Name>John Smith</Name>
> <Phone>301-222-4433</Phone>
> </PrimaryContact>
> <SecondaryContact>
> <Name>Jack Jones</Name>
> <Phone>301-222-4322</Phone>
> </SecondaryContact>
> </Business>
> <Business Number="44332">
> <Name>Anderson Drywall</Name>
> <Added>5/1/2005</Added>
> <PrimaryContact>
> <Name>Mike Anderson</Name>
> <Phone>301-223-6689</Phone>
> </PrimaryContact>
> <SecondaryContact>
> <Name>Nancy Taylor</Name>
> <Phone>301-542-6643</Phone>
> </SecondaryContact>
> </Business>
> </client>
> --snip--
> Into a SQL table like this:
> ID Name Added Person1 Person2
> 11223 Smith Glass 4/2/2005 John Smith Jack Jones
> 44332 Anderson Drywall 5/1/2005 Mike Anderson Nancy Taylor
> The spaces probably skewed alittle, but hopefully it comes over okay.
> Thanks :)
> Alex
>|||Dan wrote:
> You can use the nodes() function with CROSS APPLY to shred your xml. Here
> is a example cut down from your example:
> shred the input file into this table
> CREATE TABLE ClientExm
> (
> ID INT,
> Name NVARCHAR(100),
> Added DateTime,
> Person1 NVARCHAR(100),
> Person2 NVARCHAR(100)
> )
>
> INSERT INTO ClientExm
> SELECT
> X.C.value('@Number', 'INT'),
> X.C.value('name[1]', 'NVARCHAR(100)'),
> X.C.value('added[1]', 'DATETIME'),
> X.C.value('primaryContact[1]/name[1]', 'NVARCHAR(100)'),
> X.C.value('secondaryContact[1]/name[1]', 'NVARCHAR(100)')
> FROM (SELECT CAST(BulkColumn AS XML) AS B FROM OPENROWSET(
> BULK 'C:\Clientexm.xml', SINGLE_BLOB) AS X) AS S
> CROSS APPLY
> S.B.nodes('/client/business') AS X(C)
>
> SELECT * from ClientExm
> ID     Name               Added                     Person1         Person2
> -----  -----------------  ------------------------  --------------  --------------
> 11223  Smith Glass        2005-04-02 00:00:00.000   John Smith      Jack Jones
> 44332  Anderson Drywall   2005-05-01 00:00:00.000   Mike Anderson   Nancy Taylor
> 11223  Smith Glass        2005-04-02 00:00:00.000   John Smith      Jack Jones
> 44332  Anderson Drywall   2005-05-01 00:00:00.000   Mike Anderson   Nancy Taylor
>
Hi Dan,
Too awesome! Thanks for putting this together for me, but I guess I
should've told you I'm using MS SQL 2000... when I try to run this in
query analyzer it gives a syntax error with the BULK command. Hmmm...
if this only works on SQL 2005 then I'll hold onto the snippet for when
we upgrade, which no telling when that'll be.
Any suggestions for running this on SQL 2000? If not, I'm back to my
digging :) Thanks,
Alex|||You may be able to do what you need with OPENXML. Below is an example but
look up OPENXML and sp_xml_preparedocument in the BOL for more details.
Also look up SQLXML and annotated schemas for another way to accomplish this.
DECLARE @hdoc INT
EXEC sp_xml_preparedocument @hdoc OUTPUT,
'<client>
<updated>7/1/2006</updated>
<business Number="11223">
<name>Smith Glass</name>
<added>4/2/2005</added>
<primaryContact>
<name>John Smith</name>
<phone>301-222-4433</phone>
</primaryContact>
<secondaryContact>
<name>Jack Jones</name>
<phone>301-222-4322</phone>
</secondaryContact>
</business>
<business Number="44332">
<name>Anderson Drywall</name>
<added>5/1/2005</added>
<primaryContact>
<name>Mike Anderson</name>
<phone>301-223-6689</phone>
</primaryContact>
<secondaryContact>
<name>Nancy Taylor</name>
<phone>301-542-6643</phone>
</secondaryContact>
</business>
<business Number="11223">
<name>Smith Glass</name>
<added>4/2/2005</added>
<primaryContact>
<name>John Smith</name>
<phone>301-222-4433</phone>
</primaryContact>
<secondaryContact>
<name>Jack Jones</name>
<phone>301-222-4322</phone>
</secondaryContact>
</business>
<business Number="44332">
<name>Anderson Drywall</name>
<added>5/1/2005</added>
<primaryContact>
<name>Mike Anderson</name>
<phone>301-223-6689</phone>
</primaryContact>
<secondaryContact>
<name>Nancy Taylor</name>
<phone>301-542-6643</phone>
</secondaryContact>
</business>
</client>'
INSERT INTO ClientExm
SELECT *
FROM OPENXML(@hdoc, '/client/business', 2)
WITH
([Number] INT '@Number',
[name] NVARCHAR(100) 'name',
[added] DATETIME 'added',
[primaryContact] NVARCHAR(100) 'primaryContact/name',
[secondaryContact] NVARCHAR(100) 'secondaryContact/name'
)
EXEC sp_xml_removedocument @hdoc
Dan
> Dan wrote:
>
> Hi Dan,
> Too awesome! Thanks for putting this together for me, but I guess I
> should've told you I'm using MS SQL 2000... when I try to run this in
> query analyzer it gives a syntax error with the BULK command. Hmmm...
> if this only works on SQL2000 then I'll hold onto the snippet for when
> we upgrade, which no telling when that'll be.
> Any suggestions for running this on SQL 2000? If not, i'm back to my
> digging :) Thanks,
> Alex
>
import question
Column A contains names and Column B contains zips.
The problem is Column A is not consistent in its structure. Some cells contain only one name while others contain two names, last names only, separated by a space or a slash.
For example
BOWERING WILKENSON
GEERS/DOODS
I'm trying to clean the data before importing it into a SQL table. I want to separate the records in column A where there are two names and move the second name to a new row. Is there a method to do this during the import, or within Excel prior to the import using replace? I want to find instances where there is a space or slash and move the name after it to a new row.
Thanks,
-D-|||I think in Excel you can do a Text to Columns and put "/" or space as the delimiter.|||Instead of cleaning up beforehand, just import the first column into a temporary table and clean it up using SUBSTRING and CHARINDEX, as sketched below.
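A minimal sketch of that staging-table clean-up, assuming a hypothetical staging table NameStaging(FullName, Zip) and at most two names per cell:
-- Normalize the slash to a space, then emit one row per name.
SELECT LEFT(n, CHARINDEX(' ', n + ' ') - 1) AS Name, Zip
FROM (SELECT REPLACE(FullName, '/', ' ') AS n, Zip FROM NameStaging) AS s
UNION ALL
SELECT SUBSTRING(n, CHARINDEX(' ', n) + 1, 100), Zip
FROM (SELECT REPLACE(FullName, '/', ' ') AS n, Zip FROM NameStaging) AS s
WHERE CHARINDEX(' ', n) > 0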
Import Question
I have a process that imports data from an Access database into a SQL Server
database.
The problem is that when I run this process it appends the Access details to
the SQL Server table when I want to replace the SQL Server data.
Can anyone tell me how to get around this. I'm pretty new to this sort of
thing.
TIA|||Poppy,
You can make use of DTS (SQL Server), which has an option to "drop existing
table" or "replace existing data". For instance, if you open the DTS
Import/Export Wizard, choose the source and destination, one of which will be
Access. On the next screen select "copy tables/views from source database".
On the next screen select the required tables to transfer and click on the
"Transform" button; you will get the relevant options here to transfer the
data. The one relevant to you will be "delete rows in destination table".
- Vishal|||Thanks
"Vishal Parkar" <_vgparkar@.yahoo.co.in> wrote in message
news:#mdDhckmDHA.2200@.TK2MSFTNGP12.phx.gbl...
> Poppy,
> You can make use of DTS (SQL Server), which has an option to "drop existing
> table" or "replace existing data". For instance, if you open the DTS
> Import/Export Wizard, choose the source and destination, one of which will be
> Access. On the next screen select "copy tables/views from source database".
> On the next screen select the required tables to transfer and click on the
> "Transform" button; you will get the relevant options here to transfer the
> data. The one relevant to you will be "delete rows in destination table"
>
> --
> - Vishal
>
>
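Done by hand, the equivalent of that wizard option is simply a delete step before the load; a one-line sketch, table name hypothetical:
DELETE FROM dbo.ImportTarget -- or TRUNCATE TABLE dbo.ImportTarget when no foreign keys reference it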
Import queries
I am copying data from an Excel workbook/worksheet to a SQL Server 2005
data-mart database table (not previously existing) using the SQL Server 2005
Import/Export Wizard. I would like to do it using an import query that
recasts the data types, as opposed to copying the Excel data and letting the
Wizard set the data types. Below is a small sample of the code I'm trying to
use. I always get an error message (also below). What do I do to make this
work?
select
cast(REPORT_NAME as char(10)) as [Report_Name],
cast(SYSTEM as char(10)) as [System],
cast(PRIN as char(10)) as [Prin],
cast(AGENT as char(10)) as [Agent],
cast(BANK_NAME as char(50)) as [Bank_Name],
Report_Date as [Report_Date]
TITLE: SQL Server Import and Export Wizard
--
The statement could not be parsed.
ADDITIONAL INFORMATION:
IErrorInfo.GetDescription failed with E_FAIL(0x80004005). (System.Data)|||You will get this error if you use a keyword or a reserved word as the table
or column name you are using.
Check it against the list of keywords. You may want to try this link..
http://msdn.microsoft.com/library/d...getkeywords.asp
or try googling to get the latest list.
Hope this helps.|||No reserved words were column headings or table names. It still doesn't
work.
"Omnibuzz" wrote:
> You will get this error if you use a keyword or a reserved word as the table
> or column name you are using.
> Check it out with the list of keyowrds. You may want to try this link..
> http://msdn.microsoft.com/library/d...getkeywords.asp
> or try googling to get the latest list.
> Hope this helps.|||Is all the data within the ranges specified?
If importing to char(10), does all the data fit (no more than 10 chars)?
"Mitch" wrote:
> No reserved words were column headings or table names. It still doesn't
> work.
> "Omnibuzz" wrote:
>|||Yes... I have been very careful with the cast statements.
"MattB" wrote:
> Is all the data within the ranges specified.
> if importing to char(10) does all data fit (only have 10 chars)?
> "Mitch" wrote:
|||I discovered an easy solution to this problem. In the Import/Export Wizard,
do not use a query to transfer the table. The wizard brings you to a point
where you can click on the EDIT button for the sheet or table selected. This
brings up a form that allows you to recast each column. It works beautifully!
"Mitch" wrote:
> I am copying data from an Excel workbook/worksheet to a SQL Server 2005
> data-mart database table (not previously existing) using the SQL Server 2005
> Import/Export Wizard. I would like to do it using an import query that
> recasts the data types, as opposed to copying the Excel data and letting the
> Wizard set the data types. Below is a small sample of the code I'm trying to
> use. I always get an error message (also below). What do I do to make this
> work?
> select
> cast(REPORT_NAME as char(10)) as [Report_Name],
> cast(SYSTEM as char(10)) as [System],
> cast(PRIN as char(10)) as [Prin],
> cast(AGENT as char(10)) as [Agent],
> cast(BANK_NAME as char(50)) as [Bank_Name],
> Report_Date as [Report_Date]
> TITLE: SQL Server Import and Export Wizard
> --
> The statement could not be parsed.
> --
> ADDITIONAL INFORMATION:
> IErrorInfo.GetDescription failed with E_FAIL(0x80004005). (System.Data)
>
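One thing worth noting about the original query: as posted it has no FROM clause, and an Excel source query normally needs one naming the worksheet. A sketch, assuming the sheet is called Sheet1:
select
cast(REPORT_NAME as char(10)) as [Report_Name],
cast(SYSTEM as char(10)) as [System]
from [Sheet1$]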
Import Puzzle
Hello -
I have three feeds from sources around the world, each coming in at a
separate time.
These feeds move into a large table (3GB) that is queried by managers.
The feeds are loaded sequentially, and then the previous day's feed
rows are deleted from the table (this is done so that the user's
application is never without data).
The issue is that the import takes a lot of time, as do the deletes.
This is hurting performance significantly. I attempted to fix the
problem by creating separate tables for each feed. I then created a
view with the original table's name and used UNION ALL's. My intention
was that as each feed came in, I'd alter the view with the new table's
name, and then truncate the older table. This met both goals of
concurrency and import/delete speed.
Unfortunately, this view seems to ignore the indexes on the underlying
tables, which devastates performance. I can't index the view, since
altering it makes the index less useful.
I'm looking for a different strategy for loading and deleting the
data, all without disruption to the applications. I'd appreciate any
suggestions...woodyb@.hotmail.com (Buck Woody) wrote in message news:<d4e6e94d.0311250951.76030a77@.posting.google.com>...
> Hello -
> I have three feeds from sources around the world, each coming in at a
> separate time.
> These feeds move into a large table (3GB) that is queried by managers.
> The feeds are loaded sequentially, and then the previous day's feed
> rows are deleted from the table (this is done so that the user's
> application is never without data).
? delete so you never be w/o data ?
> The issue is that the import takes a lot of time, as do the deletes.
> These is hurting performance significantly. I attempted to fix the
This is probably due to your managers querying the data and applying
shared read locks -- while you're trying to insert/delete.
> problem by creating separate tables for each feed. I then created a
> view with the original table's name and used UNION ALL's. My intention
> was that as each feed came in, I'd alter the view with the new table's
> name, and then truncate the older table. This met both goals of
> concurrency and import/delete speed.
Did you put NOLOCK on the view?
> Unfortunately, this view seems to ignore the indexes on the underlying
> tables, which devastates performance. I can't index the view, since
> altering it makes the index less useful.
Which version of MSSQLSERVER are you running? Only 2000 has indexed
views.
> I'm looking for a different strategy for loading and deleting the
> data, all without disruption to the applications. I'd appreciate any
> suggestions...
Ideally you would have 2 separate tables: "real data" and a nice
pretty one for your managers. But if you don't have space etc...,
make sure you put NOLOCKS or "set transaction isolation level read
uncommitted" on every stored procedure and view your managers use.
Hey don't sweat the small stuff. Everything is small stuff. Good
luck.
-- Louis|||"Buck Woody" <woodyb@.hotmail.com> wrote in message
news:d4e6e94d.0311250951.76030a77@.posting.google.c om...
> Hello -
> I have three feeds from sources around the world, each coming in at a
> separate time.
> These feeds move into a large table (3GB) that is queried by managers.
> The feeds are loaded sequentially, and then the previous day's feed
> rows are deleted from the table (this is done so that the user's
> application is never without data).
> The issue is that the import takes a lot of time, as do the deletes.
> These is hurting performance significantly. I attempted to fix the
> problem by creating separate tables for each feed. I then created a
> view with the original table's name and used UNION ALL's. My intention
> was that as each feed came in, I'd alter the view with the new table's
> name, and then truncate the older table. This met both goals of
> concurrency and import/delete speed.
> Unfortunately, this view seems to ignore the indexes on the underlying
> tables, which devastates performance. I can't index the view, since
> altering it makes the index less useful.
> I'm looking for a different strategy for loading and deleting the
> data, all without disruption to the applications. I'd appreciate any
> suggestions...
The two parts that often slow down inserts, updates and deletes are the
indexes and the transaction log.
For transactions: if data protection is not required, since you can just
import the feeds again and they are useless after one day, then maybe you
could use a # temp table. tempdb is more RAM-based, so it works much
faster and incurs less transaction logging.
For indexes, why not drop the indexes before you do the import and delete,
and put them back later? This is much faster than having the (b-tree)
indexes do their balancing tricks after each row is added.
If you can't drop the indexes then consider a low fill factor on the indexes
so they have plenty of room to grow without rebalancing.|||Quick thought off the top of my head.
Generate each table (x3). Create a view of these and generate a new
table from this to combine the data. Then create a final table for
your guys to work with. You can fully control the indexes that is on
the final table. You can mess about with the data in each of the three
sets. You will see the performance issues when creating the combined
table, but the users won't. The move from that into the one to be
reported on should be reasonably OK in comparison as you are simply
moving bulk data, not combining three sets and moving them at the same
time.
HTH
louisducnguyen@.hotmail.com (louis nguyen) wrote in message news:<b0e9d53.0311251501.3e74e293@.posting.google.com>...
> woodyb@.hotmail.com (Buck Woody) wrote in message news:<d4e6e94d.0311250951.76030a77@.posting.google.com>...
> > Hello -
> > I have three feeds from sources around the world, each coming in at a
> > separate time.
> > These feeds move into a large table (3GB) that is queried by managers.
> > The feeds are loaded sequentially, and then the previous day's feed
> > rows are deleted from the table (this is done so that the user's
> > application is never without data).
> ? delete so you never be w/o data ?
> > The issue is that the import takes a lot of time, as do the deletes.
> > These is hurting performance significantly. I attempted to fix the
> This is probably due to your managers querying the data and applying
> shared read locks -- while you're trying to insert/delete.
> > problem by creating separate tables for each feed. I then created a
> > view with the original table's name and used UNION ALL's. My intention
> > was that as each feed came in, I'd alter the view with the new table's
> > name, and then truncate the older table. This met both goals of
> > concurrency and import/delete speed.
> Did you put NOLOCK on the view?
> > Unfortunately, this view seems to ignore the indexes on the underlying
> > tables, which devastates performance. I can't index the view, since
> > altering it makes the index less useful.
> Which version of MSSQLSERVER are you running? Only 2000 has indexed
> views.
> > I'm looking for a different strategy for loading and deleting the
> > data, all without disruption to the applications. I'd appreciate any
> > suggestions...
> Ideally you would have 2 separate tables: "real data" and a nice
> pretty one for your managers. But if you don't have space etc...,
> make sure you put NOLOCKS or "set transaction isolation level read
> uncommitted" on every stored procedure and view your managers use.
> Hey don't sweat the small stuff. Everything is small stuff. Good
> luck.
> -- Louis
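Pulling the replies together, a minimal sketch of the view-swap pattern with the NOLOCK hint Louis suggests; the table and view names here are hypothetical:
-- Load today's feed into the offline table, then repoint the view:
ALTER VIEW dbo.ManagerFeed
AS
SELECT * FROM dbo.Feed_B WITH (NOLOCK) -- readers switch without downtime
GO
TRUNCATE TABLE dbo.Feed_A -- yesterday's copy can now be cleared cheaply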
Import Procs
Hey guys,
I was curious how other people create several hundred SQL objects in a
new database. Example: I have each procedure in its own text file (i.e.
100 .sql files), so I just check them out of source control, then run
a Perl script that dumps them into one text file. Then I paste
that mess into Query Analyzer.
So to reiterate my question: how do other people get SQL objects
into a database? Obviously I'd do an object copy if they resided in
some other database.
While my solution works, there is always a better way. Thanks for
your suggestions.
- Cpt|||I don't know Perl but I assume you could execute OSQL from your script.
Another method is to invoke a Windows FOR command in a command prompt
window to execute OSQL. For example:
CD C:\SQLScripts
FOR %v in (*.sql) DO OSQL -i "%v" -o "%v.out" -E
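A variation worth knowing: osql's -b switch makes the run return a nonzero ERRORLEVEL when a SQL error occurs, so a deployment script can stop on the first failure; a sketch:
FOR %v in (*.sql) DO OSQL -E -b -i "%v" -o "%v.out"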
--
Hope this helps.
Dan Guzman
SQL Server MVP
--------
SQL FAQ links (courtesy Neil Pike):
http://www.ntfaq.com/Articles/Index...epartmentID=800
http://www.sqlserverfaq.com
http://www.mssqlserver.com/faq
--------
"CptVorpal" <cptvorpal@.hotmail.com> wrote in message
news:76cffbe0.0310081530.6c7a034b@.posting.google.c om...
> Hey guys,
> I was curious how other people create several 100 SQL objects on a
> new database. Example: I have each procedure in its own text file (IE
> 100 .sql files), so I just check them out of source control, then run
> a perl script that just dumps them into one txt file. Then I paste
> that mess into Query Analyzer.
> So to reiterate my question, how do other people get sql objects
> into a database. Obviously I'd do an object copy if they resided in
> some other database.
> While my solution works, there is always a better way. Thanks for
> your suggestions.
> - Cpt|||cptvorpal@.hotmail.com (CptVorpal) wrote in message news:<76cffbe0.0310081530.6c7a034b@.posting.google.com>...
> Hey guys,
> I was curious how other people create several 100 SQL objects on a
> new database. Example: I have each procedure in its own text file (IE
> 100 .sql files), so I just check them out of source control, then run
> a perl script that just dumps them into one txt file. Then I paste
> that mess into Query Analyzer.
> So to reiterate my question, how do other people get sql objects
> into a database. Obviously I'd do an object copy if they resided in
> some other database.
> While my solution works, there is always a better way. Thanks for
> your suggestions.
> - Cpt
You could start by looking at using OSQL.EXE to run your .sql scripts
from the command line - it should be straightforward to wrap that in a
script of some sort which gets the list of scripts, then executes them
one by one. You can trap the output from OSQL and use it for error
handling as well.
Simon|||[posted and mailed]
CptVorpal (cptvorpal@.hotmail.com) writes:
> I was curious how other people create several 100 SQL objects on a
> new database. Example: I have each procedure in its own text file (IE
> 100 .sql files), so I just check them out of source control, then run
> a perl script that just dumps them into one txt file. Then I paste
> that mess into Query Analyzer.
> So to reiterate my question, how do other people get sql objects
> into a database. Obviously I'd do an object copy if they resided in
> some other database.
> While my solution works, there is always a better way. Thanks for
> your suggestions.
The suggestion from Dan and Simon to use OSQL is a good one. I'll
add that you can invoke OSQL for each file. This could help to
track any errors.
For faster execution you could look into connecting to the database
from Perl using some interface. (There is a short overview on my
web site at: http://www.algonet.se/~sommar/mssql...ternatives.html.)
And if you want a ton of bells and whistles, you can look at
http://www.abaris.se/abaperls/. This is the load tool that we use
in our shop, and as the name indicates it's all Perl. You could
say that I started where you are now, and this is what I have seven
years later. :-)
--
Erland Sommarskog, SQL Server MVP, sommar@.algonet.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
Import problem with varchar(max) field
I'm trying to import some Assessor data from a text file into a table, and the field for Legal Description (column 2 in the source text file) in the table is of data type varchar(max) because some of the data goes over the 8K size. I get an error on the first row of the import that refers to column 2 (see 'Initial errors' below). I read the related post and changed the size of input column 2 to 8000 and got the error below. Finally I set the size of input column 2 to 4000 and it ran. So I'm thinking there is a limit on the size of varchar data that can be imported. I just want to clarify what that limit is and how I might go about importing this data.
Thanks, John
Error with input column 2 set to size of 8000:
Setting Destination Connection (Error)
Messages
Error 0xc0204016: DTS.Pipeline: The "output column "Column 2" (388)" has a length that is not valid. The length must be between 0 and 4000.
(SQL Server Import and Export Wizard)
Exception from HRESULT: 0xC0204016 (Microsoft.SqlServer.DTSPipelineWrap)
Initial errors:
Executing (Error)
Messages
Error 0xc02020a1: Data Flow Task: Data conversion failed. The data conversion for column "Column 2" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".
(SQL Server Import and Export Wizard)
Error 0xc020902a: Data Flow Task: The "output column "Column 2" (18)" failed because truncation occurred, and the truncation row disposition on "output column "Column 2" (18)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component.
(SQL Server Import and Export Wizard)
Error 0xc0202092: Data Flow Task: An error occurred while processing file "\\Scux00\assrdumps\SQLServerDB\exportsql.txt" on data row 1.
(SQL Server Import and Export Wizard)
Error 0xc0047038: Data Flow Task: The PrimeOutput method on component "Source - exportsql_txt" (1) returned error code 0xC0202092. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.
(SQL Server Import and Export Wizard)
Error 0xc0047021: Data Flow Task: Thread "SourceThread0" has exited with error code 0xC0047038.
(SQL Server Import and Export Wizard)
Error 0xc0047039: Data Flow Task: Thread "WorkThread0" received a shutdown signal and is terminating. The user requested a shutdown, or an error in another thread is causing the pipeline to shutdown.
(SQL Server Import and Export Wizard)
Error 0xc0047021: Data Flow Task: Thread "WorkThread0" has exited with error code 0xC0047039.
(SQL Server Import and Export Wizard)
I'm trying the SQL Server Import and Export Wizard. I've set the DataType on most of the fields to 'string [DT_STR]' and the Unicode box is unchecked, but when I look at the column mappings after clicking 'Edit Mappings', all the columns say they're type nvarchar on the screen and it doesn't appear to be something I can change. If I highlight a column on that screen it shows the setting that I set in the Advanced screen for the text file, i.e. Column 2 string [DT_STR] (4000). Since I'm getting the limit of 4K it must be because of this. I'm not sure how to modify it. I'll save the package as a file and open it up in BI Development Studio ... john
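For the record, the 4000 cap comes from the wizard's string column types; for a varchar(max) destination the usual workaround (an assumption worth verifying against your build) is to type the source column as a text stream on the Advanced screen instead:
Column 2: DataType = text stream [DT_TEXT] (for a varchar(max) target)
Column 2: DataType = Unicode text stream [DT_NTEXT] (for an nvarchar(max) target)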
Import Performance file Run Command
I have Windows 2000 Advanced Server with SQL Server 2000
Enterprise with SP3A. I'm trying to set up a job to import
Performance Monitor logs daily into a SQL Server table. How
would I set up Performance Monitor to stop at 7 AM, close
the present file, and create a new file at 7:02 AM to start
collecting statistics? Then I will DTS the file into a SQL
Server table.
My Performance Monitor job name is SAVE_STATISTICS; what
run command should I use to start my SAVE_STATISTICS at 7
AM and stop it daily at 7:02 AM?
This would be a great help in resolving this issue.
Thank You,
Dan
you could potentially use the native LOGMAN command from a remote XP or Windows
2003 computer to start and stop the logs at the specified time each day - i.e.
a daily scheduled task (Control Panel / Scheduled Tasks) to do it.
And then, as part of the definition of the Perfmon log schedule, add the
"Run this command" option to have a batch file that calls DTSRUN and executes your
DTS package to import the data.
a bit messy I admit
regards
Andy.
"Dan" <anonymous@.discussions.microsoft.com> wrote in message
news:26ad201c461ce$4a260760$a301280a@.phx.gbl...
>
> I have Windows 2000 Advanced Server with SQL Server 2000
> Enterprise with SP3A. I trying to set up a job to import
> Performance Monitors daily into SQL Server table. How
> would I set up Performance Monitor to stop at 7AM close
> the present file, create a new file at 7:02AM start
> collecting statistics. Then I will DTS file into a SQL
> Server table.
> My performance monitor job name is SAVE_STATISTICS what
> run command should I use to start my SAVE_STATISTICS at 7
> AM and stop it daily 7:02 AM?
> This would be a very help to resolve this issue.
> Thank You,
> Dan
>
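A rough sketch of that schedule, assuming the counter log is named SAVE_STATISTICS (from the thread) and a hypothetical DTS package named ImportPerfLog:
REM scheduled task at 7:00 AM daily - close the current log
logman stop SAVE_STATISTICS
REM scheduled task at 7:02 AM daily - start a fresh log
logman start SAVE_STATISTICS
REM then import the closed file
dtsrun /S MYSERVER /E /N ImportPerfLog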
|||Hi
I replied to your previous post on this subject! Perfmon has its own method
of scheduling, through the Scheduling tab on the Properties dialog. You can
tell Perfmon to stop the performance log and start a new one after a
given period. You can also specify that it runs a command when this happens.
This could be your DTS job. To change the name of the input file check out:
http://www.sqldts.com/default.aspx?200
John
"Dan" <anonymous@.discussions.microsoft.com> wrote in message
news:26ad201c461ce$4a260760$a301280a@.phx.gbl...
>
> I have Windows 2000 Advanced Server with SQL Server 2000
> Enterprise with SP3A. I trying to set up a job to import
> Performance Monitors daily into SQL Server table. How
> would I set up Performance Monitor to stop at 7AM close
> the present file, create a new file at 7:02AM start
> collecting statistics. Then I will DTS file into a SQL
> Server table.
> My performance monitor job name is SAVE_STATISTICS what
> run command should I use to start my SAVE_STATISTICS at 7
> AM and stop it daily 7:02 AM?
> This would be a very help to resolve this issue.
> Thank You,
> Dan
>
Import Performance file into SQL Server table
I have Windows 2000 Advanced Server with SQL Server 2000
Enterprise with SP3A. I'm trying to set up a job to import
Performance Monitor logs daily into a SQL Server table. How
would I set up Performance Monitor to stop at 7 AM, close
the present file, and create a new file at 7:01 AM to start
collecting statistics? Then I will DTS the file into a SQL
Server table.
This would be a great help in resolving this issue.
Thank You,
Dan
Hi
You can schedule the start/stop of performance monitor from within Perfmon
itself. This also allows you to run a script when you close it.
John
"Dan" <anonymous@.discussions.microsoft.com> wrote in message
news:265b501c46109$08226bd0$a301280a@.phx.gbl...
> I have Windows 2000 Advanced Server with SQL Server 2000
> Enterprise with SP3A. I trying to set up a job to import
> Performance Monitors daily into SQL Server table. How
> would I set up Performance Monitor to stop at 7AM close
> the present file, create a new file at 7:01AM start
> collecting statistics. Then I will DTS file into a SQL
> Server table.
> This would be a very help to resolve this issue.
> Thank You,
> Dan
Import Password Protected File Via DTS
I am trying to import a password-protected Excel file
using DTS. I am getting an error that it can't decrypt the file. Does
anyone know how I can pass the password to the file during the DTS
execution? Please help.
Thanks,
MichelleThe Excel connection object has a Password property - you can set it
using a dynamic properties task or an ActiveX task.
Simon