Friday, March 30, 2012
Import question with embedded tags
I'm having a heck of a time trying to make this work, and not being an
expert on XML I'm not even sure what to look for. Can someone point me
in the right direction to import an XML file that contains this:
--snip--
<client>
<Updated>7/1/2006</Updated>
<Business Number="11223">
<Name>Smith Glass</Name>
<Added>4/2/2005</Added>
<PrimaryContact>
<Name>John Smith</Name>
<Phone>301-222-4433</Phone>
</PrimaryContact>
<SecondaryContact>
<Name>Jack Jones</Name>
<Phone>301-222-4322</Phone>
</SecondaryContact>
</Business>
<Business Number="44332">
<Name>Anderson Drywall</Name>
<Added>5/1/2005</Added>
<PrimaryContact>
<Name>Mike Anderson</Name>
<Phone>301-223-6689</Phone>
</PrimaryContact>
<SecondaryContact>
<Name>Nancy Taylor</Name>
<Phone>301-542-6643</Phone>
</SecondaryContact>
</Business>
</client>
--snip--
Into a SQL table like this:
ID     Name              Added     Person1        Person2
11223  Smith Glass       4/2/2005  John Smith     Jack Jones
44332  Anderson Drywall  5/1/2005  Mike Anderson  Nancy Taylor
The spacing probably skewed a little, but hopefully it comes over okay.
Thanks :)
Alex
You can use the nodes() function with CROSS APPLY to shred your XML. Here
is an example cut down from your example:
-- Shred the input file into this table:
CREATE TABLE ClientExm
(
ID INT,
Name NVARCHAR(100),
Added DateTime,
Person1 NVARCHAR(100),
Person2 NVARCHAR(100)
)
INSERT INTO ClientExm
SELECT
X.C.value('@Number', 'INT'),
X.C.value('name[1]', 'NVARCHAR(100)'),
X.C.value('added[1]', 'DATETIME'),
X.C.value('primaryContact[1]/name[1]', 'NVARCHAR(100)'),
X.C.value('secondaryContact[1]/name[1]', 'NVARCHAR(100)')
FROM (SELECT CAST(BulkColumn AS XML) AS B FROM OPENROWSET(
BULK 'C:\Clientexm.xml', SINGLE_BLOB) AS X) AS S
CROSS APPLY
S.B.nodes('/client/business') AS X(C)
SELECT * from ClientExm
ID    Name              Added                    Person1        Person2
----- ----------------- -----------------------  -------------  ------------
11223 Smith Glass       2005-04-02 00:00:00.000  John Smith     Jack Jones
44332 Anderson Drywall  2005-05-01 00:00:00.000  Mike Anderson  Nancy Taylor
11223 Smith Glass       2005-04-02 00:00:00.000  John Smith     Jack Jones
44332 Anderson Drywall  2005-05-01 00:00:00.000  Mike Anderson  Nancy Taylor
The file C:\Clientexm.xml contains:
<client>
<updated>7/1/2006</updated>
<business Number="11223">
<name>Smith Glass</name>
<added>4/2/2005</added>
<primaryContact>
<name>John Smith</name>
<phone>301-222-4433</phone>
</primaryContact>
<secondaryContact>
<name>Jack Jones</name>
<phone>301-222-4322</phone>
</secondaryContact>
</business>
<business Number="44332">
<name>Anderson Drywall</name>
<added>5/1/2005</added>
<primaryContact>
<name>Mike Anderson</name>
<phone>301-223-6689</phone>
</primaryContact>
<secondaryContact>
<name>Nancy Taylor</name>
<phone>301-542-6643</phone>
</secondaryContact>
</business>
<business Number="11223">
<name>Smith Glass</name>
<added>4/2/2005</added>
<primaryContact>
<name>John Smith</name>
<phone>301-222-4433</phone>
</primaryContact>
<secondaryContact>
<name>Jack Jones</name>
<phone>301-222-4322</phone>
</secondaryContact>
</business>
<business Number="44332">
<name>Anderson Drywall</name>
<added>5/1/2005</added>
<primaryContact>
<name>Mike Anderson</name>
<phone>301-223-6689</phone>
</primaryContact>
<secondaryContact>
<name>Nancy Taylor</name>
<phone>301-542-6643</phone>
</secondaryContact>
</business>
</client>
Dan
> Hello,
> --snip: original question quoted in full--
> Thanks :)
> Alex
>|||Dan wrote:
> --snip: Dan's answer quoted in full--
Hi Dan,
Too awesome! Thanks for putting this together for me, but I guess I
should've told you I'm using MS SQL 2000... when I try to run this in
Query Analyzer it gives a syntax error with the BULK command. Hmmm...
if this only works on SQL 2005 then I'll hold onto the snippet for when
we upgrade, and there's no telling when that'll be.
Any suggestions for running this on SQL 2000? If not, I'm back to my
digging :) Thanks,
Alex|||You may be able to do what you need with OPENXML. Below is an example, but
look up OPENXML and sp_xml_preparedocument in the BOL for more details.
Also look up SQLXML and annotated schemas for another way to accomplish this.
DECLARE @hdoc INT
EXEC sp_xml_preparedocument @hdoc OUTPUT,
'<client>
<updated>7/1/2006</updated>
<business Number="11223">
<name>Smith Glass</name>
<added>4/2/2005</added>
<primaryContact>
<name>John Smith</name>
<phone>301-222-4433</phone>
</primaryContact>
<secondaryContact>
<name>Jack Jones</name>
<phone>301-222-4322</phone>
</secondaryContact>
</business>
<business Number="44332">
<name>Anderson Drywall</name>
<added>5/1/2005</added>
<primaryContact>
<name>Mike Anderson</name>
<phone>301-223-6689</phone>
</primaryContact>
<secondaryContact>
<name>Nancy Taylor</name>
<phone>301-542-6643</phone>
</secondaryContact>
</business>
<business Number="11223">
<name>Smith Glass</name>
<added>4/2/2005</added>
<primaryContact>
<name>John Smith</name>
<phone>301-222-4433</phone>
</primaryContact>
<secondaryContact>
<name>Jack Jones</name>
<phone>301-222-4322</phone>
</secondaryContact>
</business>
<business Number="44332">
<name>Anderson Drywall</name>
<added>5/1/2005</added>
<primaryContact>
<name>Mike Anderson</name>
<phone>301-223-6689</phone>
</primaryContact>
<secondaryContact>
<name>Nancy Taylor</name>
<phone>301-542-6643</phone>
</secondaryContact>
</business>
</client>'
INSERT INTO ClientExm
SELECT *
FROM OPENXML(@hdoc, '/client/business', 2)
WITH
([Number] INT '@Number',
[name] NVARCHAR(100) 'name',
[added] DATETIME 'added',
[primaryContact] NVARCHAR(100) 'primaryContact/name',
[secondaryContact] NVARCHAR(100) 'secondaryContact/name'
)
EXEC sp_xml_removedocument @hdoc
Dan
> Dan wrote:
> --snip: previous messages quoted in full--
Import Puzzle
I have three feeds from sources around the world, each coming in at a
separate time.
These feeds move into a large table (3GB) that is queried by managers.
The feeds are loaded sequentially, and then the previous day's feed
rows are deleted from the table (this is done so that the user's
application is never without data).
The issue is that the import takes a lot of time, as do the deletes.
This is hurting performance significantly. I attempted to fix the
problem by creating separate tables for each feed. I then created a
view with the original table's name and used UNION ALL's. My intention
was that as each feed came in, I'd alter the view with the new table's
name, and then truncate the older table. This met both goals of
concurrency and import/delete speed.
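For readers following along, the view swap described above might look roughly like this (the view and table names here are hypothetical):

```sql
-- Hypothetical sketch of the view-swap idea; managers query FeedView,
-- and each underlying table holds one feed's rows.
ALTER VIEW FeedView
AS
SELECT * FROM Feed_Load_A   -- freshly loaded feed
UNION ALL
SELECT * FROM Feed_Load_B   -- feeds still current
UNION ALL
SELECT * FROM Feed_Load_C
GO

-- The table that dropped out of the view can now be emptied cheaply:
TRUNCATE TABLE Feed_Load_Old
```

TRUNCATE is minimally logged, which is what makes this cheaper than the row-by-row DELETE.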
Unfortunately, this view seems to ignore the indexes on the underlying
tables, which devastates performance. I can't index the view, since
altering it makes the index less useful.
I'm looking for a different strategy for loading and deleting the
data, all without disruption to the applications. I'd appreciate any
suggestions...
woodyb@hotmail.com (Buck Woody) wrote in message news:<d4e6e94d.0311250951.76030a77@posting.google.com>...
> Hello -
> I have three feeds from sources around the world, each coming in at a
> separate time.
> These feeds move into a large table (3GB) that is queried by managers.
> The feeds are loaded sequentially, and then the previous day's feed
> rows are deleted from the table (this is done so that the user's
> application is never without data).
? delete so you never be w/o data ?
> The issue is that the import takes a lot of time, as do the deletes.
> These is hurting performance significantly. I attempted to fix the
This is probably due to your managers querying the data and applying
shared read locks -- while you're trying to insert/delete.
> problem by creating separate tables for each feed. I then created a
> view with the original table's name and used UNION ALL's. My intention
> was that as each feed came in, I'd alter the view with the new table's
> name, and then truncate the older table. This met both goals of
> concurrency and import/delete speed.
Did you put NOLOCK on the view?
> Unfortunately, this view seems to ignore the indexes on the underlying
> tables, which devastates performance. I can't index the view, since
> altering it makes the index less useful.
Which version of MSSQLSERVER are you running? Only 2000 has indexed
views.
> I'm looking for a different strategy for loading and deleting the
> data, all without disruption to the applications. I'd appreciate any
> suggestions...
Ideally you would have 2 separate tables: "real data" and a nice
pretty one for your managers. But if you don't have space etc...,
make sure you put NOLOCKS or "set transaction isolation level read
uncommitted" on every stored procedure and view your managers use.
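As a rough sketch of that suggestion (the procedure, table, and column names are made up):

```sql
-- Hypothetical reporting procedure that reads without taking shared locks.
CREATE PROCEDURE GetManagerReport
AS
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT FeedDate, Region, Amount
FROM FeedData WITH (NOLOCK)  -- table-hint form; redundant once the SET is in place
GO
```

The trade-off is that dirty reads become possible, which is usually acceptable for management reporting against data that is reloaded daily anyway.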
Hey don't sweat the small stuff. Everything is small stuff. Good
luck.
-- Louis|||"Buck Woody" <woodyb@hotmail.com> wrote in message
news:d4e6e94d.0311250951.76030a77@posting.google.com...
> Hello -
> I have three feeds from sources around the world, each coming in at a
> separate time.
> These feeds move into a large table (3GB) that is queried by managers.
> The feeds are loaded sequentially, and then the previous day's feed
> rows are deleted from the table (this is done so that the user's
> application is never without data).
> The issue is that the import takes a lot of time, as do the deletes.
> These is hurting performance significantly. I attempted to fix the
> problem by creating separate tables for each feed. I then created a
> view with the original table's name and used UNION ALL's. My intention
> was that as each feed came in, I'd alter the view with the new table's
> name, and then truncate the older table. This met both goals of
> concurrency and import/delete speed.
> Unfortunately, this view seems to ignore the indexes on the underlying
> tables, which devastates performance. I can't index the view, since
> altering it makes the index less useful.
> I'm looking for a different strategy for loading and deleting the
> data, all without disruption to the applications. I'd appreciate any
> suggestions...
The two parts that most often slow down inserts, updates and deletes are the
indexes and the transaction log.
For transactions: if data protection is not required (since you can just
import the feeds again and they are useless after one day), then maybe you
could use a # temp table. tempdb is more RAM-based, so it works much
faster and generates less transaction logging.
For indexes, why not drop the indexes before you do the import and delete,
and put them back later? This is much faster than having the (b-tree)
indexes do their balancing tricks after each row is added.
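A sketch of that drop-and-recreate approach (the index and table names are hypothetical; DROP INDEX uses the SQL 2000-era table.index form):

```sql
-- Drop the nonclustered index, run the heavy load, then rebuild it.
DROP INDEX FeedData.IX_FeedData_FeedDate

-- ... bulk import and deletes go here ...

CREATE INDEX IX_FeedData_FeedDate ON FeedData (FeedDate)
```

Rebuilding once at the end lets the index be built in a single pass instead of being rebalanced on every row.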
If you can't drop the indexes, then consider a low fill factor on the indexes
so they have plenty of space to grow without rebalancing.|||Quick thought off the top of my head.
Generate each table (x3). Create a view of these and generate a new
table from this to combine the data. Then create a final table for
your guys to work with. You can fully control the indexes that is on
the final table. You can mess about with the data in each of the three
sets. You will see the performance issues when creating the combined
table, but the users won't. The move from that into the one to be
reported on should be reasonably OK in comparison as you are simply
moving bulk data, not combining three sets and moving them at the same
time.
HTH
louisducnguyen@hotmail.com (louis nguyen) wrote in message news:<b0e9d53.0311251501.3e74e293@posting.google.com>...
> --snip: earlier messages quoted in full--
Import only the changed data
THX
Inon
It would be a bit difficult to import without a time field, which is key to identifying the changed data.
If your task is about changed data, then why not consider replication?|||Sure, but you need a staging environment...
1. bcp (I can't abide DTS unless the data is in Excel or Access, even then...) into a stage table
2. Write 3 SQL statements to compare new data with old... determine, based on keys, which data was added, which data was deleted, and which data was updated...
3. Then INSERT, DELETE and UPDATE those sets of data...
Got a link somewhere...
Hold on...|||Here's the code:
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=28305|||You can use the BINARY_CHECKSUM function to determine whether data was changed.
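A sketch of that BINARY_CHECKSUM comparison (the table, key, and column names are made up); note that checksums can collide, so treat a match as "probably unchanged" rather than a guarantee:

```sql
-- Staged rows whose column values differ from the live row with the same key.
SELECT s.KeyID
FROM StageTable AS s
JOIN LiveTable AS l ON l.KeyID = s.KeyID
WHERE BINARY_CHECKSUM(s.Col1, s.Col2, s.Col3)
   <> BINARY_CHECKSUM(l.Col1, l.Col2, l.Col3)
```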
Monday, March 26, 2012
Import from MSSQL2000 (8) to 7?
I developed an intranet site using MSSQL7/win2000 some time ago.
The target environment used MSSQL2000/8.
We were (almost painlessly) able to import the db schema and data from 7
to 8. (Bravo MSSQL)
Now I need to do some upgrading on the application and I would like to
have a copy of the database from MSSQL2000/8 to MSSQL7.
Is that also possible?
Or should I download Microsoft SQL Server Express and use that instead
of my MSSQL7? Is it better?
I hope I can get the relationship diagram too (the one with Foreign Keys
mapped in a nice graphical way).
Any advise highly appreciated.
I am good with Postgresql, but my MSSQL skills leave a lot to be
desired. :-/
For an outsider like me the many versions and OS's are quite confusing.
Do I need special commands on MSSQL2000/8 to create a MSSQL7 compatible
export?
Thanks in advance!
Regards,
Erwin Moller
Erwin Moller (since_humans_read_this_I_am_spammed_too_much@spamyourself.com)
writes:
Quote:
Originally Posted by
Now I need to do some upgrading on the application and I would like to
have a copy of the database from MSSQL2000/8 to MSSQL7.
Is that also possible?
You would have to build the database on SQL 7 from scripts, and move the
data with bulk load. I don't know how you did the import in the other direction,
but going to a higher version it's possible by backup/restore or simply
attaching the database to the higher version. Going to a lower version
this is not possible.
Quote:
Originally Posted by
Or should I download Microsoft SQL Server Express and use that instead
of my MSSQL7? Is it better?
Well, SQL 7 came out in 1998, SQL Express in 2005, so there are some
more features. But there are also some things that don't really work
as in SQL 7. For instance the old-style join *= is available only
in compatibility level 80 and lower.
Nevertheless, I would recommend going to SQL 2005 if possible.
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||Erland Sommarskog wrote:
Quote:
Originally Posted by
Erwin Moller (since_humans_read_this_I_am_spammed_too_much@.spam yourself.com)
writes:
Quote:
Originally Posted by
>Now I need to do some upgrading on the application and I would like to
>have a copy of the database from MSSQL2000/8 to MSSQL7.
>Is that also possible?
>
You would have to build the database on SQL 7 from scripts, and move the
data with bulk load. I don't know you did the import in the other direction,
but going to a higher version it's possible by backup/restore or simply
attaching the database to the higher version. Going to a lower version
this is not possibe.
>
Quote:
Originally Posted by
>Or should I download Microsoft SQL Server Express and use that instead
>of my MSSQL7? Is it better?
>
Well, SQL 7 came out in 1998, SQL Express in 2005, so there are some
more features. But there are also some things that don't really work
as in SQL 7. For instance the old-style join *= is available only
in compatibility level 80 and lower.
>
Nevertheless, I would recommend going to SQL 2005 if possible.
Hi Erland,
Thanks for your answers.
I think I'll switch to SQL Express 2005.
One last question: If I switch to SQL Express 2005, do you expect that I
can copy the database from MSSQL2000 to SQL Express 2005 without scripts
and bulk data load?
Regards,
Erwin Moller|||Erwin Moller (since_humans_read_this_I_am_spammed_too_much@spamyourself.com)
writes:
Quote:
Originally Posted by
I think I'll switch to SQL Express 2005.
One last question: If I switch to SQL Express 2005, do you expect that I
can copy the database from MSSQL2000 to SQL Express 2005 without scripts
and bulk data load?
Yes. You can copy with attach/detach or BACKUP/RESTORE. Note that you
cannot copy back this way. This must be done with scripts and bulk load.
Also, be sure to run sp_updatestats when you have copied the database
to SQL 2005, as the migration invalidates all statistics.
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx
Wednesday, March 21, 2012
Import Errors from MS Access
I encountered errors while importing tables from an Access database. The only tables
that errored had date/time fields and it seems SQL didn't like them. The tables
work just fine in Access so I'm surprised SQL didn't like them. Anyone see this before?
Alan,
Most likely cause: MS Access supports dates as early as Jan 1, 100, while
SQL Server only supports dates back to Jan 1, 1753. You probably have a date
prior to Jan 1, 1753. Before you dismiss this idea, keep in mind that a
simple data entry error can morph Jul 15, 1999 to Jul 15, 199 :-)
Use an Access query to locate dates earlier than Jan 1, 1753.
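That check could look something like this in Access SQL (the table and field names are hypothetical):

```sql
-- Run in the Access query window; finds dates SQL Server's datetime cannot hold.
SELECT * FROM MyTable WHERE MyDateField < #1/1/1753#;
```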
Chief Tenaya
"Alan Fisher" <anonymous@.discussions.microsoft.com> wrote in message
news:838D8D52-BB99-416A-A4D2-4781DCE7040F@.microsoft.com...
> I encountered errors while importing tables from an Access. The only
tables that errored had date/time fields and it seems SQL didn't like them.
The tables work just fine in Access so I'm surprised SQL didn't like them.
Anyone see this before?|||On Tue, 23 Mar 2004 16:01:06 -0800, "Alan Fisher"
<anonymous@.discussions.microsoft.com> wrote:
Different versions of Access have different levels of support for
upsizing. Try with Access XP or 2003.
When push comes to shove, it's not so hard to build those few tables
yourself, and write append queries to copy over the data.
-Tom.
>I encountered errors while importing tables from an Access. The only tables that errored had date/time fields and it seems SQL didn't like them. The tables work just fine in Access so I'm surprised SQL didn't like them. Anyone see this before?
Monday, March 19, 2012
Import Database
I'm doing some online projects...
For that, sometimes I want to import a database from an online database server (located in New York), and sometimes I want to transfer a database from one server to another server.
So far I have used the DTS Import/Export wizard, but I am not able to copy the relationships of the tables (primary keys, foreign keys, defaults, views, stored procedures, triggers).
My question is: how do I download a database with the exact structure and all data as it is on the remote server?
Thanks in advance
Can't you use Backup & Restore?
:)|||Thanks for your reply,
then how can I import the online database to local, with database structure and data?|||You copy the .bak file from the remote server to the local server and then do a restore.
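A minimal sketch of that backup-and-restore round trip (the database name and file paths are made up):

```sql
-- On the remote server:
BACKUP DATABASE MyOnlineDb TO DISK = 'C:\Backups\MyOnlineDb.bak'

-- Copy the .bak file to the local server, then:
RESTORE DATABASE MyOnlineDb FROM DISK = 'C:\Backups\MyOnlineDb.bak'
```

Unlike DTS, a restore brings back the whole database: keys, defaults, views, stored procedures and triggers included.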
Friday, March 9, 2012
Import data error while delete rows in the destination table
Both are SQL Server 2005, on different servers; we import tableA from serverA to serverB.
When we import tableA for the first time, serverB will get the whole
structure and data of tableA. So far so good. By the way, we need to change
the type of a column from varchar to nvarchar.
The second time, we want to keep the original structure and delete the existing data.
So we chose the "delete rows in the destination table" option. But when
we execute this, we get the error below during validation.
/******************
Error: 0xC02020F4 at Data Flow Task: The column "dbname" cannot be processed
because more than one code page (936 and 1252) are specified for it.
******************/
All the columns in this table have the same errors.
I don't know why. Thank you.
If you want this operation to work in an SSIS package, you have to change the AlwaysUseDefaultCodepage property of the OLE DB Source in the Data Flow Task. The value has to be "false".
import data
I think you want to create a trigger (or maybe a couple of triggers). I think you would wind up with something like:
CREATE Table DDL:
CREATE TABLE LateFee (
LateFeeID Int Identity (1,1) NOT NULL,
LateFee Decimal(10,2) NULL,
LateFeeReceived datetime NULL
)
CREATE TABLE LateFeeLog (
LateFeeLogID Int Identity (1,1) NOT NULL,
LateFeeID Int NOT NULL,
LateFee Decimal(10,2) NULL,
LateFeeReceived datetime NULL
)
Then you would need to create a couple of triggers:
-- First trigger for inserts
CREATE TRIGGER trgLateFeeInsert
ON LateFee
FOR INSERT
AS
INSERT INTO LateFeeLog (LateFeeID, LateFee)
SELECT LateFeeID, LateFee FROM inserted
-- second trigger for updates
CREATE TRIGGER trgLateFeeUpdate
ON LateFee
FOR UPDATE
AS
UPDATE dbo.LateFeeLog
SET LateFeeReceived = inserted.LateFeeReceived
FROM dbo.LateFeeLog INNER JOIN inserted ON
dbo.LateFeeLog.LateFeeID = inserted.LateFeeID|||Thanks. This is a big help.
Friday, February 24, 2012
Import Access table to SQL Server
I have an Access 2002 database that I am importing into SQL Server. I have
trouble with one field type. It's a short time in Access, defined as date/time and
displayed as hh.mm AM. I can't import this field (I get an overflow message)
. Any ideas.
thans,
StanYou can't directly import it into a smalldatetime or
datetime field as SQL Server doesn't support just a time
field. I can't remember all the details but if you try to
import the column, it uses a date that is out of the range
for SQL Server's destination column. I can't remember the
date Access tries to use but you could try changing the
destination to datetime if it's set to smalldatetime. Or you
may need to use a varchar or some logic for a date
calculation using something like DTS.
-Sue
On Mon, 19 Jul 2004 13:22:02 -0700, "Stanley"
<Stanley@.discussions.microsoft.com> wrote:
>I have an Access 2002 database that I am importing into SQL Server. I have
trouble with one field type. It's a short time in Access, defined as date/time and
displayed as hh.mm AM. I can't import this field (I get an overflow message
). Any ideas.
>thanks,
>Stan
Sunday, February 19, 2012
import .sql script into sql server
Maybe this is an easy task, but I'm having a really hard time figuring
out how to do this. I'm a complete newbie to SQL Server.
I have a database dump file from MySQL that's in .sql format. I'm
trying to figure out how to import that into SQL Server 2000 so that
I'll be able to manipulate it in a gui format, rather than command
line. I can't find any import that takes a .sql file. I've been
trying to load it into the query analyzer and am also having problems
with that. Initially I had problems because there were question marks
within some of my data. I've removed those but when I run the query I
still get a bunch of syntax errors. Looking at the errors, the syntax
seems correct.
Does anyone know of a way to import a .sql script without having to
import it into the query analyzer? It seems like it should be a
no-brainer.... but perhaps my brain is lacking... I don't know!
Thanks for the help!
--jet
the following are the errors that I get and the associated code:
------------------------
Server: Msg 170, Level 15, State 1, Line 17
Line 17: Incorrect syntax near ','.
CODE: INSERT INTO answers VALUES
('travelMethodRadio','jackwichita@montana.com','telemark'),
The second line is line 17, and it is followed by more insert values.
------------------------
Server: Msg 170, Level 15, State 1, Line 7093
Line 7093: Incorrect syntax near 'a'.
CODE: ('groupSlopeTravelText','kathryn232@jhmg.com','This is
completely dependent on conditions and if it is a heavily used area or
a fairly pristine area'),
This is just one in a long list of insert values... no idea why there
is a problem with this particular one.
------------------------
"jet" <jessey_tase@yahoo.com> wrote:
> --snip: original question quoted in full--
jet,
Before answering your question... if you use the ODBC driver for MySQL, you
can use SQL Server DTS to get table structure and data to/from MySQL. IMHO,
it's much easier that way. However, to answer your question...
The problem you're having would be the same going from SQL Server back to
MySQL: they both use extensions to SQL. Query Analyzer is definitely the
tool to use if you want to run a MySQL .sql file vs your SQL Server.
However, some things to keep in mind..
- Neither SQL Server nor Query Analyzer recognize "#" as a comment
character: use "--" instead.
- MySQL escapes single quotes in a string literal with a backslash (\'): SQL
Server escapes them with an extra single quote ('').
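For example, the same string literal in each dialect (the table name is made up):

```sql
-- MySQL accepts:   INSERT INTO SomeTable VALUES ('it\'s here');
-- SQL Server expects the quote doubled instead:
INSERT INTO SomeTable VALUES ('it''s here')
```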
- MySQL allows you to "chain" records for an insert (this is the error
you're getting above). When I generate a SQL file from MySQL I use
phpMyAdmin and if I de-select the option "Extended Inserts" it generates an
insert per record instead of a single insert (what you want for SQL Server).
Basically, you're getting the error because you're trying
INSERT SomeTable VALUES ('a1', 'a2', 'a3'), ('b1', 'b2', 'b3')
when you want
INSERT SomeTable VALUES ('b1', 'b2', 'b3')
INSERT SomeTable VALUES ('a1', 'a2', 'a3')
Hope this helps!
Craig|||Jet,
If you are a newbie to MS SQL Server 2000 and want to get up to speed
really quickly, our videos give you expert instruction on what you really
need to know about SQL Server. It's like reading a 1200 page book in a few
hours.
To answer your question below, a great way to do this is by using a DTS
package. It's easy to write, makes importing as simple as Excel, and can be
re-used.
For full info and examples of using a DTS package, download our videos at
www.technicalVideos.net
Best regards,
Chuck Conover
www.TechnicalVideos.net
"jet" <jessey_tase@yahoo.com> wrote in message
news:c3fc98c2.0401251736.2fb4db30@posting.google.com...
> --snip: original question quoted in full--