Showing posts with label records. Show all posts

Wednesday, March 21, 2012

Maybe the Records Are There After All

Your explanation makes SENSE and yes, it appears that I was checking
immediately after the change. (Bear in mind that I am still feeling my way
along.)
The indication that the records were DELETED is that Server Enterprise
Manager reports that the table has ZERO Rows. I had made the change in Data
Type in Server Enterprise Manager and then was making notes and documenting
what I had done. I went back to note the record count (which I had
previously found was conveniently reported in the Table Properties in Server
Enterprise Manager), and it gave me the sensation that ALL of the data had
been deleted.
Now, I am thinking that it is doing precisely as you describe. I was able
to use the Import and Export Data tool to COPY all of the data from that
table to another table in a different database, which reports that it has
387,825 records (CORRECT). The original table STILL says in the Properties
that it has ZERO records, but that table has several Indices that are
probably being rebuilt.
But, the larger database from last night STILL SAYS that it has ZERO records
nine hours after the records appeared to all be deleted. But it has nine
million records and more than a few indices. And I am running this SQL
evaluation copy on an older 866 MHz single-processor system with only 512 MB of
memory.
Is there some way to monitor the things that SQL is doing in the BACKGROUND
to these tables?
"Jeff Dillon" <jeff@.removeemergencyreporting.com> wrote in message
news:ORaVrEcvEHA.2584@.TK2MSFTNGP10.phx.gbl...
> Of course you backed up your database, prior to making major changes to
> tables, correct?
> Whew..thank goodness.
> And no, generally records are not deleted. However, behind the scenes, it's
> probably renaming the table, creating a new structure, then inserting the
> records, then dropping and renaming the temp table.
> Did you check if the data was there immediately after changing the column
> datatype?
> Jeff
> "John Bishop" <ugradfrnd@.aol.com> wrote in message
> news:eDbrR9bvEHA.3896@.TK2MSFTNGP09.phx.gbl...
Run DBCC UPDATEUSAGE to correct the row count display in Enterprise Mangler.
Geoff N. Hiten
Microsoft SQL Server MVP
Senior Database Administrator
Careerbuilder.com
I support the Professional Association for SQL Server
www.sqlpass.org
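For reference, a minimal sketch of that command (the database and table names below are placeholders, not from the thread):

-- Recount the rows and pages stored in sysindexes so the Table Properties
-- dialog reports the true row count
DBCC UPDATEUSAGE ('MyDatabase', 'dbo.MyLargeTable') WITH COUNT_ROWS
GO
-- Or recount every table in the current database
DBCC UPDATEUSAGE (0) WITH COUNT_ROWS
GO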
"John Bishop" <ugradfrnd@.aol.com> wrote in message
news:%231Z4HYcvEHA.2016@.TK2MSFTNGP15.phx.gbl...
> Your explanation makes SENSE and yes, it appears that I was checking
> immediately after the change. (Bear in mind that I am still feeling my
way
> along.)
> The indication that the records were DELETED is that Server Enterprise
> Manager reports that the table has ZERO Rows. I had made the change in
Data
> Type in Server Enterprise Manager and then was making notes and
documenting
> what I had done. I went back to note the record count (which I had
> previously found was conveniently reported in the Table Properties in
Server
> Enterprise Manager and it gave me the sensation that ALL of the data had
> been deleted.
> Now, I am thinking that it is doing precisely as you describe. I was able
> to use the Import and Export Data tool to COPY all of the data from that
> table to another table in a different database, which reports that it has
> 387,825 records (CORRECT). The original table STILL says in the
Properties
> that it has ZERO records, but that table has several Indices that are
> probably being rebuilt.
> But, the larger database from last night STILL SAYS that it has ZERO
records
> nine hours after the records appeared to all be deleted. But it has nine
> million records and more than a few indices. And I am running this SQL
> Evaulation on an older 866 MHz single processor system with only 512 Mbs
of
> memory.
> Is there some way to monitor the things that SQL is doing in the
BACKGROUND[vbcol=seagreen]
> to these tables?
> "Jeff Dillon" <jeff@.removeemergencyreporting.com> wrote in message
> news:ORaVrEcvEHA.2584@.TK2MSFTNGP10.phx.gbl...
> it's
the[vbcol=seagreen]
column[vbcol=seagreen]
(Friday,[vbcol=seagreen]
> work
of[vbcol=seagreen]
> extract.
of
> IS
> within
> data
>
sql


Maxrows in Drillthrough

Even when I set the maxrows in my drillthrough action to 2500, the data sample viewer still states that 1000 records were sampled.

Is this 1) a limitation of the viewer (ie - I would see all of my rows if I used a different application to view the data)

or 2) A limitation of SSAS - no more than 1000 rows will ever be returned

or 3) Something that the interface has hard coded into the screen that does not reflect what is actually happening.

Thanks for the help.

Ann

It definitely isn't: "(2) A limitation of SSAS - no more than 1000 rows will ever be returned". If you execute this drillthrough query on Adventure Works in Management Studio, 1899 rows are returned - adding "maxrows 1500" limits output to 1500 rows:

DRILLTHROUGH
select {[Measures].[Reseller Order Quantity]} on 0,
[Product].[Product Categories].[Subcategory].&[17] on 1
from [Adventure Works]
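As noted above, the same statement with a row cap would look like this (a sketch built from the 1500-row example in the reply):

DRILLTHROUGH MAXROWS 1500
select {[Measures].[Reseller Order Quantity]} on 0,
[Product].[Product Categories].[Subcategory].&[17] on 1
from [Adventure Works]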

MaximumErrorCount issue

Hi,

I need to launch an SSIS process that has several errors (the most common is inserting records with a PK that already exists in the destination table). The process should continue to the end despite these errors, so that I can see them in the log when it has finished.

I have set "MaximunErrorCount" to 999999 , but the process stop the first time i attempt to write a duplaicate PK.

I have also set the MaximumErrorCount of every Data Flow task to this value...

What's going wrong?

Thanks!

Hi Albertoim,

I am not sure of this.

But there is a better solution if you want to end up with unique rows in your destination table:

- First output to a staging table whose structure is the same as your destination table, then dedupe it before loading the destination, as sketched below.

Thanks

Subhash Subramanyam
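A rough T-SQL sketch of that staging-table approach (all table and column names here are hypothetical, not from the thread):

-- Duplicate keys within the staging data can be reported before the load
SELECT s.PKCol, COUNT(*) AS copies
FROM dbo.StagingTable AS s
GROUP BY s.PKCol
HAVING COUNT(*) > 1;

-- Then move across one row per key, skipping keys already in the destination
INSERT INTO dbo.DestinationTable (PKCol, Col1, Col2)
SELECT s.PKCol, MIN(s.Col1), MIN(s.Col2)
FROM dbo.StagingTable AS s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.DestinationTable AS d
                  WHERE d.PKCol = s.PKCol)
GROUP BY s.PKCol;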

|||

Hi Subhash,

The thing is, the source table is huge and doing this is very expensive... in addition, I want to know how to manage errors, since this is not the only error I may find in the future.

I have tried to ignore the error, but in that case it is not recorded in the log.

In theory, if I set MaximumErrorCount to a high value the process should continue executing, shouldn't it? What is going wrong?

|||Albertoim,
Have you read through this thread? http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1211340&SiteID=1|||

Yes, I have read it. I do a lookup before inserting the record; the problem is that I am not using a For Each loop, I am processing a batch of rows... that's why the lookup transformation doesn't find any record with this PK: it doesn't physically exist in the database yet.

Anyway, I need to know how to manage errors and how to make the process continue executing in spite of them.

|||You can redirect errors out of the OLE DB Destination component. That will allow you to redirect them (or even ignore them if you want) without raising a package error.

|||And a lookup would work if your destination is failing because of primary key violations. If you are expecting the lookup to refresh with each inserted record from the data flow, you can set up the lookup to not cache its lookup results, which means that every time a new record gets inserted, the next record through the lookup could find it.|||

Thanks Phil,

I have been doing this to prove that the whole package works OK, but now I am trying to use the other solution, because the flow is quite big and I have to redirect errors in so many places... the package becomes too confusing.

The question is, can't I allow the package to continue executing when this error happens and record it in the log?

If I use an event handler, can I send it any info (the columns of the duplicate record, for example)... and when the event is triggered, does the task stop executing?

Thanks

|||

Phil, regarding your second solution...

That's the situation: the source table has info that already exists in the destination... I have to update or add records in the destination depending on whether the record already exists or not.

If I disable the cache and two records in the source table have the same PK, then in the destination the record will be inserted in the first round and updated in the second... no error, but I can't detect what is happening. I need to detect these errors to advise the user to correct them; I can't touch the source table...

In addition, I think disabling the cache on lookup transformations that handle a great amount of data will hurt performance... am I wrong?

I simply need to allow the process to continue executing in spite of errors and record them in a log...

|||

Albertoim wrote:

Phil, regarding your second solution...

That's the situation: the source table has info that already exists in the destination... I have to update or add records in the destination depending on whether the record already exists or not.

If I disable the cache and two records in the source table have the same PK, then in the destination the record will be inserted in the first round and updated in the second... no error, but I can't detect what is happening. I need to detect these errors to advise the user to correct them; I can't touch the source table...

In addition, I think disabling the cache on lookup transformations that handle a great amount of data will hurt performance... am I wrong?

I simply need to allow the process to continue executing in spite of errors and record them in a log...

Why can't you redirect the errors, as that is the appropriate solution?

|||I can do that, but I am trying to find another way, because introducing a lot of redirections makes the flow very confusing (I have to introduce them in many places)... the question is whether I can ignore errors and continue executing but record them in the log (I supposed the MaximumErrorCount property is for that purpose).|||

Albertoim wrote:

I can do that, but I am trying to find another way, because introducing a lot of redirections makes the flow very confusing (I have to introduce them in many places)... the question is whether I can ignore errors and continue executing but record them in the log (I supposed the MaximumErrorCount property is for that purpose).

It's the "record them on the log" that's causing you problems. You want to continue processing, yet capture the error rows, but do not want to redirect them. I fail to see how you can do this without redirecting the errors.

Do you want to know which rows were in error, or just the error message?|||

Phil,

The main problem I have is that I can't let the process continue when the error happens... even though I set the MaximumErrorCount value to 10000, the process stops.

Once I solve that, I will see how to get the row that produces the error: maybe with a custom log, creating an event handler... whatever... but first I would like to solve the other issue, regardless of knowing which row is duplicated... just seeing the error message in the log.

Thank youuu

|||Make sure that FailPackageOnFailure and FailParentOnFailure aren't set to TRUE for the data flow.|||

Both properties are set to False... any other hint?

Thank you very much for your help, Phil...

Edit: I've been doing tests in a throwaway package and I get the same problem... execution stops when this specific error happens. MaximumErrorCount is set to 100, FailPackageOnFailure = False and FailParentOnFailure = False. I have seen that the problem is not that the whole package stops; it's the Data Flow that contains the error that is stopping, only that...

|||You should set the MaximumErrorCount of the parent task to zero; that is better than setting a high value. I hope it helps.

Monday, March 19, 2012

Maximum value of multiple columns

Hi all

Using SQL Server 2005, I'm trying, in a query, to get the maximum value of multiple columns of a table for each of its records.

What I'm trying to get is the last date an index was used, from the view sys.dm_db_index_usage_stats, using its date fields (last_user_seek, last_user_update...).

I looked around the forum for a solution, but those I found don't seem to apply easily to a query.

Anyone got a suggestion?

Dale:

There have been a number of discussions about a similar issue; give a look to this post:

http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=733186&SiteID=1

|||

Dale L.,

Try using the new UNPIVOT and CROSS APPLY operators.

Code Snippet

create table dbo.t1 (
    pk int not null identity unique,
    c1 int,
    c2 int,
    c3 int
)
go

insert into dbo.t1(c1, c2, c3) values(1, 2, 3)
insert into dbo.t1(c1, c2, c3) values(4, 6, 5)
insert into dbo.t1(c1, c2, c3) values(9, 7, 8)
go

select
    a.pk,
    b.max_value
from
    dbo.t1 as a
    cross apply
    (
    select
        max(unpvt.[value]) as max_value
    from
        (
        select
            t.pk,
            t.c1 as [1],
            t.c2 as [2],
            t.c3 as [3]
        from
            dbo.t1 as t
        where
            t.pk = a.pk
        ) as pvt
        unpivot
        ([value] for [col] in ([1], [2], [3])) as unpvt
    group by
        unpvt.pk
    ) as b
go

drop table dbo.t1
go

AMB
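Applying the same idea to the original question, a rough sketch (assuming SQL Server 2005's sys.dm_db_index_usage_stats view and its last_user_seek / last_user_scan / last_user_lookup / last_user_update columns; adjust the column list to taste):

select
    object_name(u.[object_id]) as table_name,
    u.index_id,
    x.last_used
from
    sys.dm_db_index_usage_stats as u
    cross apply
    (
    -- take the greatest of the usage dates for this row
    select max(d.dt) as last_used
    from (
        select u.last_user_seek as dt
        union all select u.last_user_scan
        union all select u.last_user_lookup
        union all select u.last_user_update
    ) as d
    ) as x
where
    u.database_id = db_id()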

Monday, March 12, 2012

Maximum number of records SQL 2000

Hi,
What is the maximum number of records I can have in a SQL 2000 table? or Is
there a max.
Thanks
Mike
I don't know the exact answer, but I am positive that
that limit is so high that you can say there is no limit,
because before you hit that limit you will hit the limit
on your filesystems.
>--Original Message--
>Hi,
>What is the maximum number of records I can have in a SQL
2000 table? or Is
>there a max.
>Thanks
>Mike
>
>.
>
|||How many records can be in a table depends on
how much info is in each record. Theoretically SQL Server
can handle up to a Terabyte of data (1,000 Gigabytes).
Hope this helps.
Chad
>--Original Message--
>Hi,
>What is the maximum number of records I can have in a SQL
2000 table? or Is
>there a max.
>Thanks
>Mike
>
>.
>
|||The answer, like any performance question, is 'it depends'.
I've worked on big ES7000 boxes with 300 million row tables. It's all a
function of the hardware and what you're trying to do with the data.
--
Brian Moran
Principal Mentor
Solid Quality Learning
SQL Server MVP
http://www.solidqualitylearning.com
"Chad" <ChadLucy@.msn.com> wrote in message
news:045301c38dca$f004ea30$a101280a@.phx.gbl...
> In terms of how many records can be in a table depends on
> how much info is in each record. Theoretically SQL Server
> can handle up to a Terabyte of data (1,000 Gigabytes).
> Hope this helps.
> Chad
> >--Original Message--
> >Hi,
> >
> >What is the maximum number of records I can have in a SQL
> 2000 table? or Is
> >there a max.
> >
> >Thanks
> >
> >Mike
> >
> >
> >.
>
|||> Theoretically SQL Server
> can handle up to a Terabyte of data (1,000 Gigabytes).
According to the Books Online, rows per table is 'Limited by available
storage'.
--
Hope this helps.
Dan Guzman
SQL Server MVP
--
SQL FAQ links (courtesy Neil Pike):
http://www.ntfaq.com/Articles/Index.cfm?DepartmentID=800
http://www.sqlserverfaq.com
http://www.mssqlserver.com/faq
--
"Chad" <ChadLucy@.msn.com> wrote in message
news:045301c38dca$f004ea30$a101280a@.phx.gbl...
> In terms of how many records can be in a table depends on
> how much info is in each record. Theoretically SQL Server
> can handle up to a Terabyte of data (1,000 Gigabytes).
> Hope this helps.
> Chad
> >--Original Message--
> >Hi,
> >
> >What is the maximum number of records I can have in a SQL
> 2000 table? or Is
> >there a max.
> >
> >Thanks
> >
> >Mike
> >
> >
> >.
> >|||"Mike R" <n@.nospam.co.uk> wrote in message
news:bm1fml$301$1$8302bc10@.news.demon.co.uk...
> Hi,
> What is the maximum number of records I can have in a SQL 2000 table? or
Is
> there a max.
> Thanks
> Mike
>
Thanks for all you help
Kind Regards
Mike
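For a quick look at how close a table is to those storage limits, sp_spaceused reports the current row count and space used (the table name below is a placeholder):

-- Rows, reserved, data, index and unused space for one table
EXEC sp_spaceused 'dbo.MyLargeTable'
-- Totals for the whole database
EXEC sp_spaceused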

Maximum number of records per second that can be inserted into SQLServer 2000.

Summary: Maximum number of records per second that can be inserted into
SQLServer 2000.

I am trying to insert hundreds (preferably even thousands) of records
per second in to SQLServer table (see below) but I am getting the
following error in the Windows Event Viewer Application log file:

"Insufficent Memory....."

And very few records were inserted, and no errors were sent back via
the JDBC.

By removing the indexes on the table we have stopped getting the error
message and have managed to load the table at 300 records per second.
However I have couple of questions:

1) Are the indexes definitely to blame for this error and is there
anyway of getting around this problem i.e. keeping the indexes in place
when inserting?

2) How should I configure SQLServer to maximise the speed of
inserts?

3) What is the limiting factor for inserting into SQLServer?

4) Does anyone know of any metrics for inserting records? At what
point should we consider load balancing across DBs.

I currently populate 1.6 million records into this table. Once again
thanks for the help!!

CREATE TABLE [result] (
[id] numeric(20,0) NOT NULL,
[iid] numeric(20,0) NOT NULL,
[sid] numeric(20,0) NOT NULL,
[pn] varchar(30) NOT NULL,
[tid] numeric(20,0) NOT NULL,
[stid] numeric(6,0) NOT NULL,
[cid] numeric(20,0) NOT NULL,
[start] datetime NOT NULL,
[ec] numeric(5,0) NOT NULL
)
GO
CREATE INDEX [ix_resultstart]
ON [dbo].[result]([start])
GO
CREATE INDEX [indx_result_1]
ON [dbo].[result]([id], [sid], [start], [ec])
GO
CREATE INDEX [indx_result_3]
ON [dbo].[result]([id], [sid], [stid], [start])
GO
CREATE INDEX [indx_result_2]
ON [dbo].[result]([id], [sid], [start])
GO

Hi

Is it possible to set this up so that you use DTS to do this insert? Write
the data to a text file and then run the DTS package. Perhaps start a
scheduled job that runs the DTS Package.

This would be considerably faster than individual insert statements.

--
-Dick Christoph
<JSParker1@.hotmail.co.uk> wrote in message
news:1144416921.518307.31270@.j33g2000cwa.googlegro ups.com...
> Summary: Maximum number of records per second that can be inserted into
> SQLServer 2000.
> I am trying to insert hundreds (preferably even thousands) of records
> per second in to SQLServer table (see below) but I am getting the
> following error in the Windows Event Viewer Application log file:
> "Insufficent Memory....."
> And very few records were inserted and no errors where sent back via
> the JDBC.
> By removing the indexes on the table we have stopped getting the error
> message and have managed to load the table at 300 records per second.
> However I have couple of questions:
> 1) Are the indexes definitely to blame for this error and is there
> anyway of getting around this problem i.e. keeping the indexes in place
> when inserting?
> 2) How should I configure SQLServer to maximise the speed of
> inserts?
> 3) What is the limiting factor for inserting into SQLServer?
> 4) Does anyone know of any metrics for inserting records? At want
> point should we consider load balancing across DBs.
>
> I currently populate 1.6 million records into this table. Once again
> thanks for the help!!
>
> CREATE TABLE [result] (
> [id] numeric(20,0) NOT NULL,
> [iid] numeric(20,0) NOT NULL,
> [sid] numeric(20,0) NOT NULL,
> [pn] varchar(30) NOT NULL,
> [tid] numeric(20,0) NOT NULL,
> [stid] numeric(6,0) NOT NULL,
> [cid] numeric(20,0) NOT NULL,
> [start] datetime NOT NULL,
> [ec] numeric(5,0) NOT NULL,
> )
> GO
> CREATE INDEX [ix_resultstart]
> ON [dbo].[result]([start])
> GO
> CREATE INDEX [indx_result_1]
> ON [dbo].[result]([id], [sid], [start], [ec])
> GO
> CREATE INDEX [indx_result_3]
> ON [dbo].[result]([id], [sid], [stid], [start])
> GO
> CREATE INDEX [indx_result_2]
> ON [dbo].[result]([id], [sid], [start])
> GO
|||You could also pass in an XML file and directly insert its contents
into a table. I've found this method several times faster (in my case,
8x) than calling a stored procedure for each record.
|||<JSParker1@.hotmail.co.uk> wrote in message
news:1144416921.518307.31270@.j33g2000cwa.googlegro ups.com...
> Summary: Maximum number of records per second that can be inserted into
> SQLServer 2000.

"Quite a few". I don't know what the limits aer and Id oubt anyone can say
for sure. But you can look up the TPC benchmarks for ideas.

> I am trying to insert hundreds (preferably even thousands) of records
> per second in to SQLServer table (see below) but I am getting the
> following error in the Windows Event Viewer Application log file:
> "Insufficent Memory....."
> And very few records were inserted and no errors where sent back via
> the JDBC.

I believe JDBC has (had?) some performance issues, so it may not be your
best choice.

> By removing the indexes on the table we have stopped getting the error
> message and have managed to load the table at 300 records per second.
> However I have couple of questions:
> 1) Are the indexes definitely to blame for this error and is there
> anyway of getting around this problem i.e. keeping the indexes in place
> when inserting?

Well, not sure they are "definitely" to blame, but they will slow down DML
statements since they increase the overhead.

But there's ways around this.

> 2) How should I configure SQLServer to maximise the speed of
> inserts?

Well, for one thing, "how do you need to do it?" BULK INSERT or BCP will be
far faster than individual inserts.

Inserting a row at a time will be slower than batching "N" rows per commit. What's N? It depends.
Too many and the commits will take too long and slow things down. Too few and
you're committing more often than needed.

You can try putting your indexes on a different set of disks.
Also, pay VERY close attention to your disk setup. Hardware RAID over
software RAID; RAID 10 is probably going to be better than RAID 5. Keep in
mind the logging has to be synchronous, so often that's where the disk
bottleneck will be.

Take advantage of perfmon to track disk queues and other metrics.

> 3) What is the limiting factor for inserting into SQLServer?
> 4) Does anyone know of any metrics for inserting records? At want
> point should we consider load balancing across DBs.

SQL doesn't necessarily do load balancing as you may think.

But again, is this constant inserts over the course of the day or a bulk
insert?

I do a quarterly load of millions of records (somewhat wide) and can insert
and rebuild the indices in about 2-3 hours.

Hope some of this helps.

>
> I currently populate 1.6 million records into this table. Once again
> thanks for the help!!
>
> CREATE TABLE [result] (
> [id] numeric(20,0) NOT NULL,
> [iid] numeric(20,0) NOT NULL,
> [sid] numeric(20,0) NOT NULL,
> [pn] varchar(30) NOT NULL,
> [tid] numeric(20,0) NOT NULL,
> [stid] numeric(6,0) NOT NULL,
> [cid] numeric(20,0) NOT NULL,
> [start] datetime NOT NULL,
> [ec] numeric(5,0) NOT NULL,
> )
> GO
> CREATE INDEX [ix_resultstart]
> ON [dbo].[result]([start])
> GO
> CREATE INDEX [indx_result_1]
> ON [dbo].[result]([id], [sid], [start], [ec])
> GO
> CREATE INDEX [indx_result_3]
> ON [dbo].[result]([id], [sid], [stid], [start])
> GO
> CREATE INDEX [indx_result_2]
> ON [dbo].[result]([id], [sid], [start])
> GO|||(JSParker1@.hotmail.co.uk) writes:
> By removing the indexes on the table we have stopped getting the error
> message and have managed to load the table at 300 records per second.
> However I have couple of questions:
> 1) Are the indexes definitely to blame for this error and is there
> anyway of getting around this problem i.e. keeping the indexes in place
> when inserting?
> 2) How should I configure SQLServer to maximise the speed of
> inserts?
> 3) What is the limiting factor for inserting into SQLServer?
> 4) Does anyone know of any metrics for inserting records? At want
> point should we consider load balancing across DBs.

1) Indexes do add overhead to inserts; that cannot be denied.

2) That depends a little on the answer to the question you did not ask.
But a standard reply would be: you shouldn't.

3) A lot of things: network, CPU, disk etc.

4) I guess that at some point, it may pay off to set up partitioned
views over partitioned servers, but with 1.6 million rows you are
not there yet.

But you did not ask the most important question: how do I insert many
rows into SQL Server efficiently.

If you are sending INSERT statements that look like:

INSERT result (id, iid, sid, pb, tid, stid, cid, start, ec)
VALUES(9, 9, 9, '99999', 9, 9, 9, '20060408 12:12:12', 9)

you have chosen the slowest option available.

If you use a parameterised query, you will be better off, and probably
even a little better if you use a stored procedure.

But since you would still be sending one row at a time, there is a lot
of network overhead, so if it's possible to use some bulk mechanism,
there is a lot to gain. I don't know if JDBC exposes any bulk-copy
facilities, but that can very well be an option. Using XML as suggested
in one post is also an option.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx
|||JSParker1@.hotmail.co.uk wrote:
> Summary: Maximum number of records per second that can be inserted into
> SQLServer 2000.
> I am trying to insert hundreds (preferably even thousands) of records
> per second in to SQLServer table (see below) but I am getting the
> following error in the Windows Event Viewer Application log file:
> "Insufficent Memory....."

Did you maybe misconfigure your system? You might have set up SQL
Server to use more memory than you have virtual memory on your machine.
Otherwise I don't see how SQL Server should bail out with this error.

> And very few records were inserted and no errors where sent back via
> the JDBC.

Hint: use batch mode if you don't yet. Alternatives: bcp, DTS.

Kind regards

robert
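Following up on the bulk-load suggestions above, a minimal sketch of loading the result table from a flat file with BULK INSERT (the file path, terminators and batch size are placeholders; bcp or DTS would achieve much the same thing):

-- Load a tab-delimited extract in batches of 10,000 rows; TABLOCK lets the
-- load take a table lock, which reduces locking and logging overhead
BULK INSERT dbo.result
FROM 'C:\loads\result_extract.txt'
WITH (
    FIELDTERMINATOR = '\t',
    ROWTERMINATOR = '\n',
    BATCHSIZE = 10000,
    TABLOCK
)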

Friday, March 9, 2012

Maximum Length Issue

Hi ,
I perform the query shown below to find the duplicate records within the
table itself:
"SELECT f_table.flt_Id, f_table.Psg_Id, f_table.Flt_dt_Id
FROM
[SELECT
Count(F_Sgmt_History.flt_Id),
Count(F_Sgmt_History.Psg_Id),
Count(F_Sgmt_History.Flt_dt_Id),
F_Sgmt_History.flt_Id, F_Sgmt_History.Psg_Id, F_Sgmt_History.Flt_dt_Id
FROM F_Sgmt_History
GROUP BY F_Sgmt_History.flt_Id, F_Sgmt_History.Psg_Id,
F_Sgmt_History.Flt_dt_Id
HAVING (((Count(F_Sgmt_History.flt_Id))>1) AND
((Count(F_Sgmt_History.Psg_Id))>1) AND
((Count(F_Sgmt_History.Flt_dt_Id))>1))]. AS f_table INNER JOIN F_Sgmt_History
ON (f_table.Flt_dt_Id = F_Sgmt_History.Flt_dt_Id) AND (f_table.Psg_Id =
F_Sgmt_History.Psg_Id) AND (f_table.flt_Id = F_Sgmt_History.flt_Id)"
But I get the following error
"Server: Msg 103, Level 15, State 7, Line 3
The identifier that starts with 'SELECT
Count(F_Sgmt_History.flt_Id),
Count(F_Sgmt_History.Psg_Id),
Count(F_Sgmt_History.Flt_dt_Id),
F_Sgmt_History.flt_Id, ' is too long. Maximum length is 128.
Server: Msg 156, Level 15, State 1, Line 3
Incorrect syntax near the keyword 'AS'. "
I am not sure what's wrong with the statement, please help.
Travis Tan
Instead of '[' and ']' (square brackets), use '(' and ')', and do not use '.'
before 'AS'.
Your problem will be solved.
"Travis" ?? ????:

> Hi ,
> I perform the query show below to find the duplicate records within the
> table itself
> "SELECT f_table.flt_Id, f_table.Psg_Id, f_table.Flt_dt_Id
> FROM
> [SELECT
> Count(F_Sgmt_History.flt_Id),
> Count(F_Sgmt_History.Psg_Id),
> Count(F_Sgmt_History.Flt_dt_Id),
> F_Sgmt_History.flt_Id, F_Sgmt_History.Psg_Id, F_Sgmt_History.Flt_dt_Id
> FROM F_Sgmt_History
> GROUP BY F_Sgmt_History.flt_Id, F_Sgmt_History.Psg_Id,
> F_Sgmt_History.Flt_dt_Id
> HAVING (((Count(F_Sgmt_History.flt_Id))>1) AND
> ((Count(F_Sgmt_History.Psg_Id))>1) AND
> ((Count(F_Sgmt_History.Flt_dt_Id))>1))]. AS f_table INNER JOIN F_Sgmt_History
> ON (f_table.Flt_dt_Id = F_Sgmt_History.Flt_dt_Id) AND (f_table.Psg_Id =
> F_Sgmt_History.Psg_Id) AND (f_table.flt_Id = F_Sgmt_History.flt_Id)"
> But I get the following error
> "Server: Msg 103, Level 15, State 7, Line 3
> The identifier that starts with 'SELECT
> Count(F_Sgmt_History.flt_Id),
> Count(F_Sgmt_History.Psg_Id),
> Count(F_Sgmt_History.Flt_dt_Id),
> F_Sgmt_History.flt_Id, ' is too long. Maximum length is 128.
> Server: Msg 156, Level 15, State 1, Line 3
> Incorrect syntax near the keyword 'AS'. "
> I am not sure wat's wrong with the statment , please help
> --
> Travis Tan
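Putting that advice together, a corrected sketch of the statement (the COUNT columns are dropped from the derived table's select list because every column of a derived table must have a name, and they are only needed in the HAVING clause):

SELECT f_table.flt_Id, f_table.Psg_Id, f_table.Flt_dt_Id
FROM
(SELECT F_Sgmt_History.flt_Id, F_Sgmt_History.Psg_Id, F_Sgmt_History.Flt_dt_Id
 FROM F_Sgmt_History
 GROUP BY F_Sgmt_History.flt_Id, F_Sgmt_History.Psg_Id, F_Sgmt_History.Flt_dt_Id
 HAVING Count(F_Sgmt_History.flt_Id) > 1
    AND Count(F_Sgmt_History.Psg_Id) > 1
    AND Count(F_Sgmt_History.Flt_dt_Id) > 1
) AS f_table
INNER JOIN F_Sgmt_History
ON (f_table.Flt_dt_Id = F_Sgmt_History.Flt_dt_Id)
AND (f_table.Psg_Id = F_Sgmt_History.Psg_Id)
AND (f_table.flt_Id = F_Sgmt_History.flt_Id)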

Maximum insert commit size

Hi, All,

if I set the "Maximum insert commit size" to 10 ( 0 is the default) in a OLE destination,

what does the 10 means? 10 records or 10 MB/KB of data?


Thanks

10 records.

Wednesday, March 7, 2012

Maximum Date in a group of records

I have a table in which I need to find the maximum date per Doc_No per Parent_Doccategory. I can get the maximum date per Doc_No, but I can't get the information for the next level. The following script gets the maximum date per airplane, but I should get a maximum date each time the parent_doccategory changes. Can anyone help? This is the script so far:

CREATE PROCEDURE [dbo].[sp_Maximum_Change_Date] AS

select Effective_Updated , parent_doc_no, parent_doccategory from gdb_01_4.dbo.aircraft_changes As S
where effective_updated = (select max(effective_updated)
from gdb_01_4_test.dbo.aircraft_changes where
parent_doc_no = S.parent_doc_no) group by parent_doc_no, parent_doccategory, effective_updated order by parent_doc_no
GO

PARENT_DOC_NO  PARENT_DOCCATEGORY  EFFECTIVE_UPDATED
129 AC 3/24/2004 1:54:29 PM
129 AC 4/2/2004 1:44:39 PM
129 AE 6/24/2004 8:49:13 AM
129 AU 9/28/2004 12:16:30 PM

I want to get the one record for AC with the 4/2 date, the one for AE and the one for AU. My script is only returning the AU record because it has the maximum date for Parent_Doc_No 129.

Quote:

Originally Posted by crackerbox

I have a table that I need to find the maximum date per Doc_No per Parent_Doccategory. I can get the maximum date per Doc_No but I can't get the information for the next level. The following script is getting the maximum date per airplane but I should get a maximum date each time the parent_doccategory changes. Can anyone help? This is the script so far:

CREATE PROCEDURE [dbo].[sp_Maximum_Change_Date] AS

select Effective_Updated , parent_doc_no, parent_doccategory from gdb_01_4.dbo.aircraft_changes As S
where effective_updated = (select max(effective_updated)
from gdb_01_4_test.dbo.aircraft_changes where
parent_doc_no = S.parent_doc_no) group by parent_doc_no,parent_doccategory, effective_updated, order by parent_doc_no
GO

PARENT_DOC_NO  PARENT_DOCCATEGORY  EFFECTIVE_UPDATED
129 AC 3/24/2004 1:54:29 PM
129 AC 4/2/2004 1:44:39 PM
129 AE 6/24/2004 8:49:13 AM
129 AU 9/28/2004 12:16:30 PM

I want to get the one record for AC with the 4/2 date, the one for AE and the one for AU. My script is only returning the AU record because it has the maximum date for Parent_Doc_No 129.


Try adding another argument in the first WHERE clause saying which one you want; for example, add

and parent_doccategory = 'AE'

before the group by clause|||

Quote:

Originally Posted by Taftheman

Try adding another argument in the first where clause saying which one you want like example add

and parent_doccategory = 'AE'

before the group by clause


That will work, but then it eliminates the other two lines that I also need. I basically need two group levels, one for the Parent_Doc_No and the other for the Parent_DocCategory, so that it first looks at the Parent_Doc_No and then at the Parent_DocCategory and gets the most up-to-date record for each category.
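One way to get that second grouping level is to extend the correlated subquery so it matches on both the document number and the category. A sketch based on the script above (both table references point at the same database here; the original mixed gdb_01_4 and gdb_01_4_test):

select Effective_Updated, parent_doc_no, parent_doccategory
from gdb_01_4.dbo.aircraft_changes as S
where effective_updated = (select max(effective_updated)
                           from gdb_01_4.dbo.aircraft_changes
                           where parent_doc_no = S.parent_doc_no
                             and parent_doccategory = S.parent_doccategory)
order by parent_doc_no, parent_doccategory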

Maximum Date in a group of records

Hi all,
I got this table:
Id StartDate EndDate
a 19/03/2001 18/03/2002
a 19/03/2002 18/04/2002*
b 13/08/2000 12/08/2001
b 13/08/2001 12/08/2002
b 13/08/2002 10/07/2002*

Sorting and grouping I am OK with, but I need to select only the records that
have the latest EndDate. (See *)
Any ideas? Thanks in advance
rashid
This query returns row(s) with the latest EndDate for each ID, but the row
for ID='B' isn't the one you highlighted.

SELECT id, startdate, enddate
FROM SomeTable AS S
WHERE enddate =
(SELECT MAX(enddate)
FROM SomeTable
WHERE id = S.id)

Hope this helps.

--
David Portas
SQL Server MVP
--

Maximum date calculation for all records

For Crystal Reports version 8.5, I need a report for last year's amounts and this year's amounts. My raw data looks like: fund-type, fund-id, customer-id, check amount, check date, last year's beginning date, last year's ending date, this year's beginning date, and this year's ending date. The beginning and ending dates are subject to change. Also, out of the 2,000 records in the raw data, about 5 of the records have the beginning and ending dates attached. I have used the summary calculation of Maximum to find the beginning and ending dates. My problem is that I cannot use the Maximum function in the formula fields to determine if a check is for last year or this year. I also need totals for a customer's checks (last year and this year), fund-id amounts (this year and last year), and fund-type (last year and this year). When I try to create running-totals, I receive the error message that the summary/running totals cannot be created. Is there a way to create this report?
Thank you for any and all help and advice,
TracySHC
Create a formula having this code:

If year(datecol)<year(currentdate) then
1
else
2
Place it in the details section.
Now group the report by this formula and use summaries accordingly.