Wednesday, March 28, 2012
MDAC 2.8 Issues?
MDAC 2.8 appears to have an issue when posting a transaction against
a SQL Server 2000 SP2 database. The OLE DB Provider for SQL
Server is the database connector, and the application
making the call is coded in Visual Basic 6.1.
The client sees this error returned as a "Runtime 6 -
Overflow Error" on the Visual Basic end, as reported by the
OLE DB Provider. Below is a snippet taken from the SQL
Server Profiler trace:
declare @P1 int
set @P1=180150062
declare @P2 int
set @P2=2
declare @P3 int
set @P3=4
declare @P4 int
set @P4=-1
exec sp_cursoropen @P1 output, N'SELECT TblJobs.[Control Status Code], TblJobs.[Sales Date], TblJobs.[Promise Date], TblCust.Last, TblCust.First, TblCust.MI, TblCust.[Home Phone] FROM TblCust RIGHT JOIN TblJobs ON TblCust.[Customer ID] = TblJobs.[Customer ID] WHERE TblJobs.[Control Number] = 00801328', @P2 output, @P3 output, @P4 output
select @P1, @P2, @P3, @P4
exec sp_cursorfetch 180150062, 32, 1, 1
exec sp_cursorclose 180150062
Error: 2809, Severity: 18, State: 1
Error: 16945, Severity: 16, State: 1
declare @P1 int
set @P1=0
declare @P2 int
set @P2=229378
declare @P3 int
set @P3=294916
declare @P4 int
set @P4=0
exec sp_cursoropen @P1 output, N'exec TblTransactions', @P2 output, @P3 output, @P4 output
select @P1, @P2, @P3, @P4
Error: 2809, Severity: 18, State: 1
Error: 16945, Severity: 16, State: 1
Any suggestions? Would using a stored procedure help any? Are there known issues with MDAC 2.8 operating in this manner? When I roll back to MDAC 2.5 or 2.6, the operation goes through successfully. I don't relish rolling back a decent number of client workstations, seeing as other MDAC-dependent applications work fine. Unfortunately I don't have the VB source code to post, so I might be grasping at straws by posting here.
Any help would be appreciated!

Howdy
MDAC 2.8 should really be run against SQL 2000 SP3A - nothing less.
You are running SQL 2000 SP2.
There seem to be lots of problems with SQL 2000 and MDAC & VB (6 & .NET).
Cheers,
SG

I just tested this out by upgrading my local MSDE to SP3a. I restarted and tried the same transaction type, and it failed just the same. I really am dreading rolling back all of these workstations, especially since just this one custom app is the one giving me trouble. I wish I were a VB programmer myself so I could dig into things for myself!
I can't imagine something as simple as trying to append a row to an existing set could cause problems. And if MDAC 2.8 is the culprit (since rolling back seems to fix things), I can't imagine no one else piping up about odd MDAC 2.8 behavior. Oh well... *sigh*

Howdy,
It might be worth trying to set NOCOUNT ON in the sprocs to see what that does...
It may be that the "chatter" the app produces is causing buffer overflows. I have found that setting NOCOUNT ON helps with some apps.
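To illustrate the suggestion, a minimal sketch (the procedure name, parameter, and column are hypothetical - the point is the SET NOCOUNT ON at the top of the body):

CREATE PROCEDURE usp_PostTransaction @ControlNumber int
AS
SET NOCOUNT ON   -- suppress the "N rows affected" messages that add chatter between server and client
INSERT INTO TblTransactions (ControlNumber) VALUES (@ControlNumber)
GO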
Cheers,
SG.
Monday, March 26, 2012
MDAC 2.6 and SQL Server 2000
I have a weird error whereby, when the transaction calls the second stored proc, the data from the first stored proc is gone. I have put a breakpoint in VB before the second stored proc is executed and the data exists, but after execution of the second stored proc the data is gone.
The issue gets weirder because the app runs fine on a Win 2K machine with MDAC 2.5, but not with MDAC 2.6. I have other transactions that update via two stored procs, and they are fine. I tried the app under MDAC 2.7 and that doesn't work either.
I have no rollbacks or commits in the stored procs. Also worth noting: if I am only updating the second table, by adding 5 rows for example, only the last row will be written.
Any suggestions would be most welcome.

RE: Q1 Any suggestions would be most welcome.
A1 Are you quite certain you have tested your applications and results using the same MDAC levels across the server and the clients being tested (each time)? (There is an executable utility available to check and verify MDAC version levels.) Some symptoms sound like they may result from isolation-level issues, while others sound quite peculiar given the data as presented.

What is the "weird error" you are receiving? The data that exists and then is gone - is it stored in a table and then "rolled back" out of the table? I have attached the MS Component Checker if you are concerned about MDAC. You can either extract it or rename it to *.exe and it will extract automatically (I had to put a zip extension on it because exe's are not uploadable).

A possible simple explanation for the "weird error" as described is that it might be nothing more than an ordinary dirty read (made before a successful commit). The "when updating the second table by adding 5 rows for example, only the last row will be written" issue is more peculiar. I've sometimes seen such things in procedures that use cursors (when "unexpected" conditions exist) and/or when errors are not handled well.

Is each stored procedure doing a commit or rollback (within the stored procedure)? Can you post the stored procedures (or a simpler version of what you are doing) and the VB code?

The more I thought about your problem, the more I remembered a similar situation a couple of weeks ago - take a look at the following post and see if it fits:
post (http://dbforums.com/showthread.php?s=&threadid=554748)

Hi.
Firstly, thank you for your help. For reference, I am developing on an MDAC 2.6 machine running Windows 2K SP2, but I have referenced MDAC 2.5 in the VB project references. The SQL Server is running MDAC 2.6.
I can confirm that the stored procedures do not have any rollbacks; this is all handled by the VB code. The stored proc is called once for each row that is to be updated, as the VB code searches through a grid row by row. If any changes are detected on a row, then the stored proc is called to perform the update. Once all rows have been searched and updated (if necessary), the commit is called. In the event of an error, a rollback is called.
Thanks for your help!

I would run SQL Profiler for both the successful and unsuccessful transactions - then look at the two and see what is different. If you don't see an answer within these results, please post the profiler output.

RE: "I can confirm that the stored procedures do not have any rollbacks; this is all handled by the VB code. [...] In the event of an error, a rollback is called."
Q1 I'm guessing "this is all handled by the VB code" means the ADO, ODBC, etc. API is issuing the actual transaction syntax on the SQL Server. That could account for the apparent MDAC-related differences you are seeing. The rnealejr suggestion to run SQL Profiler traces for both the successful and unsuccessful transactions and compare them in minute detail is a very good one.
Q2 Just to clarify, this sounds like a description of a series of transactions (one for each updated row), collectively nested in a single transaction for the entire grid. Is that correct?

Hi.
I have finally managed to make some progress on this matter. I ran the Profiler with a trace on two different situations, i.e. one which works OK and one which doesn't. I kept things simple and traced: 1 - a deletion of 1 row, on its own, and 2 - a deletion of 2 rows, which rolls back the first deletion so that only the last row worked on is actually committed (deleted). I am not sure how to post the trace files on here properly (forgive me, I am new to the Profiler), but the text output is:
Firstly, for the deletion of one row, which works...
---
set implicit_transactions on (Visual Basic)
---
declare @P1 varchar(255) (Visual Basic)
set @P1='OK'
exec UOM_CHANGE @P1 output, 1, '108', '108', 'tracetest', 'TRC', '12/31/3000'
select @P1
---
IF @@TRANCOUNT > 0 COMMIT TRAN (Visual Basic)
---
Now, secondly, the deletion of 2 rows, which doesn't work so well...
---
set implicit_transactions on (Visual Basic)
---
declare @P1 varchar(255) (Visual Basic)
set @P1='OK'
exec UOM_CHANGE @P1 output, 1, '108', '108', 'trace test 1', 'TRC1', '12/31/3000'
select @P1
---
SET NO_BROWSETABLE ON (Visual Basic) *
---
SET FMTONLY ON EXEC UOM_CHANGE ' ',0,' ',' ',' ',' ',' ' SET FMTONLY OFF (Visual Basic) *
---
set fmtonly off (Visual Basic) *
---
SET NO_BROWSETABLE OFF (Visual Basic) *
---
declare @P1 varchar(255) (Visual Basic)
set @P1='OK'
exec UOM_CHANGE @P1 output, 1, '109', '109', 'trace test 2', 'TRC2', '12/31/3000'
select @P1
---
IF @@TRANCOUNT > 0 COMMIT TRAN (Visual Basic)
---
Now then, the only differences between the two (apart from, of course, the stored proc being called twice in the second trace) are the four operations I have highlighted above with an asterisk. From reading MSDN, it would seem that there is a known bug that a patch is available for, so I will need to look into this further.
Please see this article, ref Q281633
http://support.microsoft.com/default.aspx?scid=KB;en-us;q281633
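As a hedged sketch, the interaction can be replayed by hand in Query Analyzer - this assumes, per the KB article, that the injected metadata probe is what disturbs the open implicit transaction (the UOM_CHANGE calls are lifted from the trace above):

SET IMPLICIT_TRANSACTIONS ON
DECLARE @P1 varchar(255)
SET @P1 = 'OK'
EXEC UOM_CHANGE @P1 output, 1, '108', '108', 'trace test 1', 'TRC1', '12/31/3000'
-- replay the metadata probe the provider injects before the second call:
SET FMTONLY ON
EXEC UOM_CHANGE ' ', 0, ' ', ' ', ' ', ' ', ' '
SET FMTONLY OFF
-- check whether the first call's work is still inside an open transaction:
SELECT @@TRANCOUNT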
Our customers' build is MDAC 2.5, so we must code to their requirements, but possibly the MDAC 2.6 service pack may work. Watch this space!
Thanks for your help, it's proven very useful.

Bingo! Are you running the original version of MDAC 2.6 (no service packs)? If so, then you hit the nail on the head. Actually, I ran into this problem a few months ago - once I saw your Profiler output with the SET FMTONLY, it reminded me.
Wednesday, March 21, 2012
MaxSize on Transaction_Log
If I set a MaxSize on my transaction log, what happens when that size is
reached? Will it automatically overwrite old data, or will it be an error?
Our transaction log has suddenly become really huge. I have read a lot in this list about this, but I haven't found anything about what happens with MaxSize enabled.
Regards,
/Mikael S

What recovery model do you have the database set to?
--
HTH
Ryan Waight, MCDBA, MCSE
"Mikael" <mikael@.DELETETHISslk.nu> wrote in message
news:%23gxY%23QMtDHA.1788@.tk2msftngp13.phx.gbl...
> Hello,
> If I set a MaxSize on my transaction log, what happens when that size is
> reached? Will it automatically overwrite old data, or will it be an error?
> Our transaction log has suddenly become real huge. I have read a lot in
this
> list about this, but I haven't found anything about what happens with the
> MaxSize enabled.
> Regards,
> /Mikael S
I have Recovery Model: FULL.
The transaction log is set to grow automatically.
Auto Shrink is not set
I use SQL2000 on a SRV2000.
Thank you for your reply,
/Mikael S
"Ryan Waight" <Ryan_Waight@.nospam.hotmail.com> skrev i meddelandet
news:uvN4pMNtDHA.2416@.TK2MSFTNGP10.phx.gbl...
> What recovery model do you have the database set to..
> --
> HTH
> Ryan Waight, MCDBA, MCSE
> "Mikael" <mikael@.DELETETHISslk.nu> wrote in message
> news:%23gxY%23QMtDHA.1788@.tk2msftngp13.phx.gbl...
> > Hello,
> >
> > If I set a MaxSize on my transaction log, what happens when that size is
> > reached? Will it automatically overwrite old data, or will it be an
error?
> > Our transaction log has suddenly become real huge. I have read a lot in
> this
> > list about this, but I haven't found anything about what happens with
the
> > MaxSize enabled.
> >
> > Regards,
> >
> > /Mikael S
> >
> >
If you reach MaxSize, then connections that execute commands which need to write to the log (all modification commands, like INSERT, UPDATE, DELETE, etc.) will receive error number 9002.
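To see how close the log is to that cap, a quick check (standard commands; the database name here is hypothetical):

DBCC SQLPERF(LOGSPACE)   -- log size and percent used for every database
-- the configured cap itself (size and maxsize are in 8 KB pages):
SELECT name, size, maxsize FROM MyDb..sysfiles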
--
Tibor Karaszi, SQL Server MVP
Archive at: http://groups.google.com/groups?oi=djq&as_ugroup=microsoft.public.sqlserver
"Mikael" <mikael@.DELETETHISslk.nu> wrote in message news:O9JIaBOtDHA.1088@.tk2msftngp13.phx.gbl...
> I have Recovery Model: FULL
> The Transaction Log is set to Automatically Growth
> Auto Shrink is not set
> I use SQL2000 on a SRV2000.
> Thank you for your reply,
> /Mikael S
>
>
> "Ryan Waight" <Ryan_Waight@.nospam.hotmail.com> skrev i meddelandet
> news:uvN4pMNtDHA.2416@.TK2MSFTNGP10.phx.gbl...
> > What recovery model do you have the database set to..
> >
> > --
> > HTH
> > Ryan Waight, MCDBA, MCSE
> >
> > "Mikael" <mikael@.DELETETHISslk.nu> wrote in message
> > news:%23gxY%23QMtDHA.1788@.tk2msftngp13.phx.gbl...
> > > Hello,
> > >
> > > If I set a MaxSize on my transaction log, what happens when that size is
> > > reached? Will it automatically overwrite old data, or will it be an
> error?
> > > Our transaction log has suddenly become real huge. I have read a lot in
> > this
> > > list about this, but I haven't found anything about what happens with
> the
> > > MaxSize enabled.
> > >
> > > Regards,
> > >
> > > /Mikael S
> > >
> > >
> >
> >
OK, I see.
Is there a way to analyze the transaction log to see what caused the file to grow so rapidly over the last couple of days? It is now up to 20 GB, while the database is 400 MB. This has worked fine for a year, but in the last few days (perhaps weeks) the log has "gone mad".
We use some Stored Procedures and connections, but not very big
transactions. I have read the KBs earlier posted here, and I will backup and
shrink the database (log) this evening. It would be interesting to see what
caused the log to be so big suddenly.
Regards,
/Mikael S
"Tibor Karaszi" <tibor.please_reply_to_public_forum.karaszi@.cornerstone.se>
skrev i meddelandet news:OPWXmHOtDHA.2340@.TK2MSFTNGP12.phx.gbl...
> If you reach maxsize, then the connections that executes commands which
need to write to the log
> (all modification commands, like INSERT, UPDATE, DELETE etc) will receive
error number 9002.
> --
> Tibor Karaszi, SQL Server MVP
> Archive at:
http://groups.google.com/groups?oi=djq&as_ugroup=microsoft.public.sqlserver
>
> "Mikael" <mikael@.DELETETHISslk.nu> wrote in message
news:O9JIaBOtDHA.1088@.tk2msftngp13.phx.gbl...
> > I have Recovery Model: FULL
> > The Transaction Log is set to Automatically Growth
> > Auto Shrink is not set
> >
> > I use SQL2000 on a SRV2000.
> >
> > Thank you for your reply,
> >
> > /Mikael S
> >
> >
> >
> >
> > "Ryan Waight" <Ryan_Waight@.nospam.hotmail.com> skrev i meddelandet
> > news:uvN4pMNtDHA.2416@.TK2MSFTNGP10.phx.gbl...
> >
> > > What recovery model do you have the database set to..
> > >
> > > --
> > > HTH
> > > Ryan Waight, MCDBA, MCSE
> > >
> > > "Mikael" <mikael@.DELETETHISslk.nu> wrote in message
> > > news:%23gxY%23QMtDHA.1788@.tk2msftngp13.phx.gbl...
> > > > Hello,
> > > >
> > > > If I set a MaxSize on my transaction log, what happens when that
size is
> > > > reached? Will it automatically overwrite old data, or will it be an
> > error?
> > > > Our transaction log has suddenly become real huge. I have read a lot
in
> > > this
> > > > list about this, but I haven't found anything about what happens
with
> > the
> > > > MaxSize enabled.
> > > >
> > > > Regards,
> > > >
> > > > /Mikael S
> > > >
> > > >
> > >
> > >
> >
> >
It is possible to view the transaction log with a third-party tool, Lumigent Log Explorer. In the meantime, have a read through the following articles on the transaction log:
http://www.support.microsoft.com/?id=256650 INF: How to Shrink the SQL
Server 7.0 Tran Log
http://www.support.microsoft.com/?id=317375 Log File Grows too big
http://www.support.microsoft.com/?id=110139 Log file filling up
http://www.mssqlserver.com/faq/logs-shrinklog.asp Shrink File
http://www.support.microsoft.com/?id=315512 Considerations for Autogrow
and AutoShrink
http://www.support.microsoft.com/?id=272318 INF: Shrinking Log in SQL
Server 2000 with DBCC SHRINKFILE
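A minimal sketch of the backup-then-shrink sequence those articles walk through (the database, logical log file, and path names are hypothetical):

USE MyDb
BACKUP LOG MyDb TO DISK = 'D:\Backups\MyDb_log.trn'   -- back up and truncate the inactive log
DBCC SHRINKFILE (MyDb_log, 500)                       -- then shrink the physical file toward 500 MB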
HTH
Ryan Waight, MCDBA, MCSE
"Mikael" <mikael@.DELETETHISslk.nu> wrote in message
news:ePlNYYOtDHA.700@.TK2MSFTNGP11.phx.gbl...
> OK I see.
> Is there a way to analyze the Transaction Log to see what caused the file
to
> rapidly grow the last couple of days? It is now up to 20GB, the database
is
> 400MB. This has worked fine for a year, but the latest days (perhaphs some
> weeks) the log has "gone mad".
> We use some Stored Procedures and connections, but not very big
> transactions. I have read the KBs earlier posted here, and I will backup
and
> shrink the database (log) this evening. It would be interesting to see
what
> caused the log to be so big suddenly.
> Regards,
> /Mikael S
>
>
> "Tibor Karaszi"
<tibor.please_reply_to_public_forum.karaszi@.cornerstone.se>
> skrev i meddelandet news:OPWXmHOtDHA.2340@.TK2MSFTNGP12.phx.gbl...
> > If you reach maxsize, then the connections that executes commands which
> need to write to the log
> > (all modification commands, like INSERT, UPDATE, DELETE etc) will
receive
> error number 9002.
> >
> > --
> > Tibor Karaszi, SQL Server MVP
> > Archive at:
>
http://groups.google.com/groups?oi=djq&as_ugroup=microsoft.public.sqlserver
> >
> >
> > "Mikael" <mikael@.DELETETHISslk.nu> wrote in message
> news:O9JIaBOtDHA.1088@.tk2msftngp13.phx.gbl...
> > > I have Recovery Model: FULL
> > > The Transaction Log is set to Automatically Growth
> > > Auto Shrink is not set
> > >
> > > I use SQL2000 on a SRV2000.
> > >
> > > Thank you for your reply,
> > >
> > > /Mikael S
> > >
> > >
> > >
> > >
> > > "Ryan Waight" <Ryan_Waight@.nospam.hotmail.com> skrev i meddelandet
> > > news:uvN4pMNtDHA.2416@.TK2MSFTNGP10.phx.gbl...
> > >
> > > > What recovery model do you have the database set to..
> > > >
> > > > --
> > > > HTH
> > > > Ryan Waight, MCDBA, MCSE
> > > >
> > > > "Mikael" <mikael@.DELETETHISslk.nu> wrote in message
> > > > news:%23gxY%23QMtDHA.1788@.tk2msftngp13.phx.gbl...
> > > > > Hello,
> > > > >
> > > > > If I set a MaxSize on my transaction log, what happens when that
> size is
> > > > > reached? Will it automatically overwrite old data, or will it be
an
> > > error?
> > > > > Our transaction log has suddenly become real huge. I have read a
lot
> in
> > > > this
> > > > > list about this, but I haven't found anything about what happens
> with
> > > the
> > > > > MaxSize enabled.
> > > > >
> > > > > Regards,
> > > > >
> > > > > /Mikael S
> > > > >
> > > > >
> > > >
> > > >
> > >
> > >
> >
> >
>
Maximum Worker Threads
SQL Server had reached its maximum (255) worker threads during transaction log creation.
SQL Server was unable to create a worker thread to create the transaction log file.
The database had several blocked processes but no deadlocks.
The next time period the database transaction log file was created without a
problem.
This is the first time the maximum worker threads limit has been reached.
This system has run for 2 years now.
Should I change the Maximum Worker Threads setting?
Thanks,

What exactly do you mean by "SQL Server had reached its maximum (255) worker threads during the transaction log creation"? Was this the error message?
And what OS and SQL Server version are you using?
Thanks
GYK
"Joe K.." wrote:
> SQL Server had reached it's maximum (255) worker threads during the
> transaction log creation.
> SQL Server was unable create to worker thread to create the transaction log
> file.
> The database had several blocked processes but no deadlocks.
> The next time period the database transaction log file was created without a
> problem.
> This is the first time the maximum threads has been reached.
> This system has run for 2 years know.
> Should I change the Maximum Worker Theads?
> Thanks,
Yes, I reached the default maximum worker threads of 255.
The SQL Server error log message is:
2004-12-08 10:31:12.88 spid239 startsubprocess: Limit on 'Max worker
threads' reached.
Thanks,
"Joe K.." wrote:
> SQL Server had reached it's maximum (255) worker threads during the
> transaction log creation.
> SQL Server was unable create to worker thread to create the transaction log
> file.
> The database had several blocked processes but no deadlocks.
> The next time period the database transaction log file was created without a
> problem.
> This is the first time the maximum threads has been reached.
> This system has run for 2 years know.
> Should I change the Maximum Worker Theads?
> Thanks,
>
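For reference, a sketch of inspecting and raising the limit ('max worker threads' is an advanced option; the 512 below is purely illustrative, and depending on the version a restart may be needed before the new value takes effect):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max worker threads'        -- view the configured and run values
EXEC sp_configure 'max worker threads', 512   -- illustrative value only
RECONFIGURE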
Monday, March 19, 2012
maximum transaction retention property?
The maximum transaction retention property for the distribution database is 2. Also, I was looking at the properties and it says that the subscription could expire or get dropped if not synchronised within 72 hours. So my question here is: under what condition would it expire, and when would it get dropped?
Thanks
Please see my reply to your other post.
Rgds,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
Monday, March 12, 2012
Maximum number of updates within a Transaction
Good day to all
SQL Server 2000.
I have a problem with a piece of code which updates some tables using a transaction. This process brings the program to a halt when updating large files.
With smaller files, the process finishes without problems.
I have noticed that if I comment out the "begin transaction" and its respective "commit", the same code executes without problems, even when updating large files.
I suspect that there is a limit on the number of records a transaction can hold before a commit is issued. I am surprised, however, that SQL Server halts without messages or warnings.
Is this a configuration issue? If there is a limit on the number of records a transaction can hold, what is this limit? Is there anything I can do to get a warning from SQL Server when a situation like this is reached (or indeed to avoid it)?
Thank you in advance for any help
Cecil Ricardo
hi cecil,
my guess is that the transaction is involved in a recursive operation: it triggers something, which then triggers the transaction back in a continuous loop. that's why it halts your system.
without the begin transaction and commit transaction clauses, recursive operations stop at 32 levels deep, which causes your batch or sp to commit successfully.
regards,
joey

There are no limits to the number of rows you can commit in a transaction; it is bound by the size of your transaction log file(s). Did you check for any error messages in the errorlog? Note that the delay you may have observed was probably due to the commit operation itself. Depending on the number of changes you made within the transaction, a commit/rollback can take a long time. And the semantics of using begin/commit around, say, 100 insert statements are different from executing 100 individual insert statements, so you cannot really compare the two approaches. By default, if you do not specify a begin/commit transaction, the statement runs in auto-commit mode, meaning each statement is a transaction by itself. Often, you will get better performance by using begin transaction/commit due to buffering of log writes. The batch size, however, depends on a lot of factors. So do the following:
1. If you really need to use a transaction for updating a large table, then ensure that you have enough space on your drives to account for log growth. Time for commit/rollback depends on the changes.
2. If you just need to update a large number of rows, then you can use a batch mechanism where you commit only, say, 1000 rows at a time in a simple loop (see the sketch after this list). You can use SET ROWCOUNT to restrict the rows affected by DML statements in older versions of SQL Server, and the TOP clause in SQL Server 2005.
3. Transactional consistency and semantics are different if you run a bunch of DML statements within a transaction vs. running each DML statement separately. They are not the same, so you need to be aware of the differences. See the Books Online topics on auto-commit transactions and user-specified transactions.
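As a rough sketch of the batching idea in point 2, for SQL Server 2000 (the table, column, and batch size are hypothetical):

SET ROWCOUNT 1000                    -- cap every DML statement at 1000 rows
WHILE 1 = 1
BEGIN
    BEGIN TRANSACTION
    UPDATE BigTable SET Processed = 1 WHERE Processed = 0
    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION           -- nothing left to update
        BREAK
    END
    COMMIT TRANSACTION               -- commit this chunk, freeing locks and reusable log space
END
SET ROWCOUNT 0                       -- restore the default

On SQL Server 2005, the same loop would use UPDATE TOP (1000) ... and drop the SET ROWCOUNT statements.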
I have checked that; it is not the case.
But thank you for the valuable information about recursive operations ending at 32 levels deep.
Thank you, Joey
Thank you so much for the comprehensive explanation, Umachandar.
I have applied your suggestion of committing smaller chunks of rows. It worked fine.
Wednesday, March 7, 2012
maximum datafile/log file size
Database size: 1,048,516 TB ***
Databases per instance of SQL Server: 32,767
Filegroups per database: 256
Files per database: 32,767
File size (data): 32 TB
File size (log): 32 TB
footnote ***
The data portion of a database cannot exceed 2 GB in size when using the SQL Server 2000 Desktop Engine (MSDE 2000) or the Microsoft Data Engine (MSDE) 1.0. The total size of the database, including log files, can exceed 2 GB provided the sum of the sizes of the data files remains 2 GB or lower.
In addition, database objects include all tables, views, stored procedures, extended stored procedures, triggers, rules, defaults, and constraints. The sum of the number of all these objects in a database cannot exceed 2,147,483,647.

The database I have in mind is going to grow to be on the order of terabytes in size. My question about size limits has been answered, but can someone put this into the context of performance and admin implications?

There is no one fix for this issue.
querying large sets of data has many obstacles to overcome.
Index creation:
clustered, non-clustered, composite, indexed views, computed columns.
Query creation:
probably the most misunderstood part of the dba's job.
What are your indexes? What are your search arguments? Are you computing columns in the queries? Join strategies - and too many others.
Keep your result sets as small as you can and use effective search arguments.
Use stored procedures and views, and create statistics on commonly searched columns that will not be indexed (see the example below).
Don't forget that transactions and queries are natural enemies, and there is a great benefit to creating an OLAP solution for decision support services.
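For instance, a minimal example of the "create statistics" suggestion above (the statistics, table, and column names are hypothetical):

CREATE STATISTICS st_OrderDate ON TblOrders (OrderDate) WITH FULLSCAN
-- gives the optimizer row-count estimates for searches on OrderDate
-- without the maintenance cost of a full index on that column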
Microsoft SQL Server 2000 Performance Tuning Technical Reference (http://www.microsoft.com/MSPress/books/4944.asp)
Microsoft SQL Server 2000 RDBMS Performance Tuning Guide for Data Warehousing (http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/rdbmspft.mspx)
Maximum Articles for Transaction Replication SQL 2000 and SQl 2005
We are planning to implement transactional replication for 800 DBs on a SQL 2000 cluster (as the publisher), with separate servers for the Distributor and Subscriber.
What is the maximum number of articles I can have per DB?
What are the capacity limitations for this type of setup, other than hardware?
What other areas do I need to make sure are in place before this is implemented?
Appreciate any comments.
Please advise...
Thanks
Is that 800 databases or 1 db with 800 articles?
800 articles is no problem - I am replicating 850 with no problems, however all in 1 db.
The problem you will have is on the distributor: you will define 800 publications and 800 subscriptions, if only replicating to 1 server.
On the distributor you will see a separate exe for each replication agent, and with transactional replication you will have 1 Log Reader, 1 Snapshot, and 1 Distribution Agent for each pub. Each exe will try to capture anywhere from 2 to 5 MB of RAM. You do the math.
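Spelled out: 800 publications means roughly 800 Log Reader agents plus 800 Distribution Agents - about 1,600 agent processes - and at 2 to 5 MB apiece that works out to somewhere between 3 and 8 GB of RAM on the distributor before any Snapshot Agents run.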
Best to test solution before implementation. And only replicate what you
absolutely need...
Good luck - post results if you implement in prod.
ChrisB MCDBA
MSSQLConsulting.com
"KetanB" wrote:
> Hello,
> We are planing to implement transaction replication for 800 DB on SQL 2000
> custer (as a publisher) and have seperate servers for Distribution and
> Subscriber.
> What is the maximum nnumber of articles I can have per DB.
> What are the capacity limitations for this type of setup other than hardware
> What other areas i need to make sure are in place before this is implemented.
> Appreciate any comments
> Please advise...
> Thanks