
Monday, March 26, 2012

MDAC 2.6 and SQL Server 2000

I have been working on a VB6 app that talks to a SQL Server 2000 (SP2) engine via OLE DB. All works well with the exception of one transaction. That particular transaction updates two tables via stored procs and is then either committed or rolled back.

I get a weird error whereby, when the transaction calls the second stored proc, the data from the first stored proc is gone. I have put a breakpoint in VB before the second stored proc is executed and the data exists, but after execution of the second stored proc the data is gone.

The issue gets weirder because the app runs fine on a Win2k machine with MDAC 2.5 but not with MDAC 2.6. I have other transactions that update via two stored procs, but they are fine. I tried the app under MDAC 2.7 and that doesn't work either.

There are no rollbacks or commits in the stored procs. It is also worth noting that if I am only updating the second table, by adding 5 rows for example, only the last row will be written.
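The two-proc transaction described above can be sketched in miniature. The snippet below is only a stand-in: it uses Python's stdlib sqlite3 in place of the VB6/ADO + SQL Server 2000 stack, the table names and the two "proc" functions are hypothetical, and sqlite's deferred transaction plays the role of the client-managed transaction.

```python
import sqlite3

# Hypothetical stand-in for the two tables updated by the two stored procs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_a (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("CREATE TABLE table_b (id INTEGER PRIMARY KEY, val TEXT)")
conn.commit()

def proc_update_a(cur):
    # Stand-in for the first stored proc: no COMMIT/ROLLBACK inside.
    cur.execute("INSERT INTO table_a (val) VALUES ('first update')")

def proc_update_b(cur):
    # Stand-in for the second stored proc: no COMMIT/ROLLBACK inside.
    cur.execute("INSERT INTO table_b (val) VALUES ('second update')")

cur = conn.cursor()
proc_update_a(cur)

# At the "breakpoint" between the two calls, the first update is visible
# on the same connection even though nothing has been committed yet:
count_a = cur.execute("SELECT COUNT(*) FROM table_a").fetchone()[0]
print(count_a)  # 1

proc_update_b(cur)
conn.commit()   # on error the client would call conn.rollback() instead
```

Within one open transaction, earlier statements' effects are visible to later statements on the same connection, so rows that vanish between the two calls suggest something is ending the transaction in between.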

Any suggestions would be most welcome.

RE: Q1 "Any suggestions would be most welcome."

A1 Are you quite certain you have tested your applications and results using the same MDAC levels across the server and the clients being tested (each time)? (There is an executable utility available to check and verify MDAC version levels.) Some symptoms sound like they may result from isolation-level issues, while others sound quite peculiar given the data as presented.

What is the "weird error" you are receiving? The data that exists and then is gone: is it stored in a table and then "rolled back" out of the table? I have attached the MS Component Checker if you are concerned about MDAC. You can either extract it or rename it to *.exe and it will extract automatically (I had to give it a .zip extension because exe files are not uploadable).

A possible simple explanation for the "weird error" as described is that it might be nothing more than an ordinary dirty read (made before a successful commit). The "when updating the second table by adding 5 rows, for example, only the last row will be written" issue is more peculiar. I've sometimes seen such things in procedures that use cursors (when "unexpected" conditions exist) and/or where errors are not handled well.

Is each stored procedure doing a commit or rollback (within the stored procedure)? Can you post the stored procedures (or a simpler version of what you are doing) and the VB code?

The more I thought about your problem, I remembered a similar situation from a couple of weeks ago. Take a look at the following post and see if it fits:

post (http://dbforums.com/showthread.php?s=&threadid=554748)

Hi.
Firstly, thank you for your help. For reference, I am developing on an MDAC 2.6 machine running Windows 2000 SP2, but I have referenced MDAC 2.5 in the VB project references. The SQL Server is running MDAC 2.6.

I can confirm that the stored procedures do not have any rollbacks; this is all handled by the VB code. The stored proc is called once for each row that is to be updated, as the VB code searches through a grid row by row. If any changes are detected on a row then the stored proc is called to perform the update. Once all rows have been searched and updated (if necessary) the commit is called. In the event of an error a rollback is called.

Thanks for your help!

I would run SQL Profiler for both the successful and unsuccessful transactions, then look at the two and see what is different. If you don't see an answer within these results, please post the profiler output.

RE: "I can confirm that the stored procedures do not have any rollbacks; this is all handled by the VB code. [...] In the event of an error a rollback is called."

Q1 I'm guessing "this is all handled by the VB code" means the ADO/OLE DB API is issuing the actual transaction statements on the SQL Server. That could account for the apparent MDAC-related differences you are seeing. rnealejr's suggestion to run SQL Profiler traces for both the successful and unsuccessful transactions and compare them in minute detail is a very good one.

Q2 Just to clarify, this sounds like a description of a series of updates (one for each updated row), collectively nested in a single transaction for the entire grid. Is that correct?

Hi.

I have finally managed to make some progress on this matter. I ran the Profiler with a trace on two different situations, i.e. one which works OK and one which doesn't. I kept things simple and traced: 1 - a deletion of one row, on its own, and 2 - a deletion of two rows, which rolls back the first deletion so that only the last row worked on is actually committed (deleted). I am not sure how to post the trace files on here properly (forgive me, I am new to the Profiler) but the text output is:

First, the deletion of one row, which works...
---
set implicit_transactions on (Visual Basic)
---
declare @P1 varchar(255) (Visual Basic)
set @P1='OK'
exec UOM_CHANGE @P1 output, 1, '108', '108', 'tracetest', 'TRC', '12/31/3000'
select @P1
---
IF @@TRANCOUNT > 0 COMMIT TRAN (Visual Basic)
---

Now, secondly, the deletion of 2 rows, which doesn't work so well...
---
set implicit_transactions on (Visual Basic)
---
declare @P1 varchar(255) (Visual Basic)
set @P1='OK'
exec UOM_CHANGE @P1 output, 1, '108', '108', 'trace test 1', 'TRC1', '12/31/3000'
select @P1
---
SET NO_BROWSETABLE ON (Visual Basic) *
---
SET FMTONLY ON EXEC UOM_CHANGE ' ',0,' ',' ',' ',' ',' ' SET FMTONLY OFF (Visual Basic) *
---
set fmtonly off (Visual Basic) *
---
SET NO_BROWSETABLE OFF (Visual Basic) *
---
declare @P1 varchar(255) (Visual Basic)
set @P1='OK'
exec UOM_CHANGE @P1 output, 1, '109', '109', 'trace test 2', 'TRC2', '12/31/3000'
select @P1
---
IF @@TRANCOUNT > 0 COMMIT TRAN (Visual Basic)
---

Now then, the only differences between the two (apart from, of course, the stored proc being called twice in the second trace) are the four operations that I have highlighted above with an asterisk. From reading MSDN it would seem that there is a known bug for which a patch is available, so I will need to look into this further.
Please see this article, ref Q281633

http://support.microsoft.com/default.aspx?scid=KB;en-us;q281633

Our customer's build is MDAC 2.5, so we must code to their requirements, but possibly the MDAC 2.6 service pack may work. Watch this space!
Thanks for your help; it's proven very useful.

Bingo! Are you running the original version of MDAC 2.6 (no service packs)? If so, then you hit the nail on the head. Actually, I ran into this problem a few months ago; once I saw your Profiler output with the SET FMTONLY, it reminded me.
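For reference, the client-side pattern the thread settles on (one stored-proc call per changed grid row, with a single commit or rollback for the whole grid) can be sketched as below. This is a sketch only: Python's stdlib sqlite3 stands in for VB6/ADO against SQL Server, and the `uom` table, the grid tuples, and the `uom_change` function are hypothetical simplifications of the real UOM_CHANGE proc.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE uom (code TEXT PRIMARY KEY, descr TEXT)")
conn.executemany("INSERT INTO uom VALUES (?, ?)",
                 [("108", "old a"), ("109", "old b"), ("110", "old c")])
conn.commit()

def uom_change(cur, code, descr):
    # Stand-in for the UOM_CHANGE stored proc: no COMMIT/ROLLBACK inside.
    cur.execute("UPDATE uom SET descr = ? WHERE code = ?", (descr, code))

# Grid rows as (code, new_descr, changed?) tuples.
grid = [("108", "trace test 1", True),
        ("109", "old b", False),
        ("110", "trace test 2", True)]

cur = conn.cursor()
try:
    for code, descr, changed in grid:
        if changed:              # per-row change detection
            uom_change(cur, code, descr)
    conn.commit()                # one commit for the entire grid
except sqlite3.Error:
    conn.rollback()              # any failure discards every row's update
```

With this shape, anything that silently ends the transaction between two proc calls (as the SET FMTONLY metadata probes appear to do here) throws away all the earlier rows' work, which matches the symptoms reported above.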

Monday, March 19, 2012

MAXIMUM PUBLICATION LIMITATION IN SUBSCRIPTION

(1) Do you mean the number of subscribers in a publication? In that case the answer is no designed limit, and the merge optimization paper talks about scaling out to thousands of subscribers.
(2) Have a look at the merge agent's profile for upload and download changes per batch.
(3) This message doesn't look like an error, just a warning based on the inactivity threshold of Replication Monitor. My guess is blocking was the cause if it completed after the others had synchronized. You can verify this next time by running sp_who2 and looking at the blocking column (or better still, sp_who3 if you have it).
HTH,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended SQL Server 2000 replication book:
http://www.nwsu.com/0974973602p.html)

>--Original Message--
>Hi,
>I am dealing with merge replication for 20 databases (each of size 2 GB).
>Average transactions per day is 3000.
>1. I want to know, is there any limitation in the number of publications that can be set in a subscription (now we have 20 DBs)?
>2. The updates are merging in batches of 100 updates. Is there any way to increase the batch size?
>3. There is a delay in merging of data in a particular database (DB1); on that day more than 400,000 updates were done. Two times the error message 'There is no response since last 30 minutes' appeared. After completion of replication in other databases, DB1 starts synchronizing. What is the reason for those errors?
>
>Thanks,
>Soura
Hi,
Thanks for your reply.
(1) I am replicating between 2 servers (local and remote; the remote server is the publisher and the local server is the subscriber). Currently 20 databases are replicated, and I need to replicate some more. Is there any limitation on the number of databases that can be replicated, and are there any implications?
(2) Now the batch size is 100. I want to change this to some large value. Are there any implications from this?
Thanks,
Soura
(1) No limit on the number of databases. The only implication I would consider is the hit on the server when all these merge agents are running, and I'd try to ensure the schedules overlap as little as possible.
(2) On setting the batch size to a large value: each batch is a transaction, so an error in one insert (e.g. due to a connection failure) will cause the entire batch to fail. For a large batch this could be an issue on some networks.
HTH,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended SQL Server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
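The batch-as-transaction point can be illustrated with a small stand-alone sketch. This is only an analogy, not the merge agent itself: Python's stdlib sqlite3 stands in for the replicated tables, and `apply_batch` is a hypothetical helper that commits or rolls back each batch as one unit, the way a merge batch succeeds or fails as a whole.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.commit()

def apply_batch(rows):
    """Apply one batch as a single transaction: all rows land, or none do."""
    cur = conn.cursor()
    try:
        cur.executemany("INSERT INTO t VALUES (?, ?)", rows)
        conn.commit()
        return True
    except sqlite3.Error:
        conn.rollback()
        return False

# A large batch with one bad row (duplicate key) loses all 100 good changes...
big_batch = [(i, f"row {i}") for i in range(100)] + [(5, "duplicate")]
ok_big = apply_batch(big_batch)                  # False: whole batch rolled back

# ...whereas smaller batches confine the damage to the batch with the bad row.
ok_small1 = apply_batch([(0, "a"), (1, "b")])    # True: committed
ok_small2 = apply_batch([(1, "dup"), (2, "c")])  # False: only these two rows lost
```

This is why a very large batch size can be risky on an unreliable network: one dropped connection mid-batch forfeits the entire batch's worth of merged changes.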
Thank you for your reply.