Good day to all
SQL Server 2000.
I have a problem with a piece of code, which updates some tables using a transaction. The process brings the program to a halt when updating large files.
With smaller files, the process finishes without problems.
I have noticed that, if I comment out the "begin transaction" and the respective "commit", the same code executes without problems, even when updating large files.
I suspect that there is a limit on the number of records a transaction can hold before a commit is issued. I am surprised however, that SQL Server halts, without messages or warnings.
Is this a configuration issue? If there is a limit on the number of records a transaction can hold, what is this limit? Is there anything I can do to get a warning from SQL Server when a situation like this is reached (or indeed to avoid it)?
Thank you in advance for any help
Cecil Ricardo
hi cecil,
my guess is that the transaction is involved in a recursive operation: it fires a trigger, which in turn fires back into the transaction in a continuous loop. That's why it halts your system.
Without the begin transaction and commit transaction clauses, recursive operations stop at 32 levels of nesting, which is why your batch or stored procedure commits successfully.
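If you want to rule recursion in or out, a quick diagnostic sketch (the database name YourDatabase is just a placeholder for your own) is:

-- server-level setting: are nested triggers allowed?
EXEC sp_configure 'nested triggers'

-- database-level setting: are recursive triggers enabled?
SELECT DATABASEPROPERTYEX('YourDatabase', 'IsRecursiveTriggersEnabled')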
regards,
joey
|||
There is no limit on the number of rows you can commit in a transaction; it is bound only by the size of your transaction log file(s). Did you check for any error messages in the errorlog? Note that the delay you observed was probably due to the commit operation itself: depending on the number of changes made within the transaction, a commit/rollback can take a long time. Also, the semantics of wrapping, say, 100 insert statements in a begin/commit are different from executing 100 individual insert statements, so you cannot really compare the two approaches.

By default, if you do not specify begin/commit transaction, each statement runs in auto-commit mode, meaning each statement is a transaction by itself. Often you will get better performance by using begin transaction/commit due to buffering of log writes, but the right batch size depends on a lot of factors. So do the following:
1. If you really need to use a transaction for updating a large table, ensure that you have enough space on your drives to accommodate log growth. The time taken by a commit/rollback depends on the volume of changes.
2. If you just need to update a large number of rows, you can use a batch mechanism where you commit only, say, 1000 rows at a time inside a simple loop (see the sketch after this list). You can use SET ROWCOUNT to restrict the rows affected by DML statements in older versions of SQL Server, and the TOP clause in SQL Server 2005.
3. Transactional consistency and semantics are different when you run a bunch of DML statements within a single transaction vs. running each DML statement separately. They are not the same, so you need to be aware of the differences. See the Books Online topics on auto-commit transactions and user-specified transactions.
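A minimal sketch of the batching idea in point 2 for SQL Server 2000; the table BigTable and the flag column Processed are just placeholders for your own schema:

SET ROWCOUNT 1000                 -- SQL Server 2000: cap rows affected per statement

DECLARE @rows int
SET @rows = 1

WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION

    UPDATE BigTable
    SET    Processed = 1          -- your real update goes here
    WHERE  Processed = 0          -- only touch rows not yet handled

    SET @rows = @@ROWCOUNT        -- rows updated by this batch; 0 means we are done

    COMMIT TRANSACTION            -- commit each small batch so the log stays small
END

SET ROWCOUNT 0                    -- reset back to unlimited

Each pass commits at most 1000 rows, so the transaction log never has to hold the whole update at once.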
|||I have checked that. It is not the case.
But thank you for the valuable information about recursive operations ending at 32 levels deep.
Thank you Joey
|||Thank you so much for the comprehensive explanation, Umachandar.
I have applied your suggestion of committing smaller chunks of rows. It worked fine.