Handling locks in a concurrent environment


I am working on a system with 15 concurrent processes that perform several operations against a SQL Server database; the versions range from 2005 to 2014. Each process works only on its own data in several tables: no process queries, deletes, updates, or inserts information belonging to another process. There are three tables involved, R, E and P, and the main work runs in a stored procedure inside a transaction. The operations it performs, in this order (a rough sketch follows the list), are:

BEGIN TRANSACTION (isolation level READ UNCOMMITTED)

  • INSERT hundreds of rows into R
  • UPDATE P, filtered by the process id FK, setting the maximum date found in R
  • DELETE from E, filtered by the process id FK and by dates
  • INSERT INTO E ... SELECT ... FROM R, filtered by the process id FK and by dates
  • COMMIT
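
To make the sequence concrete, here is a rough sketch of the procedure; the procedure name, parameter names, column names and the body of step 1 are placeholders, not my real code:

    -- Sketch only: names and parameters are illustrative
    CREATE PROCEDURE dbo.ProcessBatch
        @ProcessId INT,
        @DateFrom  DATETIME,
        @DateTo    DATETIME
    AS
    BEGIN
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
        BEGIN TRANSACTION;

        -- 1) insert hundreds of rows into R (data comes from the application)
        -- INSERT INTO R (ProcessId, [Date], ...) VALUES (...);

        -- 2) update P with the maximum date loaded into R for this process
        UPDATE P
        SET    LastDate = (SELECT MAX([Date]) FROM R WHERE ProcessId = @ProcessId)
        WHERE  ProcessId = @ProcessId;

        -- 3) delete this process's rows from E in the date range
        DELETE FROM E
        WHERE  ProcessId = @ProcessId
          AND  [Date] BETWEEN @DateFrom AND @DateTo;

        -- 4) repopulate E from R for the same process and date range
        INSERT INTO E (ProcessId, [Date])
        SELECT ProcessId, [Date]
        FROM   R
        WHERE  ProcessId = @ProcessId
          AND  [Date] BETWEEN @DateFrom AND @DateTo;

        COMMIT TRANSACTION;
    END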

    Every so often there are deadlocks. What I have done is create a CLUSTERED index on processID and the date field in tables R and E. I have also created NONCLUSTERED indexes on both tables covering processID, the date, and a few more fields.
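
    For reference, the indexes are roughly of this shape; the index names and the extra included columns below are illustrative, not the real ones:

    CREATE CLUSTERED INDEX CIX_R_Process_Date ON R (ProcessId, [Date]);
    CREATE CLUSTERED INDEX CIX_E_Process_Date ON E (ProcessId, [Date]);

    -- nonclustered indexes covering the same key plus a few more fields
    CREATE NONCLUSTERED INDEX IX_R_Process_Date ON R (ProcessId, [Date])
        INCLUDE (OtherField1, OtherField2);
    CREATE NONCLUSTERED INDEX IX_E_Process_Date ON E (ProcessId, [Date])
        INCLUDE (OtherField1, OtherField2);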

    I have set READ UNCOMMITTED because dirty reads are not a problem for me (no information is shared between processes), and I have disabled LOCK ESCALATION. Performance seems acceptable, but I would like to improve it. When a deadlock is triggered, I retry the transaction, and in most cases it then completes successfully. I understand that memory consumption could increase by setting LOCK_ESCALATION to DISABLE. I could also disable ROW_LOCK and PAGE_LOCK on the indexes, but I do not know whether that would be appropriate. How could this environment be improved to allow the existing concurrency? What other mechanisms could I integrate? Thanks
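
    What I have in place today looks roughly like this; the LOCK_ESCALATION table option requires SQL Server 2008 or later, and the procedure name and parameter values are illustrative:

    -- lock escalation disabled per table (SQL Server 2008+)
    ALTER TABLE R SET (LOCK_ESCALATION = DISABLE);
    ALTER TABLE E SET (LOCK_ESCALATION = DISABLE);

    -- retry when this session is chosen as deadlock victim (error 1205)
    DECLARE @Retry INT
    SET @Retry = 0
    WHILE @Retry < 3
    BEGIN
        BEGIN TRY
            EXEC dbo.ProcessBatch @ProcessId = 1, @DateFrom = '20170101', @DateTo = '20170131';
            BREAK;                      -- success, leave the loop
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
            IF ERROR_NUMBER() = 1205    -- deadlock victim: wait briefly and retry
            BEGIN
                SET @Retry = @Retry + 1;
                WAITFOR DELAY '00:00:01';
            END
            ELSE
            BEGIN
                DECLARE @Msg NVARCHAR(2048);
                SET @Msg = ERROR_MESSAGE();
                RAISERROR(@Msg, 16, 1); -- re-raise anything that is not a deadlock
                BREAK;
            END
        END CATCH
    END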

        
    asked by rbentx 19.04.2017 at 11:08

    1 answer


    Blocking problems occur when several processes compete to update the same records. This is not a problem if the processes are fast enough, because the jobs simply queue up and are processed as they arrive. I do not fully understand what your process does, but I usually resolve this in one of two ways:

    • Separate tables for each process: if you can, the ideal is for each process to use its own tables and only in the last step (the insert) write to the final table shared by all (see the sketch after this list).
    • Rethink the process: verify that every action you take is necessary and justified. Consider whether it could be done another way, identify which steps consume the most execution time and try to approach them differently.
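
    As a sketch of the first idea, each process could work against its own staging table and only the final insert would touch the shared table; all of the names below are invented:

    -- hypothetical staging table, one per process
    CREATE TABLE dbo.E_Staging_01
    (
        ProcessId INT          NOT NULL,
        [Date]    DATETIME     NOT NULL,
        Payload   VARCHAR(200) NULL
    );

    -- each process prepares its data in its own table, with no contention...
    INSERT INTO dbo.E_Staging_01 (ProcessId, [Date], Payload)
    SELECT ProcessId, [Date], Payload
    FROM   R
    WHERE  ProcessId = 1;

    -- ...and only the last step writes to the common table E
    INSERT INTO E (ProcessId, [Date], Payload)
    SELECT ProcessId, [Date], Payload
    FROM   dbo.E_Staging_01;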

    As a general rule you can also review the indexes you have on the tables. Indexes improve query performance but penalize insert and update operations. Verify that you have only the strictly necessary indexes, drop any you do not need, and, if you can, try to perform the operations in smaller blocks of records. I leave you a small script as a basic example of how to do that:

    -- Delete in small batches so each statement holds locks for a short time.
    -- [XXXXXX] and [YYYYYYY] are placeholder table names; Pky, AAAAA and BBBBB are placeholder columns.
    DECLARE @RowCount INT
    SET @RowCount = 1
    WHILE (@RowCount > 0)
    BEGIN
        DELETE FROM [XXXXXX] WHERE Pky IN (
            SELECT TOP 500 [XXXXXX].Pky
            FROM [XXXXXX] WITH (NOLOCK)
            INNER JOIN [YYYYYYY] WITH (NOLOCK) ON [XXXXXX].AAAAA = [YYYYYYY].BBBBB
        )
        SET @RowCount = @@ROWCOUNT    -- stop when the last batch deleted nothing
        WAITFOR DELAY '00:00:05'      -- pause 5 seconds between batches
    END
    

    In any case, I refer you back to my two points at the beginning. First verify that the process is properly planned and that there is no better way to do it; if there is not, review the indexes, and finally try to break the work into shorter steps using scripts like the example above.

        
    answered by 04.05.2018 at 09:06