I am working on a system with 15 concurrent processes that perform several operations against a SQL Server database (versions range from 2005 to 2014). Each process works on its own data in several tables; no process queries, deletes, updates, or inserts information belonging to another process. Three tables are involved: R, E, and P. The main logic runs in a stored procedure inside a transaction. The operations it performs are, in this sequence:
BEGIN TRANSACTION (at READ UNCOMMITTED isolation)
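A simplified sketch of what the procedure does inside that transaction (the column lists and statement bodies are placeholders; the real statements touch more fields):

```sql
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

BEGIN TRANSACTION;
    -- every statement is filtered by the calling process's ID,
    -- so no two processes ever touch the same rows
    INSERT INTO R (processID, [date], someField)      -- placeholder columns
        VALUES (@processID, GETDATE(), @someValue);
    UPDATE E
        SET someField = @someValue                    -- placeholder column
        WHERE processID = @processID;
    DELETE FROM P
        WHERE processID = @processID;
COMMIT TRANSACTION;
```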
Every so often deadlocks occur. What I have done is create a CLUSTERED index on processID and the date field in table R and table E. I have also created NONCLUSTERED indexes on both tables including processID, date, and some more fields.
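The indexes look roughly like this (shown for R; E follows the same pattern, and the index names and INCLUDE columns are placeholders):

```sql
-- clustered index so each process's rows are physically grouped together
CREATE CLUSTERED INDEX CIX_R_ProcessID_Date
    ON R (processID, [date]);

-- non-clustered covering index for the extra fields the queries read
CREATE NONCLUSTERED INDEX IX_R_ProcessID_Date_Covering
    ON R (processID, [date])
    INCLUDE (someField1, someField2);   -- placeholder column names
```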
I have set READ UNCOMMITTED because dirty reads are not a problem for me (no information is shared between processes), and I have disabled lock escalation. Performance seems acceptable, but I would like to improve it. When a deadlock is raised, I retry the transaction, and in most cases it then completes successfully. I understand that memory consumption may increase with LOCK_ESCALATION set to DISABLE. I could also disable row locks and page locks on the indexes, but I do not know whether that would be appropriate.

How could this environment be improved to better handle the existing concurrency? What other mechanisms could I integrate? Thanks
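For reference, the settings and the retry pattern I am using look roughly like this (procedure and index names are placeholders; note that the LOCK_ESCALATION table option requires SQL Server 2008 or later, and THROW requires 2012 or later):

```sql
-- lock escalation disabled per table (SQL Server 2008+)
ALTER TABLE R SET (LOCK_ESCALATION = DISABLE);

-- the option I am unsure about: disabling row and page locks on an
-- index forces SQL Server to take coarser (table-level) locks instead
ALTER INDEX IX_R_ProcessID_Date_Covering ON R
    SET (ALLOW_ROW_LOCKS = OFF, ALLOW_PAGE_LOCKS = OFF);

-- retry wrapper: re-run the work when chosen as the deadlock victim
DECLARE @retry INT = 3;
WHILE @retry > 0
BEGIN
    BEGIN TRY
        EXEC dbo.MainProcess @processID = @processID;  -- placeholder name
        SET @retry = 0;                                -- success, stop looping
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() = 1205 AND @retry > 1        -- 1205 = deadlock victim
            SET @retry = @retry - 1;                   -- retry the transaction
        ELSE
            THROW;  -- SQL Server 2012+; use RAISERROR on older versions
    END CATCH
END;
```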