When Transactions Aren't Enough: Fixing a SQL Server Race Condition With Locking Hints
You’d think wrapping things in a transaction would be enough. It usually is… until it isn’t. This post is about one of those edge cases: an import process that should have been safe under concurrent load, but wasn’t.
The failure? A classic: “These columns don’t currently have unique values.” The root cause? A sneaky race condition buried inside a multi-threaded process, where two threads, both acting in good faith and each running inside a transaction, still managed to violate a uniqueness rule built into the business logic.
The Symptom
The logs were full of red flags:
System.ArgumentException: These columns don't currently have unique values.
at System.Data.DataTable.set_PrimaryKey()
It all stemmed from one of our import routines failing during a cache load. The underlying cause was a duplicate value in a column that was supposed to be unique.
We traced it back to a query that runs as part of several different SSIS packages and creates the offending rows. It looked like this:
DECLARE @ItemId INT;
SELECT @ItemId = ItemId
FROM dbo.ItemKeyMap
WHERE ResourceId = @ResourceId
  AND KeyValue = @KeyValue;
IF @ItemId IS NULL
BEGIN;
    INSERT INTO dbo.Item (Name, Type, Owner)
    VALUES (@Name, @Type, @Owner);
    SELECT @ItemId = SCOPE_IDENTITY();
    INSERT INTO dbo.ItemKeyMap (ItemId, KeyValue, ResourceId)
    VALUES (@ItemId, @KeyValue, @ResourceId);
END;
This was all run inside a transaction. So how were duplicates still getting through?
The Problem: Race Conditions Within Transactions
In a high-concurrency environment, multiple threads were evaluating the SELECT condition at the same time. Each would find nothing matching the KeyValue/ResourceId pair, then happily proceed to INSERT. Two threads, one gap, two inserts. Boom—violation.
Transactions don’t protect you from this kind of race condition unless the reads themselves are protected. SQL Server’s default isolation level (READ COMMITTED) doesn’t prevent another session from inserting a conflicting row between your SELECT and your INSERT.
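To make the gap concrete, here’s an illustrative two-session timeline (not an actual trace), with both sessions running the query above under READ COMMITTED:
-- Session A                           Session B
-- ---------                           ---------
-- BEGIN TRAN;
-- SELECT ... FROM dbo.ItemKeyMap ...
--   (no row found)
--                                     BEGIN TRAN;
--                                     SELECT ... FROM dbo.ItemKeyMap ...
--                                       (no row found either)
-- INSERT INTO dbo.Item ...;
-- INSERT INTO dbo.ItemKeyMap ...;
-- COMMIT;
--                                     INSERT INTO dbo.Item ...;
--                                     INSERT INTO dbo.ItemKeyMap ...;
--                                       (duplicate KeyValue/ResourceId pair)
--                                     COMMIT;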
The Fix: UPDLOCK and HOLDLOCK
What we needed wasn’t just isolation; we needed the read itself to block conflicting writers until the transaction finished. Enter the classic locking hints:
SELECT @ItemId = ItemId
FROM dbo.ItemKeyMap WITH (UPDLOCK, HOLDLOCK)
WHERE ResourceId = @ResourceId
  AND KeyValue = @KeyValue;
The two hints do different jobs. UPDLOCK makes the SELECT take update locks instead of shared locks, so two sessions can’t both hold them on the same key. HOLDLOCK gives the statement SERIALIZABLE semantics: the locks are held until the transaction ends, and, crucially, when no matching row exists SQL Server locks the scanned range where that row would go, blocking anyone else from inserting into the gap. Now, if one thread gets there first, the others are forced to wait until it commits or rolls back. No more two-at-once inserts.
This approach ensured that either:
- Thread A inserts the record and commits, then Thread B sees the new record and does nothing.
- Or Thread A is still checking/inserting, and Thread B has to wait.
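You can watch the blocking yourself in two SSMS windows. A minimal sketch, using the tables defined below and assuming an Item row with Id = 1 already exists (the literal values are arbitrary):
-- Window 1: take the locks and leave the transaction open
BEGIN TRAN;
SELECT ItemId FROM dbo.ItemKeyMap WITH (UPDLOCK, HOLDLOCK)
WHERE ResourceId = 49 AND KeyValue = N'2000';
-- (no COMMIT yet)

-- Window 2: this INSERT blocks until Window 1 commits or rolls back
INSERT INTO dbo.ItemKeyMap (ItemId, KeyValue, ResourceId)
VALUES (1, N'2000', 49);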
Create Table Script
CREATE TABLE dbo.Item (
    Id INT IDENTITY(1,1) PRIMARY KEY,
    Name NVARCHAR(100) NOT NULL,
    Type NVARCHAR(50) NOT NULL,
    Owner NVARCHAR(50) NOT NULL
);
CREATE TABLE dbo.ItemKeyMap (
    Id INT IDENTITY(1,1) PRIMARY KEY,
    ItemId INT NOT NULL,
    KeyValue NVARCHAR(100) NOT NULL,
    ResourceId INT NOT NULL,
    Deleted BIT NOT NULL DEFAULT 0,
    EffectiveFrom DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    EffectiveTo DATETIME2 NULL,
    FOREIGN KEY (ItemId) REFERENCES dbo.Item(Id)
);
This table does not have a unique constraint on (KeyValue, ResourceId). Why?
- There’s a soft delete model — when Deleted = 1, a new row with the same KeyValue and ResourceId is allowed.
- There’s a concept of effectiveness — records are valid between EffectiveFrom and EffectiveTo, and no two valid records should overlap for the same KeyValue/ResourceId combo.
Enforcing all of that in SQL Server would add complexity and write overhead that we judged wasn’t worth it, so we enforce the rule in application logic instead.
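For reference, a filtered unique index could enforce part of the rule. This is a hypothetical sketch, not something we shipped, and it shows why database-side enforcement falls short here:
-- Hypothetical: uniqueness only among live, currently effective rows.
-- It covers the soft-delete case, but it cannot express "no overlapping
-- EffectiveFrom/EffectiveTo windows" for the same KeyValue/ResourceId.
CREATE UNIQUE NONCLUSTERED INDEX UX_ItemKeyMap_Current
ON dbo.ItemKeyMap (KeyValue, ResourceId)
WHERE Deleted = 0 AND EffectiveTo IS NULL;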
Validation: PowerShell Stress Testing
To prove the theory, we used two versions of a PowerShell script to simulate concurrency: one without the locking hints (to replicate the issue), and one with the fix in place.
Without Locks (duplicates happen)
This version omits UPDLOCK and HOLDLOCK, simulating the original logic under multi-threaded load. The test ran 5,000 iterations across 50 parallel threads, each inserting records with randomly selected KeyValues in a narrow range.
# Requires PowerShell 7+ for ForEach-Object -Parallel
$items = 1..5000
$items | ForEach-Object -Parallel {
    try {
        Start-Sleep -Milliseconds 100
        $connection = New-Object System.Data.SqlClient.SqlConnection("Data Source=.;Integrated Security=SSPI;Initial Catalog=YourDb")
        $connection.Open()
        $command = $connection.CreateCommand()
        $command.Transaction = $connection.BeginTransaction()
        # Narrow random range (2000-2099) forces frequent key collisions
        $command.Parameters.AddWithValue("@Name", "Old_Code") | Out-Null
        $command.Parameters.AddWithValue("@KeyValue", (Get-Random -Minimum 2000 -Maximum 2100)) | Out-Null
        $command.Parameters.AddWithValue("@ResourceId", 49) | Out-Null
        $command.Parameters.AddWithValue("@Type", "X") | Out-Null
        $command.Parameters.AddWithValue("@Owner", "System") | Out-Null
        # Same check-then-insert logic as production: no locking hints
        $command.CommandText = "DECLARE @ItemId INT;
            SELECT @ItemId = ItemId FROM dbo.ItemKeyMap
            WHERE ResourceId = @ResourceId AND KeyValue = @KeyValue;
            IF @ItemId IS NULL BEGIN;
                INSERT INTO dbo.Item (Name, Type, Owner)
                VALUES(@Name, @Type, @Owner);
                SELECT @ItemId = SCOPE_IDENTITY();
                INSERT INTO dbo.ItemKeyMap(ItemId, KeyValue, ResourceId)
                VALUES(@ItemId, @KeyValue, @ResourceId);
            END;
            SELECT @ItemId AS IdentityValue;"
        # The trailing SELECT is discarded; we only care about the writes
        $command.ExecuteNonQuery() | Out-Null
        $command.Transaction.Commit()
    } catch {
        if ($command -and $command.Transaction) { $command.Transaction.Rollback() }
        Write-Error $_
    } finally {
        if ($connection) { $connection.Close() }
    }
} -ThrottleLimit 50
We saw multiple duplicate rows in the ItemKeyMap table—pairs of rows with the same KeyValue and ResourceId. These violated the business logic and would trigger failures in downstream processing.
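A quick aggregate confirms them:
-- KeyValue/ResourceId pairs that appear more than once
SELECT KeyValue, ResourceId, COUNT(*) AS Occurrences
FROM dbo.ItemKeyMap
GROUP BY KeyValue, ResourceId
HAVING COUNT(*) > 1;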
With Locks (no duplicates)
This version adds WITH (UPDLOCK, HOLDLOCK) to the SELECT statement. It was executed under identical load: 5,000 iterations, 50 threads, with a different but similarly narrow KeyValue range.
$items = 1..5000
$items | ForEach-Object -Parallel {
    try {
        Start-Sleep -Milliseconds 100
        $connection = New-Object System.Data.SqlClient.SqlConnection("Data Source=.;Integrated Security=SSPI;Initial Catalog=YourDb")
        $connection.Open()
        $command = $connection.CreateCommand()
        $command.Transaction = $connection.BeginTransaction()
        # Different but equally narrow range (5000-5099)
        $command.Parameters.AddWithValue("@Name", "New_Code") | Out-Null
        $command.Parameters.AddWithValue("@KeyValue", (Get-Random -Minimum 5000 -Maximum 5100)) | Out-Null
        $command.Parameters.AddWithValue("@ResourceId", 49) | Out-Null
        $command.Parameters.AddWithValue("@Type", "X") | Out-Null
        $command.Parameters.AddWithValue("@Owner", "System") | Out-Null
        # Identical logic, but the SELECT now takes UPDLOCK + HOLDLOCK
        $command.CommandText = "DECLARE @ItemId INT;
            SELECT @ItemId = ItemId FROM dbo.ItemKeyMap WITH(UPDLOCK, HOLDLOCK)
            WHERE ResourceId = @ResourceId AND KeyValue = @KeyValue;
            IF @ItemId IS NULL BEGIN;
                INSERT INTO dbo.Item (Name, Type, Owner)
                VALUES(@Name, @Type, @Owner);
                SELECT @ItemId = SCOPE_IDENTITY();
                INSERT INTO dbo.ItemKeyMap(ItemId, KeyValue, ResourceId)
                VALUES(@ItemId, @KeyValue, @ResourceId);
            END;
            SELECT @ItemId AS IdentityValue;"
        $command.ExecuteNonQuery() | Out-Null
        $command.Transaction.Commit()
    } catch {
        if ($command -and $command.Transaction) { $command.Transaction.Rollback() }
        Write-Error $_
    } finally {
        if ($connection) { $connection.Close() }
    }
} -ThrottleLimit 50
Zero duplicates. Every thread either:
- Inserted a new record because no matching KeyValue/ResourceId pair existed yet, or
- Blocked briefly, then found the record another thread had just inserted and skipped its own insert.
The locking prevented race conditions entirely.
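If you want to see the waiting as it happens, watch for blocked sessions while the locked version runs; they show up in sys.dm_exec_requests with LCK_M_* wait types:
-- Sessions currently blocked behind another session's locks
SELECT session_id, blocking_session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;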
Final Thoughts
Adding the locks didn’t just prevent bad data; it also improved consistency, reduced noise in the logs, and avoided unnecessary retries. The tradeoff is extra blocking under contention, but for an import where correctness is the whole point, a brief wait beats a duplicate.