I'm trying to understand why there is a discrepancy in lock count in `sys.dm_tran_locks` and `sqlserver.lock_acquired` extended event in certain cases. Here is my repro script, I'm using the `StackOverflow2013` database on SQL Server 2019 RTM, compat level 150.
```sql
/* Initial setup */
IF OBJECT_ID('dbo.HighQuestionScores', 'U') IS NOT NULL
    DROP TABLE dbo.HighQuestionScores;

CREATE TABLE dbo.HighQuestionScores
(
    Id INT PRIMARY KEY CLUSTERED,
    DisplayName NVARCHAR(40) NOT NULL,
    Reputation BIGINT NOT NULL,
    Score BIGINT
);

INSERT dbo.HighQuestionScores
    (Id, DisplayName, Reputation, Score)
SELECT u.Id,
       u.DisplayName,
       u.Reputation,
       NULL
FROM dbo.Users AS u;

CREATE INDEX ix_HighQuestionScores_Reputation
    ON dbo.HighQuestionScores (Reputation);
```
Next, I update the table statistics with a large fake row count:
```sql
/* Chaotic evil. This step is absolutely required to reproduce the issue. */
UPDATE STATISTICS dbo.HighQuestionScores WITH ROWCOUNT = 99999999999999;
DBCC FREEPROCCACHE WITH NO_INFOMSGS;
```
Then I open a transaction and update `Score` for a given `Reputation` value, say `56`:
```sql
BEGIN TRAN;

UPDATE dbo.HighQuestionScores
SET Score = 1
WHERE Reputation = 56 /* 8066 records */
AND 1 = (SELECT 1);

/* Source: https://www.erikdarlingdata.com/sql-server/helpers-views-and-functions-i-use-in-presentations/ Thanks, Erik */
SELECT *
FROM dbo.WhatsUpLocks(@@SPID) AS wul
WHERE wul.locked_object = N'HighQuestionScores';

ROLLBACK;
```
I get a bunch of page locks (despite having an index on Reputation). I'm guessing the bad estimates really did a number on the optimizer there.
[![page_locks][1]][1]
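If you don't have Erik's helper function handy, the same counts can be pulled straight from the DMV. This is just a quick sketch, run inside the open transaction:

```sql
/* Count this session's granted locks by resource type and mode */
SELECT tl.resource_type,
       tl.request_mode,
       COUNT(*) AS lock_count
FROM sys.dm_tran_locks AS tl
WHERE tl.request_session_id = @@SPID
  AND tl.request_status = 'GRANT'
GROUP BY tl.resource_type, tl.request_mode;
```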
I also double-checked using `sp_whoisactive`, and it returns the same information:
```xml
<Object name="HighQuestionScores" schema_name="dbo">
  <Locks>
    <Lock resource_type="OBJECT" request_mode="IX" request_status="GRANT" request_count="1" />
    <Lock resource_type="PAGE" page_type="*" index_name="PK__HighQues__3214EC072EE1ADBA" request_mode="X" request_status="GRANT" request_count="6159" />
  </Locks>
</Object>
```
Meanwhile, I also have an extended event session running on `sqlserver.lock_acquired`. When I look at the grouped data, I see **8066** page-lock events instead of the initial **6159**:
[![extended_event_lock_count][2]][2]
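For reference, a minimal session along these lines (the session name, database filter, and file target are incidental to the repro):

```sql
/* Minimal sketch of the lock_acquired session; adjust the
   database filter and target path to your environment. */
CREATE EVENT SESSION TrackPageLocks ON SERVER
ADD EVENT sqlserver.lock_acquired
    (WHERE (sqlserver.database_name = N'StackOverflow2013'))
ADD TARGET package0.event_file (SET filename = N'TrackPageLocks');

ALTER EVENT SESSION TrackPageLocks ON SERVER STATE = START;
```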
I definitely do not see a lock escalation (verified using the `sqlserver.lock_escalation` event), so my question is: why does the extended event report a higher lock count?
[1]: https://i.stack.imgur.com/OZPCj.png
[2]: https://i.stack.imgur.com/8rS0Q.png
The XE reports a page lock acquisition each time a row is updated (one event per row, for each of the 8066 rows affected by the update). However, those rows are stored on only 6159 unique pages, which explains the discrepancy.
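One way to confirm the rows-to-pages ratio is to count the matching rows against the distinct pages they live on. `%%physloc%%` and `sys.fn_PhysLocCracker` are undocumented, so treat this as a diagnostic sketch rather than production code:

```sql
/* Compare matching row count to the number of distinct pages
   holding those rows (undocumented physical-location functions). */
SELECT COUNT(*) AS matching_rows,
       COUNT(DISTINCT CONCAT(plc.file_id, ':', plc.page_id)) AS distinct_pages
FROM dbo.HighQuestionScores AS hqs
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) AS plc
WHERE hqs.Reputation = 56;
```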
I don't have StackOverflow2013 on this machine, but I get a similar result with StackOverflow2010:

- 1368 rows updated (and that many XEvents fired)
- 958 page locks
You can see the same pages being locked repeatedly in the XE output if you sort by `resource_0`:
[![screenshot of XE output in SSMS][1]][1]
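If the session writes to an `event_file` target, the repeats can also be counted with T-SQL. This assumes a target file name of `TrackPageLocks*.xel`; the derived table keeps the XML `value()` call out of the `GROUP BY`, where XML methods aren't allowed:

```sql
/* Sketch: count lock_acquired events per page from the .xel files.
   For PAGE locks, resource_0 holds the page id. */
SELECT d.page_id,
       COUNT(*) AS lock_events
FROM
(
    SELECT CAST(event_data AS XML)
             .value('(event/data[@name="resource_0"]/value)[1]', 'bigint') AS page_id
    FROM sys.fn_xe_file_target_read_file(N'TrackPageLocks*.xel', NULL, NULL, NULL)
) AS d
GROUP BY d.page_id
ORDER BY lock_events DESC;
```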
Using `DBCC PAGE`:
```
DBCC TRACEON (3604); -- routes DBCC output to the client, needed for DBCC PAGE to print
GO
DBCC PAGE (StackOverflow2010, 1, 180020, 3);
GO
```
I can see that page 180020 has 163 records on it (`m_slotCnt = 163`):
```lang-none
Page @0x000002C278C62000
m_pageId = (1:180020) m_headerVersion = 1 m_type = 1
m_typeFlagBits = 0x0 m_level = 0 m_flagBits = 0x0
m_objId (AllocUnitId.idObj) = 174 m_indexId (AllocUnitId.idInd) = 256
Metadata: AllocUnitId = 72057594049331200
Metadata: PartitionId = 72057594044350464 Metadata: IndexId = 1
Metadata: ObjectId = 1525580473 m_prevPage = (1:180019) m_nextPage = (1:180021)
pminlen = 24 m_slotCnt = 163 m_freeCnt = 31
m_freeData = 7835 m_reservedCnt = 0 m_lsn = (203:19986:617)
m_xactReserved = 0 m_xdesId = (0:0) m_ghostRecCnt = 0
m_tornBits = 1894114769 DB Frag ID = 1
```
And 3 of those records match the update criteria (I pasted the output into Notepad++ and searched for "Reputation = 56"):
[![screenshot showing 3 matches][2]][2]
Here's the first match, as an example:
```lang-none
Record Type = PRIMARY_RECORD Record Attributes = NULL_BITMAP VARIABLE_COLUMNS
Record Size = 39
Memory Dump @0x000000EDB2B7821D
0000000000000000: 30001800 86540000 38000000 00000000 6041c2e4 0...T..8.......`AÂä
0000000000000014: c3020000 04000801 00270045 00720069 006300 Ã........'.E.r.i.c.
Slot 9 Column 1 Offset 0x4 Length 4 Length (physical) 4
Id = 21638
Slot 9 Column 2 Offset 0x1f Length 8 Length (physical) 8
DisplayName = Eric
Slot 9 Column 3 Offset 0x8 Length 8 Length (physical) 8
Reputation = 56
Slot 9 Column 4 Offset 0x0 Length 0 Length (physical) 0
Score = [NULL]
Slot 9 Offset 0x0 Length 0 Length (physical) 0
KeyHashValue = (e1caffa60313)
Slot 10 Offset 0x244 Length 43
```
I believe this behavior is due to the pipelined, row-at-a-time nature of execution plans and the way this specific XE is implemented: the event fires on each lock request as a row is updated, even when the page lock is already held, while `sys.dm_tran_locks` reports each held lock only once.
[1]: https://i.stack.imgur.com/C3I03.png
[2]: https://i.stack.imgur.com/qkb6g.png