sql-server
mark sinkinson (imported from SE)
We're seeing a lot of these [Intra-Query Parallel Thread Deadlocks][1] in our production environment (SQL Server 2012 SP2 - yes... I know...). However, when we look at the deadlock XML that has been captured via Extended Events, the victim-list is empty:


    <victim-list />

The deadlocking appears to be between four threads: two with `WaitType="e_waitPipeNewRow"` and two with `WaitType="e_waitPipeGetRow"`.

     <resource-list>
      <exchangeEvent id="Pipe13904cb620" WaitType="e_waitPipeNewRow" nodeId="19">
       <owner-list>
        <owner id="process4649868" />
       </owner-list>
       <waiter-list>
        <waiter id="process40eb498" />
       </waiter-list>
      </exchangeEvent>
      <exchangeEvent id="Pipe30670d480" WaitType="e_waitPipeNewRow" nodeId="21">
       <owner-list>
        <owner id="process368ecf8" />
       </owner-list>
       <waiter-list>
        <waiter id="process46a0cf8" />
       </waiter-list>
      </exchangeEvent>
      <exchangeEvent id="Pipe13904cb4e0" WaitType="e_waitPipeGetRow" nodeId="19">
       <owner-list>
        <owner id="process40eb498" />
       </owner-list>
       <waiter-list>
        <waiter id="process368ecf8" />
       </waiter-list>
      </exchangeEvent>
      <exchangeEvent id="Pipe4a106e060" WaitType="e_waitPipeGetRow" nodeId="21">
       <owner-list>
        <owner id="process46a0cf8" />
       </owner-list>
       <waiter-list>
        <waiter id="process4649868" />
       </waiter-list>
      </exchangeEvent>
     </resource-list>
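
For reference, here is a minimal sketch of the kind of query that pulls these graphs out of Extended Events and flags the empty victim-list. It assumes the deadlock reports sit in the default `system_health` session's `ring_buffer` target; the session and target names would need adjusting for a custom event session.

    -- Sketch only: assumes the deadlock reports are in the system_health ring buffer.
    WITH deadlocks AS
    (
        SELECT xed.event_data.query('(data/value/deadlock)[1]') AS DeadlockGraph
        FROM
        (
            SELECT CAST(st.target_data AS xml) AS target_data
            FROM sys.dm_xe_sessions AS s
            JOIN sys.dm_xe_session_targets AS st
                ON st.event_session_address = s.[address]
            WHERE s.name = N'system_health'
            AND st.target_name = N'ring_buffer'
        ) AS src
        CROSS APPLY src.target_data.nodes
            ('RingBufferTarget/event[@name="xml_deadlock_report"]') AS xed (event_data)
    )
    SELECT
        DeadlockGraph,
        -- 1 = no victims listed, as in the graphs above
        IsVictimListEmpty =
            CASE WHEN DeadlockGraph.exist('deadlock/victim-list/*') = 0
                 THEN 1 ELSE 0 END
    FROM deadlocks;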

So:

 1. The victim list is empty.
 2. The application running the query does not receive an error, and the query completes.
 3. As far as we can see, there is no obvious issue other than the fact that the deadlock graph is captured.

Therefore, is this anything to worry about, or is it just noise?

**Edit:** Thanks to Paul's answer, I can see where the issue likely occurs, and it appears to resolve itself with the tempdb spill.
[![enter image description here][2]][2]


  [1]: https://blogs.msdn.microsoft.com/bartd/2008/09/24/todays-annoyingly-unwieldy-term-intra-query-parallel-thread-deadlocks/
  [2]: https://i.stack.imgur.com/35M8p.png
Top Answer
Paul White (imported from SE)
I wouldn't be surprised if this is the way the deadlock graph looks when an intra-query parallel deadlock is resolved by an exchange spill (so there is no victim, except performance).

You could confirm this theory by capturing exchange spills and matching them up (or not) to the deadlock.
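
A rough sketch of such a session follows. The session name and target file name are placeholders, and it assumes the `exchange_spill` event is available on your build; capturing the deadlock report in the same session makes it easier to line the two up by time and session.

    -- Sketch only: the session name and file name are illustrative placeholders.
    CREATE EVENT SESSION [TrackExchangeSpills] ON SERVER
    ADD EVENT sqlserver.exchange_spill
    (
        ACTION (sqlserver.session_id, sqlserver.sql_text)
    ),
    ADD EVENT sqlserver.xml_deadlock_report
    (
        ACTION (sqlserver.session_id)
    )
    ADD TARGET package0.event_file
    (
        SET filename = N'TrackExchangeSpills.xel'
    )
    WITH (STARTUP_STATE = OFF);
    GO
    ALTER EVENT SESSION [TrackExchangeSpills] ON SERVER STATE = START;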

Writing exchange buffers to *tempdb* to resolve a deadlock is not ideal, so look to eliminate sequences of order-preserving operations in the execution plan (for example, order-preserving exchanges feeding a parallel merge join). That said, if it isn't causing a noticeable performance problem and you have other things to worry about, it may not be worth the effort.
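
As a purely illustrative sketch (the tables and columns below are hypothetical, not taken from the problem query), one way to do that is to steer the optimizer away from the parallel merge join with a query hint and then compare the plans and run times:

    -- Hypothetical query: the names are placeholders, not from the original workload.
    -- Restricting the query to hash joins removes the order-preserving exchanges
    -- that a parallel merge join would otherwise require.
    SELECT  o.OrderID,
            d.ProductID,
            d.Quantity
    FROM    dbo.Orders AS o
    JOIN    dbo.OrderDetails AS d
        ON  d.OrderID = o.OrderID
    OPTION  (HASH JOIN);

Note that `OPTION (HASH JOIN)` constrains every join in the statement, so a targeted rewrite or indexing change that makes the merge join unattractive is usually the better long-term fix.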

> Out of interest, is this problem likely to be exacerbated by high fragmentation/outdated statistics?

Fragmentation: no. Outdated statistics: not in any specific sense I can think of, though unrepresentative statistics are rarely a good thing in general.

The fundamental issue here is that parallelism works best when there are as few dependencies between threads as possible; preserved ordering introduces rather nasty dependencies. Things can easily get gummed up, and the only way to clear the logjam is to spill a bunch of rows held at exchanges to *tempdb*.

---

**Note**

Intra-query parallel deadlocks no longer generate XML deadlock graphs when the deadlock can be resolved with an exchange spill, following a [fix](https://support.microsoft.com/en-us/help/4338715/many-xml-deadlock-report-events-are-reported-for-one-intra-query) released in Cumulative Update 10 for SQL Server 2017 and Cumulative Update 2 for SQL Server 2016 SP2.
