Consider the following query plan in SQL Server 2014:

[![Query plan][1]][1]

In the query plan, a self-join `ar.fId = ar.fId` yields an estimate of 1 row. However, this is a logically inconsistent estimate: `ar` has `20,608` rows and just one distinct value of `fId` (accurately reflected in the statistics). Therefore, this join produces the full cross product of rows (`~424MM` rows), causing the query to run for several hours.
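
For reference, the inconsistency is easy to verify against the data itself (this uses the table from the repro script below; it is just a sanity check, not part of the repro):

    -- With a single distinct fId value, a self-join on fId alone returns
    -- COUNT(*) * COUNT(*) rows: 20,608 * 20,608 = ~424.7MM.
    SELECT COUNT(*) AS row_cnt,
        COUNT(DISTINCT fId) AS distinct_fId,
        CONVERT(BIGINT, COUNT(*)) * COUNT(*) AS fId_self_join_rows
    FROM #SQL2014MinMaxAggregateCardinalityBug_ar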

I am having a hard time understanding why SQL Server would come up with an estimate that can be so easily proven to be inconsistent with the statistics. Any ideas?

**Initial investigation and additional detail**

Based on [Paul's answer here][2], it seems like both the SQL Server 2012 and SQL Server 2014 heuristics for estimating join cardinality should easily handle a situation where two identical histograms need to be compared.

I started with the output from trace flag 2363, but was not able to make much sense of it. Does the following snippet mean that SQL Server is comparing the histograms for `fId` and `bId` in order to estimate the selectivity of a join that uses only `fId`? If so, that would obviously not be correct. Or am I misreading the trace flag output?

	Plan for computation:
	  CSelCalcExpressionComparedToExpression( QCOL: [ar].fId x_cmpEq QCOL: [ar].fId )
	Loaded histogram for column QCOL: [ar].bId from stats with id 3
	Loaded histogram for column QCOL: [ar].fId from stats with id 1
	Selectivity: 0
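
(For anyone reproducing this: output like the above is typically captured by running the statement under investigation with the undocumented flags below. 3604 redirects the optimizer's diagnostic output to the client, and `RECOMPILE` forces a fresh optimization so the trace is actually emitted.)

    -- Replace this SELECT with the statement under investigation.
    SELECT fId, COUNT(*) AS cnt
    FROM #SQL2014MinMaxAggregateCardinalityBug_ar
    GROUP BY fId
    OPTION (QUERYTRACEON 3604, QUERYTRACEON 2363, RECOMPILE)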

Note that I have come up with several workarounds, which are included in the full repro script and bring this query down to milliseconds. This question is focused on understanding the behavior, how to avoid it in future queries, and determining whether it is a bug that should be filed with Microsoft.

Here is a [full repro script][3], here is the [full output from trace flag 2363][4], and here is the query and table definitions in case you want to quickly look at them without opening the full script:

	WITH cte AS (
		SELECT ar.fId, 
			ar.bId,
			MIN(CONVERT(INT, ar.isT)) AS isT,
			MAX(CONVERT(INT, tcr.isS)) AS isS
		FROM  #SQL2014MinMaxAggregateCardinalityBug_ar ar 
		LEFT OUTER JOIN #SQL2014MinMaxAggregateCardinalityBug_tcr tcr
			ON tcr.rId = 508
			AND tcr.fId = ar.fId
			AND tcr.bId = ar.bId
		GROUP BY ar.fId, ar.bId
	)
	SELECT s.fId, s.bId, s.isS, t.isS
	FROM cte s 
	JOIN cte t 
		ON t.fId = s.fId 
		AND t.isT = 1

___

    CREATE TABLE #SQL2014MinMaxAggregateCardinalityBug_ar (
        fId INT NOT NULL,
        bId INT NOT NULL,
        isT BIT NOT NULL,
        PRIMARY KEY (fId, bId)
    )

    CREATE TABLE #SQL2014MinMaxAggregateCardinalityBug_tcr (
        rId INT NOT NULL,
        fId INT NOT NULL,
        bId INT NOT NULL,
        isS BIT NOT NULL,
        PRIMARY KEY (rId, fId, bId, isS)
    )
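
In case it is useful for a quick look, here is a data load consistent with the statistics described above (a sketch only; the full repro script is authoritative): 20,608 `ar` rows with a single `fId` value and exactly one `isT = 1` row, and 28 `tcr` rows, none of which match `rId = 508`.

    -- 20,608 rows, one distinct fId, exactly one row with isT = 1.
    INSERT #SQL2014MinMaxAggregateCardinalityBug_ar (fId, bId, isT)
    SELECT 1,
        n,
        CASE WHEN n = 1 THEN 1 ELSE 0 END
    FROM (
        SELECT TOP (20608) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
        FROM sys.all_columns AS a
        CROSS JOIN sys.all_columns AS b
    ) AS nums (n)

    -- 28 rows, none with rId = 508.
    INSERT #SQL2014MinMaxAggregateCardinalityBug_tcr (rId, fId, bId, isS)
    SELECT TOP (28) 1, 1, ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 0
    FROM sys.all_columns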


  [1]: https://i.stack.imgur.com/QQbSF.png
  [2]: https://dba.stackexchange.com/a/97073/11635
  [3]: https://gist.github.com/anonymous/e45887f1b0d21640be6c9a5456300c34
  [4]: https://gist.github.com/anonymous/346ac05d7c0588fcde81a221e3f30151
Top Answer
Paul White (imported from SE)
> I am having a hard time understanding why SQL Server would come up with an estimate that can be so easily proven to be inconsistent with the statistics.

### Consistency

There is no general guarantee of consistency. Estimates may be calculated on different (but logically equivalent) subtrees at different times, using different statistical methods.

There is nothing wrong with the logic that says joining those two identical subtrees ought to produce a cross product, but there is equally nothing that makes that line of reasoning more sound than any other.

### Initial estimation

In your specific case, the *initial* cardinality estimation for the join is **not performed on two identical subtrees**. The tree shape at that time is:

~~~
  LogOp_Join
     LogOp_GbAgg
        LogOp_LeftOuterJoin
           LogOp_Get TBL: ar
           LogOp_Select
              LogOp_Get TBL: tcr
              ScaOp_Comp x_cmpEq
                 ScaOp_Identifier [tcr].rId
                 ScaOp_Const Value=508
           ScaOp_Logical x_lopAnd
              ScaOp_Comp x_cmpEq
                 ScaOp_Identifier [ar].fId
                 ScaOp_Identifier [tcr].fId
              ScaOp_Comp x_cmpEq
                 ScaOp_Identifier [ar].bId
                 ScaOp_Identifier [tcr].bId
        AncOp_PrjList 
           AncOp_PrjEl Expr1003 
              ScaOp_AggFunc stopMax
                 ScaOp_Convert int
                    ScaOp_Identifier [tcr].isS
     LogOp_Select
        LogOp_GbAgg
           LogOp_LeftOuterJoin
              LogOp_Get TBL: ar
              LogOp_Select
                 LogOp_Get TBL: tcr
                 ScaOp_Comp x_cmpEq
                    ScaOp_Identifier [tcr].rId
                    ScaOp_Const Value=508
              ScaOp_Logical x_lopAnd
                 ScaOp_Comp x_cmpEq
                    ScaOp_Identifier [ar].fId
                    ScaOp_Identifier [tcr].fId
                 ScaOp_Comp x_cmpEq
                    ScaOp_Identifier [ar].bId
                    ScaOp_Identifier [tcr].bId
           AncOp_PrjList 
              AncOp_PrjEl Expr1006 
                 ScaOp_AggFunc stopMin
                    ScaOp_Convert int
                       ScaOp_Identifier [ar].isT
              AncOp_PrjEl Expr1007 
                 ScaOp_AggFunc stopMax
                    ScaOp_Convert int
                       ScaOp_Identifier [tcr].isS
        ScaOp_Comp x_cmpEq
           ScaOp_Identifier Expr1006 
           ScaOp_Const Value=1
     ScaOp_Comp x_cmpEq
        ScaOp_Identifier QCOL: [ar].fId
        ScaOp_Identifier QCOL: [ar].fId
~~~

The first join input has had an unprojected aggregate simplified away, and the second join input has the predicate `t.isT = 1` pushed below it, where `t.isT` is `MIN(CONVERT(INT, ar.isT))`. Despite this, the selectivity calculation for the `isT` predicate is able to use `CSelCalcColumnInInterval` on a histogram:

~~~
  CSelCalcColumnInInterval
      Column: COL: Expr1006 

Loaded histogram for column QCOL: [ar].isT from stats with id 3

Selectivity: 4.85248e-005

Stats collection generated: 
  CStCollFilter(ID=11, CARD=1)
      CStCollGroupBy(ID=10, CARD=20608)
          CStCollOuterJoin(ID=9, CARD=20608 x_jtLeftOuter)
              CStCollBaseTable(ID=3, CARD=20608 TBL: ar)
              CStCollFilter(ID=8, CARD=1)
                  CStCollBaseTable(ID=4, CARD=28 TBL: tcr)
~~~

The (correct) expectation is for this predicate to reduce 20,608 rows to 1 row: the selectivity 4.85248e-5 is simply 1/20,608, i.e. exactly one row has `isT = 1`.

### Join estimation

The question now becomes how the 20,608 rows from the other join input will match up with this one row:

~~~
  LogOp_Join
      CStCollGroupBy(ID=7, CARD=20608)
          CStCollOuterJoin(ID=6, CARD=20608 x_jtLeftOuter)
              ...

      CStCollFilter(ID=11, CARD=1)
          CStCollGroupBy(ID=10, CARD=20608)
              ...

      ScaOp_Comp x_cmpEq
          ScaOp_Identifier QCOL: [ar].fId
          ScaOp_Identifier QCOL: [ar].fId
~~~

There are several different ways to estimate the join in general. We could, for example: 

* Derive new histograms at each plan operator in each subtree, align them at the join (interpolating step values as necessary), and see how they match up; or
* Perform a simpler 'coarse' alignment of the histograms (using minimum and maximum values, not step-by-step); or
* Compute separate selectivities for the join columns alone (from the base table, and without any filtering), then add in the selectivity effect of the non-join predicate(s).
* ...

Depending on the cardinality estimator in use, and some heuristics, any of those (or a variation) could be used. See the Microsoft White Paper [Optimizing Your Query Plans with the SQL Server 2014 Cardinality Estimator][1] for more.
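
If you want to compare how the two models treat a query, both estimators can be selected per statement on SQL Server 2014 using trace flags documented in that white paper (9481 forces the legacy 70 model, 2312 the new 120 model). A minimal sketch against the repro tables:

~~~
-- Estimate for the raw self-join on fId under the legacy CE;
-- swap 9481 for 2312 to see the new CE's estimate instead.
SELECT COUNT(*)
FROM #SQL2014MinMaxAggregateCardinalityBug_ar AS a
JOIN #SQL2014MinMaxAggregateCardinalityBug_ar AS b
    ON b.fId = a.fId
OPTION (QUERYTRACEON 9481, RECOMPILE);
~~~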

### Bug?

Now, as noted in the question, in this case the 'simple' single-column join (on `fId`) uses the `CSelCalcExpressionComparedToExpression` calculator:

~~~
Plan for computation:

  CSelCalcExpressionComparedToExpression [ar].fId x_cmpEq [ar].fId

Loaded histogram for column QCOL: [ar].bId from stats with id 2
Loaded histogram for column QCOL: [ar].fId from stats with id 1

Selectivity: 0
~~~

This calculation assesses that joining the 20,608 rows with the 1 filtered row has zero selectivity: no rows will match (reported as one row in the final plan). Is this wrong? Yes, there is probably a bug in the new CE here. One could argue that the single row will match either all rows or none, so the result might seem defensible, but there is good reason to believe otherwise.

The details are actually rather tricky, but it is very reasonable to expect the estimate to be based on the unfiltered `fId` histograms, modified by the selectivity of the filter, giving `20608 * 20608 * 4.85248e-005 = 20608` rows.

Following this calculation would mean using the calculator `CSelCalcSimpleJoinWithDistinctCounts` instead of `CSelCalcExpressionComparedToExpression`. There is no documented way to do this, but if you are curious, you can enable undocumented trace flag 9479.
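
For example, appended to the repro query (undocumented and unsupported, so strictly for investigation on a test system):

~~~
WITH cte AS (
    SELECT ar.fId,
        ar.bId,
        MIN(CONVERT(INT, ar.isT)) AS isT,
        MAX(CONVERT(INT, tcr.isS)) AS isS
    FROM #SQL2014MinMaxAggregateCardinalityBug_ar AS ar
    LEFT OUTER JOIN #SQL2014MinMaxAggregateCardinalityBug_tcr AS tcr
        ON tcr.rId = 508
        AND tcr.fId = ar.fId
        AND tcr.bId = ar.bId
    GROUP BY ar.fId, ar.bId
)
SELECT s.fId, s.bId, s.isS, t.isS
FROM cte AS s
JOIN cte AS t
    ON t.fId = s.fId
    AND t.isT = 1
OPTION (QUERYTRACEON 9479); -- force CSelCalcSimpleJoinWithDistinctCounts
~~~

That produces the following plan: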

[![9479 plan][2]][2]

Note that the final join is estimated to produce 20,608 rows from two one-row inputs, which should not be a surprise: it is the same plan produced by the original CE under TF 9481.

I mentioned that the details are tricky (and time-consuming to investigate), but as far as I can tell the root cause of the problem is the predicate `rId = 508`, which has a selectivity of zero. That zero estimate is raised to one row in the normal way, and it appears to contribute to the zero selectivity computed at the join in question when the calculator accounts for predicates lower in the input tree (hence the loading of statistics for `bId`).
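
You can confirm that starting point directly, assuming (as the zero selectivity suggests) that no `tcr` row has `rId = 508` in the repro data:

~~~
-- Returns 0 in the repro data; the CE raises this to a one-row
-- estimate, which then feeds the join selectivity calculation.
SELECT COUNT(*)
FROM #SQL2014MinMaxAggregateCardinalityBug_tcr
WHERE rId = 508;
~~~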

Allowing the outer join to keep a zero-row inner-side estimate instead of raising it to one row (so that all outer rows qualify) gives a 'bug-free' join estimate with either calculator. If you are interested in exploring this, the undocumented trace flag is 9473 (on its own):

[![9473 plan][3]][3]

The join cardinality estimation with `CSelCalcExpressionComparedToExpression` can also be modified not to account for `bId`, using another undocumented variation flag (9494). I mention all of these because I know you have an interest in such things, not because they offer a solution. Until you report the issue to Microsoft and they address it (or not), expressing the query differently is probably the best way forward, as sketched below. Regardless of whether the behaviour is intentional, they should be interested to hear about the regression.
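
As one sketch of "expressing the query differently" (not necessarily one of the workarounds in your repro script; the temp table name `#agg` is invented here): materializing the aggregate into a temporary table gives the join two inputs with their own statistics and evaluates the aggregate only once.

~~~
-- Materialize the CTE once, with real statistics for the join.
SELECT ar.fId,
    ar.bId,
    MIN(CONVERT(INT, ar.isT)) AS isT,
    MAX(CONVERT(INT, tcr.isS)) AS isS
INTO #agg
FROM #SQL2014MinMaxAggregateCardinalityBug_ar AS ar
LEFT OUTER JOIN #SQL2014MinMaxAggregateCardinalityBug_tcr AS tcr
    ON tcr.rId = 508
    AND tcr.fId = ar.fId
    AND tcr.bId = ar.bId
GROUP BY ar.fId, ar.bId;

SELECT s.fId, s.bId, s.isS, t.isS
FROM #agg AS s
JOIN #agg AS t
    ON t.fId = s.fId
    AND t.isT = 1;
~~~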

Finally, to tidy up one other thing mentioned in the reproduction script: the final position of the Filter in the question's plan is the result of the cost-based exploration rule `GbAggAfterJoinSel` moving the aggregate and filter above the join, since the join output is estimated to be so small. The filter was initially below the join, as you expected.

  [1]: https://msdn.microsoft.com/en-us/library/dn673537.aspx
  [2]: https://i.stack.imgur.com/vd9Yu.png
  [3]: https://i.stack.imgur.com/mROZw.png
