I have a query which takes a JSON string as a parameter. The JSON is an array of latitude/longitude pairs. An example input might be the following.
```sql
declare @json nvarchar(max)= N'[[40.7592024,-73.9771259],[40.7126492,-74.0120867]
,[41.8662374,-87.6908788],[37.784873,-122.4056546]]';
```
It calls a TVF that calculates the number of POIs around a geographical point, at 1,3,5,10 mile distances.
```sql
create or alter function [dbo].[fn_poi_in_dist](@geo geography)
returns table
with schemabinding as
return 
select count_1  = sum(iif(LatLong.STDistance(@geo) <= 1609.344e * 1,1,0e))
      ,count_3  = sum(iif(LatLong.STDistance(@geo) <= 1609.344e * 3,1,0e))
      ,count_5  = sum(iif(LatLong.STDistance(@geo) <= 1609.344e * 5,1,0e))
      ,count_10 = count(*)
from dbo.point_of_interest
where LatLong.STDistance(@geo) <= 1609.344e * 10
```
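For a single point the function can be called directly. A minimal usage sketch, assuming the `dbo.point_of_interest` table and its spatial index on `LatLong` exist, using the first coordinate pair from the JSON above:

```sql
-- Illustrative single-point call (coordinates taken from the sample JSON).
declare @g geography = geography::Point(40.7592024, -73.9771259, 4326);

select count_1, count_3, count_5, count_10
from dbo.fn_poi_in_dist(@g);
```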
The intent of the json query is to bulk call this function. If I call it like this the performance is very poor taking nearly 10 seconds for just 4 points:
```sql
select row=[key]
      ,count_1
      ,count_3
      ,count_5
      ,count_10
from openjson(@json)
cross apply dbo.fn_poi_in_dist(
            geography::Point(
                convert(float,json_value(value,'$[0]'))
               ,convert(float,json_value(value,'$[1]'))
               ,4326))
```
plan = https://www.brentozar.com/pastetheplan/?id=HJDCYd_o4

However, moving the construction of the geography inside a derived table causes the performance to improve dramatically, completing the query in about 1 second.

```sql
select row=[key]
      ,count_1
      ,count_3
      ,count_5
      ,count_10
from (
select [key]
      ,geo = geography::Point(
                convert(float,json_value(value,'$[0]'))
               ,convert(float,json_value(value,'$[1]'))
               ,4326)
from openjson(@json)
) a
cross apply dbo.fn_poi_in_dist(geo)
```
plan = https://www.brentozar.com/pastetheplan/?id=HkSS5_OoE


The plans look virtually identical. Neither uses parallelism and both use the spatial index. There is an additional lazy spool in the slow plan that I can eliminate with the hint `option(no_performance_spool)`, but the query performance does not change; it remains much slower.
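For reference, this is the hinted form of the slow query (unchanged apart from the added hint):

```sql
select row=[key]
      ,count_1
      ,count_3
      ,count_5
      ,count_10
from openjson(@json)
cross apply dbo.fn_poi_in_dist(
            geography::Point(
                convert(float,json_value(value,'$[0]'))
               ,convert(float,json_value(value,'$[1]'))
               ,4326))
option(no_performance_spool);
```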

Running both queries in a single batch with the added hint shows them costed equally in the estimated plans.

Sql server version =
Microsoft SQL Server 2016 (SP1-CU7-GDR) (KB4057119) - 13.0.4466.4 (X64)

So my question is why does this matter? How can I know when I should calculate values inside a derived table or not?
Top Answer
Martin Smith (imported from SE)
I can give you a partial answer that explains why you are seeing the performance difference - though that still leaves some open questions (such as **can** SQL Server produce the more optimal plan without introducing an intermediate table expression that projects the expression as a column?)

___

The difference is that in the fast plan the work needed to parse the JSON array elements and create the geography is done 4 times (once for each row emitted from the `openjson` function), whereas in the slow plan it is done more than 100,000 *times* that.

In the fast plan...

    geography::Point(
                    convert(float,json_value(value,'$[0]'))
                   ,convert(float,json_value(value,'$[1]'))
                   ,4326)

is assigned to `Expr1000` in the Compute Scalar to the left of the `openjson` function. This corresponds to `geo` in your derived table definition.

[![enter image description here][1]][1]

In the fast plan the filter and stream aggregate reference `Expr1000`. In the slow plan they reference the full underlying expression.

**Stream aggregate properties**

[![enter image description here][2]][2]

The filter is executed 116,995 times, with each execution requiring an evaluation of the expression. The stream aggregate has 110,520 rows flowing into it for aggregation and creates three separate aggregates using this expression. `110,520 * 3 + 116,995 = 448,555`. Even if each individual evaluation takes only 18 microseconds, this adds up to 8 seconds of additional time for the query as a whole.
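The arithmetic can be sanity-checked directly (the 18 microseconds per evaluation is an illustrative figure, not a measurement):

```sql
-- 110,520 rows * 3 aggregates + 116,995 filter executions = 448,555 evaluations;
-- at 18 microseconds each that is roughly 8.07 seconds.
select evaluations   = 110520 * 3 + 116995
      ,extra_seconds = (110520 * 3 + 116995) * 18e-6;
```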

You can see the effect of this in the actual time statistics in the plan XML (annotated in red below from the slow plan and blue for the fast plan - times are in ms)

[![enter image description here][3]][3]

The stream aggregate has an elapsed time 6.209 seconds greater than its immediate child. And the bulk of the child time was taken up by the filter. This corresponds to the extra expression evaluations.

___

As an aside, in general it [is not a sure thing][4] that underlying expressions with labels like `Expr1000` are only calculated once and never re-evaluated, but in this case the execution timing discrepancy makes clear that this is what happens here.
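A related pattern worth noting: `cross apply (values ...)` can project the expression as a named column without a full derived table. This is a sketch of the same idea applied to the question's query; I have not checked the plan it produces here, so whether the optimizer treats it identically to the derived table would need verifying against the actual plan:

```sql
select row=[key]
      ,count_1
      ,count_3
      ,count_5
      ,count_10
from openjson(@json)
cross apply (values (geography::Point(
                        convert(float,json_value(value,'$[0]'))
                       ,convert(float,json_value(value,'$[1]'))
                       ,4326))) as v(geo)
cross apply dbo.fn_poi_in_dist(v.geo);
```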


  [1]: https://i.stack.imgur.com/DePyW.png
  [2]: https://i.stack.imgur.com/KuaxF.png
  [3]: https://i.stack.imgur.com/5CSpl.png
  [4]: https://dba.stackexchange.com/a/30947/3690
