This is slightly complex to explain, but very easy to reproduce.
## Short version:
For about 10%–30% of queries (by count), significant amounts of Query Store data are not available. I have looked at SQL Server 2016 and SQL Server 2017 instances where Query Store had been running for weeks or more, and where there was activity on the database(s).
The query below returns two sets of data: the top set does not have a value for `query_store_query.last_execution_time`, while the bottom set does. The 'have nots' are also missing most of the runtime stats data.
```sql
EXEC sp_query_store_flush_db; -- Same results without this, but just to rule it out in the examples.

SELECT * FROM sys.query_store_query
WHERE query_store_query.last_execution_time IS NULL; -- There are a bunch of these, also missing other data. Why?

SELECT * FROM sys.query_store_query
WHERE query_store_query.last_execution_time IS NOT NULL;
```
## Problem Solving:
1. I initially discovered this using [exported Query Store data](https://dba.stackexchange.com/q/231650/21924). With my data in an Excel sheet, I sorted and compared, and did not find any common factors separating the 'haves' from the 'have nots'.
2. To rule out data export issues, I used the code above to get data directly from the system view. The results were similar to the findings in the exported data.
3. To rule out the system view, I used "Tracked Queries" to report from the root data. (Query Store > Tracked Queries > Configure)
3.A. For the 'haves', the query plan(s) and such are displayed, just as you would expect
3.B. For the 'have nots', there is no query plan and no metrics
4. The only scope-limiting factor I have found is that on very active databases, the 'have nots' are limited to the last few hours, while on slow databases, the 'have nots' can have an `initial_compile_start_time` dating back several weeks. (*I suspect the 'have nots' are being purged, then recreated the next time the now "never seen before, new" query is run.*)
5. The 'have nots' can have any range of compile counts, number of plans, etc.
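To quantify point 4 above, a query along these lines (a sketch against the documented `sys.query_store_query` columns) shows how far back each group reaches:

```sql
-- Compare counts and compile-time age ranges of the 'haves' vs the 'have nots'
SELECT CASE WHEN last_execution_time IS NULL THEN 'have nots' ELSE 'haves' END AS bucket,
       COUNT(*)                          AS query_count,
       MIN(initial_compile_start_time)   AS oldest_initial_compile,
       MAX(initial_compile_start_time)   AS newest_initial_compile
FROM sys.query_store_query
GROUP BY CASE WHEN last_execution_time IS NULL THEN 'have nots' ELSE 'haves' END;
```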
What do the 'have nots' have in common? Why are they missing data when others are not?
**Why this matters:** If, as I currently suspect, the runtime data is not being captured correctly for the 'have nots', some or all of the most resource-intensive queries **may not be** reported in Query Store reports.
## Update: Query Store Capture Mode `All` does not eliminate this issue.
Using the [setting suggested by Erin Stellato](https://www.sqlskills.com/blogs/erin/query-store-settings/), I have Query Store Capture Mode set to `Auto`; as pointed out in a comment and an answer, this could be the cause. But after testing (*I purged the Query Store on one of my databases and set the mode to `All`*), I am still seeing some 'have nots' after the database has run for a while.
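For reference, the capture mode can be checked and changed like this (`MyDatabase` is a placeholder for your own database name):

```sql
-- Check the current Query Store state and capture mode
SELECT actual_state_desc, query_capture_mode_desc
FROM sys.database_query_store_options;

-- Switch the capture mode to ALL
ALTER DATABASE [MyDatabase]
SET QUERY_STORE (QUERY_CAPTURE_MODE = ALL);
```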
I see this on my SQL Server 2016 instance as well (same settings as yours: 15 min data flush, and 1 hour stats collection). I noticed that the plans with missing information correlated with AG failovers and maintenance-related reboots (which I found by looking in the SQL Server error log).
If you are following the [best practices related to "mission critical servers"] like I am, you may have taken these steps:
> ## Use trace flags on mission critical servers
> The global trace flags 7745 and 7752 can be used to improve availability of databases using Query Store. For more information, refer to Trace Flags.
> - Trace flag 7745 will prevent the default behavior where Query Store writes data to disk before SQL Server can be shut down. This means that Query Store data that has been collected but not yet been persisted to disk will be lost.
> - Trace flag 7752 enables asynchronous load of Query Store. This allows a database to become online and queries to be executed before the Query Store has been fully recovered. The default behavior is to do a synchronous load of Query Store. The default behavior prevents queries from executing before the Query Store has been recovered, but also prevents any queries from being missed in the data collection.
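If you are unsure whether these trace flags are in play on your instance, you can check (and enable) them with the standard DBCC commands; note that `DBCC TRACEON` does not survive a restart unless the flags are also added as `-T` startup parameters:

```sql
-- Show whether trace flags 7745 and 7752 are enabled, and at what scope
DBCC TRACESTATUS(7745, 7752);

-- Enable both flags globally (-1 = global scope)
DBCC TRACEON(7745, 7752, -1);
```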
This explains all of the missing runtime stats in my setup. While it's not explicitly called out in this article, it appears that the queries and plans might get into the query store without aggregated runtime stats. This might vary based on your "Data Flush Interval" and "Statistics Collection Interval."
**TL;DR** It happens, it is not unusual, and it is by design.
* An answer and a comment, both no longer present here, suggested that setting the Query Store Capture Mode to `Auto` might cause this behavior. I can confirm that setting it to `All` does not eliminate the issue; it is unclear from the available documentation and my testing whether 'insignificant queries' get any entry in `sys.query_store_query` at all.
> Auto - Infrequent queries and queries with insignificant compile and execution duration are ignored. Thresholds for execution count, compile and runtime duration are internally determined. [Source](https://docs.microsoft.com/en-us/sql/relational-databases/performance/best-practice-with-the-query-store?view=sql-server-2017)
* I have validated that it can be at least partially related to the 'Data Flush Interval', as per the [answer by Josh](https://dba.stackexchange.com/a/234579/21924). In the example below, I ran `EXEC sp_query_store_flush_db` in a single batch with the two queries; query_id "213" is reported both without **AND** with a `last_execution_time`.
* I have also validated that it is also at [least partially related to "AG failovers and maintenance-related reboots" as per the answer by Josh](https://dba.stackexchange.com/a/234579/21924)
* It is possible that some data is purged due to lack of space. You can [check the current Query Store size and limit](https://docs.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?view=sql-server-2017#Scenarios) with the query below. In my case I am using 871 MB of 1000 MB, so it is unlikely that lack of space is the issue.
`SELECT current_storage_size_mb, max_storage_size_mb FROM sys.database_query_store_options;`
* A large percentage of the 'have nots' include an `INSERT` or Index modification. Both of which could lead to a recompile.
* One hypothesis is that a query that leads to a recompile does not get fully tracked, and impacted queries lose part of their tracking pending the recompile on next run.
> It does this by dropping the existing plan from the procedure cache forcing a new plan to be created the next time that the procedure or trigger is run. [Source](https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-recompile-transact-sql?view=sql-server-2017)
* There are multiple reasons query plans are flushed from the cache; see [Parameterized Plan Flushed](https://dba.stackexchange.com/a/232579/21924) & [Can a plan "age" reach zero?](https://dba.stackexchange.com/a/151147/21924). On closer review, the majority of the plans missing data meet one or more criteria that could result in flushing. At the same time, there are very similar queries that keep their plans.
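To inspect the 'have nots' together with their query text, a query like the following (a sketch joining the documented Query Store views) lists queries that also have no aggregated runtime stats rows at all:

```sql
-- Queries present in Query Store but with no last_execution_time
-- and no aggregated runtime stats for any of their plans
SELECT q.query_id,
       q.initial_compile_start_time,
       qt.query_sql_text
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt
    ON qt.query_text_id = q.query_text_id
LEFT JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
LEFT JOIN sys.query_store_runtime_stats AS rs
    ON rs.plan_id = p.plan_id
WHERE q.last_execution_time IS NULL
GROUP BY q.query_id, q.initial_compile_start_time, qt.query_sql_text
HAVING COUNT(rs.runtime_stats_id) = 0;
```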
My final hypothesis is that plans are flushed as part of standard recompile scenarios, with the accompanying loss of Query Store data being by design. The exact mechanism by which some data is kept and other data is lost is unclear. It happens, it is not unusual, it is by design.
NOTE: Query Store reports times in UTC, while the error logs are in local time.
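When correlating Query Store timestamps with the error log, the `datetimeoffset` columns can be converted with `AT TIME ZONE` (SQL Server 2016+); the time zone name below is just an example — substitute your server's zone:

```sql
-- Convert Query Store UTC timestamps to a local time zone for comparison with the error log
SELECT query_id,
       last_execution_time,
       last_execution_time AT TIME ZONE 'Central Standard Time' AS last_execution_local
FROM sys.query_store_query
WHERE last_execution_time IS NOT NULL;
```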