
String_Split function in SQL Server 2016

Introduction:

SQL Server 2016 introduces a built-in table-valued function, STRING_SPLIT, that splits a string into rows based on a separator.

Earlier, developers had to write custom/user-defined functions to do the split work; with the new function, splitting is built in and can be used directly. The problem with user-defined functions is the way they are defined and used. Developers typically write them as scalar or table-valued functions, and in many cases scalar functions have been observed not to scale as desired, causing serious performance issues.
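For comparison, here is a minimal sketch of the kind of user-defined splitter developers used to write – an inline table-valued function based on a recursive CTE. The function name dbo.fn_SplitString is hypothetical:

-- Hypothetical pre-2016 custom splitter (inline table-valued function).
-- Note: recursive CTEs are limited to 100 recursions by default (MAXRECURSION).
CREATE FUNCTION dbo.fn_SplitString (@s VARCHAR(8000), @sep CHAR(1))
RETURNS TABLE
AS
RETURN
(
    WITH pieces (pos, start, stop) AS
    (
        SELECT 1, 1, CHARINDEX(@sep, @s)
        UNION ALL
        SELECT pos + 1, stop + 1, CHARINDEX(@sep, @s, stop + 1)
        FROM pieces
        WHERE stop > 0
    )
    SELECT pos AS id,
           SUBSTRING(@s, start,
                     CASE WHEN stop > 0 THEN stop - start ELSE 8000 END) AS value
    FROM pieces
);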

Usages:

1. Split a string


Select * From string_split('first,second,third',',')

2. Split the table rows with the help of CROSS APPLY


Create Table splitTable(Col int, string varchar(100))
Insert into splitTable values(1,'First,Second'),(2,'Primary,Secondary')

Select * From splitTable a
Cross apply string_split(a.string,',') B

Drop Table splitTable

If we analyse the execution plan of the above query, we can see that a “Table Valued Function” operator is used to implement the function.

Limitations:

1. ONLY a single-character delimiter can be used at a time. If you want to apply multiple delimiters, you can chain multiple CROSS APPLY calls (see the sketch after this list).

2. There is no way to determine the ordinal position of the elements, or to eliminate duplicates or empty strings, within the function itself.

3. This function works ONLY if the compatibility level of your database is 130 or later, irrespective of the SQL Server version.
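As referenced in limitation 1, here is a small sketch (with made-up sample data) that chains STRING_SPLIT to handle two different delimiters:

--Split on ',' first, then split each resulting piece on ';'
Select s2.value
From (Values('first,second;third')) a(string)
Cross apply string_split(a.string, ',') s1
Cross apply string_split(s1.value, ';') s2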

I would recommend using this built-in function for future development, and please share your experience as part of learning and sharing.


SQL Server 2016 Database Scoped Configuration to force The Legacy CE

Problem Statement:

Recently, we identified a performance issue with a query after we upgraded the database server to SQL Server 2016. If you look at the query, it uses system catalogs to fetch some data. The query can be optimized or rewritten in a better way; however, here we are going to discuss how the same query behaves under different scenarios/settings.


SELECT UPPER(ccu.table_name) AS SourceTable
    ,UPPER(ccu.constraint_name) AS SourceConstraint
    ,UPPER(ccu.column_name) AS SourceColumn
    ,LOWER(ccu.TABLE_SCHEMA) AS SourceSchema
    ,UPPER(kcu.table_name) AS TargetTable
    ,UPPER(kcu.column_name) AS TargetColumn
    ,LOWER(kcu.TABLE_SCHEMA) as TargetSchema
    ,LOWER(kcu.constraint_name) as TargetConstraint
FROM INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE ccu
    INNER JOIN INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS rc
    ON ccu.CONSTRAINT_NAME = rc.CONSTRAINT_NAME
    INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE kcu
    ON kcu.CONSTRAINT_NAME = rc.UNIQUE_CONSTRAINT_NAME
WHERE ccu.TABLE_NAME not like 'tablename%' and ccu.TABLE_SCHEMA = 'dbo'
ORDER BY ccu.TABLE_SCHEMA, ccu.table_name, ccu.COLUMN_NAME 

The above query runs for over 3 minutes in SQL Server 2016 with compatibility level 130, but much faster in SQL Server 2008 R2, where it returns in 3 seconds. It is also observed that when we change the compatibility level to 100 in SQL Server 2016, the query runs fast, as expected.

Analysis:
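The IO figures below appear to be SET STATISTICS IO output; assuming so, similar output can be reproduced by enabling the statistics before running the query:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- Run the query under investigation here, then check the Messages tab.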

 
--SQL 2016 (with compatibility 130) Took 3 min 16 sec
(424 row(s) affected)
Table 'sysidxstats'. Scan count 7971, logical reads 38154, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'sysschobjs'. Scan count 13063, logical reads 104556, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'syscolpars'. Scan count 3265, logical reads 8681635, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'sysiscols'. Scan count 2696, logical reads 6120, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.


--SQL 2008 R2 (with compatibility 100) Took 0 min 3 sec
(424 row(s) affected)
Table 'sysidxstats'. Scan count 235, logical reads 1280, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'sysschobjs'. Scan count 11, logical reads 1314787, physical reads 0, read-ahead reads 6, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'syscolpars'. Scan count 1, logical reads 2659, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'sysiscols'. Scan count 2, logical reads 410, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Looking at the IO statistics, we can clearly see that the number of IOs is far higher with compatibility level 130 than with 100. So it is evident that there are some fundamental changes at the optimizer level that cause this difference.

Solutions:

We have multiple ways to solve this difference:

1. Change the compatibility level to 100


Alter database DBNAME SET COMPATIBILITY_LEVEL = 100

Considerations:

Once we set a lower compatibility level, we will not be able to use new features that depend on compatibility level 130 – most notably the new CE itself and other optimizer improvements in SQL Server 2016. You can verify the current setting as shown below.
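To verify the current setting, you can query sys.databases (DBNAME here is a placeholder):

Select name, compatibility_level
From sys.databases
Where name = 'DBNAME'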

2. Use Trace Flag 9481

Trace flag 9481 forces the legacy cardinality estimation (CE) model used by SQL Server 7.0 through 2012.
There are three ways to use this option, at three different scopes.

a. Query Scope: use the QUERYTRACEON query hint in the OPTION clause of a query.


Select * From Table OPTION ( QUERYTRACEON 9481 )

Considerations:

1. This option applies the trace flag at the query level. The change is applicable ONLY to this query; it does not have any impact on other queries.
2. This option requires the sysadmin fixed server role. In production, application logins are unlikely to have this privilege in most cases.
3. This requires changes to the procedures/application code.

b. Session Scope: if you don't want to modify the query or procedures, or there are too many places where the change would be needed, you can use this option at the session level to use the legacy cardinality estimator.


DBCC TRACEON (9481);
GO
Select * From Table
GO
DBCC TRACEOFF (9481);

Considerations:

1. It may appear to be a quick fix at first; however, you will not benefit from the latest CE abilities.
2. If the procedure plan is already cached, it will continue to be used (see the note below).
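In that case, one common workaround (sketched here with a hypothetical procedure name) is to mark the procedure for recompilation so that a fresh plan is built:

EXEC sp_recompile N'dbo.SomeProc';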

c. Global Scope: trace flags can be set at the server level, and the change will impact every connection to the SQL Server instance.

Global trace flags can be set in two different ways as below:


	1. Using a query:
		DBCC TRACEON (9481, -1);
	2. Using startup parameters:
		Go to "SQL Server 2016 Configuration Manager" -> "SQL Server Services" -> right-click the service -> Properties -> Startup Parameters -> add "-T9481".

Considerations:

1. This is useful when you want to move your databases to compatibility level 130 (to use new features) but still need some time to test the performance of existing code with the new CE.
2. You may be restricting the new CE's cost-based benefits when plans are created.

3. Use LEGACY_CARDINALITY_ESTIMATION Database Scoped Configuration

Database Scoped Configuration is a new feature added in SQL Server 2016. To stick with the subject, let's cover its LEGACY_CARDINALITY_ESTIMATION option here. We have seen the QUERYTRACEON option at the query, session, and server scopes; SQL Server 2016 provides another option at the DATABASE level using LEGACY_CARDINALITY_ESTIMATION. There are two ways we can use this option.

a. Using the GUI (Database Properties -> Options -> Database Scoped Configurations) or the equivalent T-SQL:

ALTER DATABASE SCOPED CONFIGURATION
SET LEGACY_CARDINALITY_ESTIMATION = ON;

b. Using a query hint: SQL Server 2016 SP1 introduced a new query hint that influences the query optimizer. Unlike QUERYTRACEON, it does not require the sysadmin fixed server role; however, it is logically equivalent.


SELECT * FROM Tablename
OPTION (USE HINT('FORCE_LEGACY_CARDINALITY_ESTIMATION'));

Additional information: using the DMV sys.database_scoped_configurations, we can see the current database scoped configuration settings.


SELECT configuration_id, name, value, value_for_secondary
FROM sys.database_scoped_configurations OPTION (RECOMPILE);

Hope this helps you understand the various options for using the legacy CE in SQL Server 2016.
Happy reading and learning!!!

Always Encrypted – A new column level security feature in SQL Server 2016

Always Encrypted (AE) is a new feature introduced in SQL Server 2016 to secure your data in SQL Server at the column level. While SQL Server has many options to secure data, this feature stands out from the list with its unique characteristic – the data is “always encrypted”.

Before we get into the details of Always Encrypted, let us quickly compare it with the other security features in SQL Server as an overview.

(Abbreviations: AE – Always Encrypted, DDM – Dynamic Data Masking, TDE – Transparent Data Encryption)

Why do we call it Always Encrypted?

As the name depicts, the Always Encrypted feature ensures your data is always encrypted – that is, both at rest and in motion. Encryption and decryption happen in the client application, using an Always Encrypted driver. This separates the encryption from the SQL Server database engine and enforces security in a better-controlled manner.

How do we implement Always Encrypted?

The first and foremost action is to install the right version of SQL Server 2016. If you do not have the right version, you will not find the option “Encrypt Columns” under the “Tasks” menu of the database. If you are not using SSMS version 13.0.4001.0 or above, you will not be able to see this option in your SSMS.

You can find and download SP1 from the Microsoft Download Center.

There are two ways we can implement Always Encrypted in SQL Server: using the wizard and using T-SQL. However, we need to know that for existing table/column data, there is no way to implement AE using T-SQL in SQL Server 2016 (SP1). I mention the service pack as a caveat because Microsoft may change this behavior in the future, though that looks unlikely as of now. For existing data, AE needs to be implemented using the wizard. It is also good to note that the wizard offers an option to generate a PowerShell script that encrypts the existing data and can be run later.

Using Wizard

Using T-SQL


-- Step 1: Register the column master key, stored in the Windows certificate store
/*1*/ CREATE COLUMN MASTER KEY CMK_Auto2
WITH (  
  KEY_STORE_PROVIDER_NAME = 'MSSQL_CERTIFICATE_STORE',   
  KEY_PATH = 'CurrentUser/my/B27A4A9FCC37F2C5B1807249FE1285CD4A40B88F');
-- Step 2: Create the column encryption key, itself encrypted by the master key
/*2*/ CREATE COLUMN ENCRYPTION KEY AEColumnKey   
WITH VALUES  
(  
COLUMN_MASTER_KEY = CMK_Auto2,   
ALGORITHM = 'RSA_OAEP',   
ENCRYPTED_VALUE = 0x016E000001630075007200720065006E00740075007300650072002F006D0079002F00620032003700610034006100390066006300630033003700660032006300350062003100380030003700320034003900660065003100320038003500630064003400610034003000620038003800660053FD933BC3E3E6E5FFD935F452A5C4113FF56E4D946D78B22A69415FF8EF69D9B3A5541F2463BBC32D06AC88AE95B4CDBBEE7A9D1DD80043D7C900F28917637F4414565CB3F2B29CEEE5C03DF182C4F62395CDAD59A59BFCBD421889DB9EFB2B5250AA597268011B8ACCFFA7A1B5D846BD476BBD8F8239D2681C800E3BCD848485AEC6E69FE76D06D2E213FB36FCBCA5E8B75FE67D21C1C05EB7CF819AD9F96701116A2B642F690455FC7DC48AEEB1825BB20ECD428F910C002EE3D186706E00F76C608EF78FBB147ABA798309092517A39C9C4031B3857C599B238174AA1E8433A649D63D194278B0A4EFBF15DF4E4B5B4468FB73FC8992B3E34606AB306E2E19BADEE4B38288FF77B9A8E45A56BB321091EF0CF3567076ED27D875286CB2232177F610B9A0DAEBFA34ABA9856A094E26E995987AD050D27954DDB08BED9A34C6D19CBE6B2271A7E716C33850DB8781C9D3B762C0920EED57BB9D2BA581F7AC1A46EA55962200FD26405FE31005D413BA5B624E5AF2770377A13EB68FB681242B8B719499175113E84073013BDC6E03E5F82EC070B9151705F1C564106B93E3C7566E41BAD00209AB4587278640FE225F797DD9BB83284E8A674DFC7F48558441E00BC856161FC93A38E337B050915450E7B0ED848CDB63272B65319B26B45119ED081852DEBE53DFF7A6CD21935FC3CBF2C4852AD01CFF0153B76C196F7667  
);
-- Step 3: Create a table whose SSN column is encrypted with the column encryption key
/*3*/ CREATE TABLE AlwaysEncryptedSampleSQL(
  EmpID INT PRIMARY KEY,
  SSN NVARCHAR(15) 
    COLLATE Latin1_General_BIN2 
    ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = AEColumnKey, 
    ENCRYPTION_TYPE = Deterministic, 
    ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL);

Why do we need two encryption keys?

Yes, Always Encrypted uses two keys – a column encryption key and a column master key. The column encryption key is used to encrypt the values in the column, and the database engine stores this key (in its encrypted form) in the SQL Server instance. The master key, however, is stored in an external key store such as the Windows certificate store or Azure Key Vault, and the database engine only keeps metadata pointing to that external key store. The master key is responsible for encrypting the column encryption key.

How do we verify the above implementation?

1. Check the master key

SELECT name KeyName,column_master_key_id KeyID,
  key_store_provider_name KeyStore,
  key_path KeyPath
FROM sys.column_master_keys;

2. Check the column key

SELECT name KeyName,
  column_encryption_key_id KeyID
FROM sys.column_encryption_keys;

3. Check the sys.columns

Select name,collation_name,encryption_type_desc, encryption_algorithm_name,column_encryption_key_id 
From sys.columns where object_id in (object_id('AlwaysEncryptedSample'),object_id('AlwaysEncryptedSampleSQL'))

How does the application encrypt and decrypt the values?

The client application uses an Always Encrypted driver. I would suggest you go through “Using Always Encrypted with the ODBC Driver for SQL Server” to understand the usage better. This blog post will be followed up by another post, which I am currently working on, covering the performance impact.

What are the different types of encryption in the Always Encrypted feature?

AE comes with two different types of encryption.

1. Deterministic
As the name suggests, this type always produces the same encrypted value for a given plaintext. This may not be a good option for all columns, as an intruder can infer values by analyzing data patterns – gender or yes/no columns, for example. Because the encrypted value for a given plaintext is always the same, the encrypted column can very well be part of joins, grouping, and indexing.

2. Randomized
As the name suggests, this type produces randomized values, which makes the encryption more secure than the deterministic type.

Gotchas!!!!…..

1. There is no straightforward way to implement AE for existing data apart from using the wizard. However, the wizard can generate a PowerShell script to do the action later.
2. The deterministic encryption method is less secure than the randomized one.
3. Columns with randomized encryption cannot be part of joins/grouping/indexing.
4. INSERT/UPDATE operations directly against the table are not allowed except through the client driver. We will receive the below error message:
Msg 206, Level 16, State 2, Line 5
Operand type clash: varchar is incompatible with varchar(8000) encrypted with (encryption_type = 'DETERMINISTIC', encryption_algorithm_name = 'AEAD_AES_256_CBC_HMAC_SHA_256', column_encryption_key_name = 'CEK_Auto1', column_encryption_key_database_name = 'test') collation_name = 'SQL_Latin1_General_CP1_CI_AS'

5. A login that has access to the encrypted table can see the actual data by specifying the encryption setting in “Additional Connection Parameters” in the SQL connection window. However, that login still cannot modify or insert new data.

6. Column encryption changes the collation of the string column to Latin1_General_BIN2.
7. Encryption increases the size of the table.

See Also

Please refer to Transparent Data Encryption.

To know more about other Always Encrypted limitations, please refer to Aaron’s blog.

It's always important to look at the license type of your SQL Server

It's always important to look at the license type of your SQL Server!!!

Problem Statement:

We recently had an issue where CPU utilization constantly stayed above 95% on the database server in one of our performance test environments. Load tests are resource intensive, so high CPU utilization is expected. However, we observed that the number of tests processed and the number of transactions were very low while CPU utilization spiked to 95%.

Let me explain a bit more about the environment: we have 4 sockets with 10 physical cores each, and HT (hyper-threading) enabled, in our test environment. As per this configuration, we have a total of 80 logical CPUs available. The SQL Server version information is as below:


Microsoft SQL Server 2014 (SP2) (KB3171021) - 12.0.5000.0 (X64) 
               Jun 17 2016 19:14:09 
               Copyright (c) Microsoft Corporation
               Enterprise Edition <"Core based Licensing" missing here> (64-bit) on Windows NT 6.3  (Build 9600: )

Here we can observe that the version information is missing “Core based Licensing”; that means this SQL Server is not core-based licensed, but CAL-based. Let us look at the excerpt from the MSDN article:

“Enterprise Edition with Server + Client Access License (CAL) based licensing (not available for new agreements) is limited to a maximum of 20 cores per SQL Server instance. There are no limits under the Core-based Server Licensing model”

So, though we have 4 sockets with 10 cores each, ONLY 20 cores are VISIBLE to SQL Server. In our environment, that means 20 * 2 (HT enabled) = 40 logical CPUs are visible.

Ref: https://technet.microsoft.com/en-us/library/cc645993(v=sql.120).aspx
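As a quick check, the licensing model also shows up in the edition string returned by SERVERPROPERTY; on a core-licensed instance, the Edition value should read "Enterprise Edition: Core-based Licensing":

Select SERVERPROPERTY('Edition') AS Edition,
    SERVERPROPERTY('ProductVersion') AS ProductVersion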

Let's confirm the above with the DMV sys.dm_os_schedulers.


Select parent_node_id,Count(cpu_id) Total_Schedulers,
 count(Case when Status = 'VISIBLE ONLINE' Then 1 Else null End) Visible_Count,
 count(Case when Status = 'VISIBLE OFFLINE' Then 1 Else null End) NotVisible_Count
From sys.dm_os_schedulers 
where status in ('VISIBLE ONLINE','VISIBLE OFFLINE') 
and parent_node_id not in (64)--DAC
Group by parent_node_id

In our environment, the result showed only 40 schedulers as VISIBLE ONLINE, with the remaining schedulers VISIBLE OFFLINE. This makes it clear that SQL Server was not able to utilize more than 40 logical CPUs in this environment.

Once we upgraded the license to core-based licensing, we were able to use all available CPUs in our environment, and we observed that database CPU utilization came down to 65%, resulting in a higher number of tests and transactions.

Let me reiterate: it's always important to look at the license type of your SQL Server!!!

Could not load assembly ‘Microsoft.SqlServer.Connectioninfo, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91’ or one of its dependencies. The system cannot find the file specified.

Problem Statement:

While trying to migrate from SQL Server 2008 R2 to SQL Server 2016, we ran into the following error:

Could not load assembly ‘Microsoft.SqlServer.Connectioninfo, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91’ or one of its dependencies. The system cannot find the file specified.

As a knee-jerk reaction, we initially thought it was an issue with the SQL Server 2016 installation and that we needed to install the Feature Pack for SQL Server 2016. Once the feature pack was applied, we tested again and, unfortunately (but really fortunately!!!), hit the same issue.

Analyzing further (I mean, reading the error message carefully!), we found that version 10.0.0.0 represents Microsoft® SQL Server® 2008.
As we did not have SQL Server 2008 installed in the environment, and the .NET application code referenced that version, the application was not able to find the DLLs in the specified path (C:\windows\assemblies\).

To proceed with our testing, we installed the SQL Server 2008 R2 feature pack and resumed our testing. Finally, it worked like a treat!!!

The learning: we always need to go by the facts (the error message) rather than blindly presume things!!!

Please find below the feature pack references for the various versions:

Microsoft® SQL Server® 2016 Feature Pack (Version: 13.0.1601.0)
https://www.microsoft.com/en-us/download/details.aspx?id=52676
Microsoft® SQL Server® 2014 SP2 Feature Pack (Version: 12.0.5000.0)
https://www.microsoft.com/en-us/download/details.aspx?id=53164
Microsoft® SQL Server® 2014 Feature Pack (Version: 12.0.0.0)
http://www.microsoft.com/en-us/download/details.aspx?id=42295
Microsoft® SQL Server® 2012 SP1 Feature Pack (Version: 11.0.0.0)
http://www.microsoft.com/en-us/download/details.aspx?id=35580
Microsoft® SQL Server® 2012 Feature Pack (Version: 11.0.2100.60)
https://www.microsoft.com/en-us/download/details.aspx?id=29065
Microsoft® SQL Server® 2008 R2 SP2 Feature Pack (Version: 10.50.4000.0)
https://www.microsoft.com/en-us/download/details.aspx?id=30440
Microsoft® SQL Server® 2008 R2 SP1 Feature Pack (Version: 10.50.2500.0)
http://www.microsoft.com/en-us/download/details.aspx?id=26728
Microsoft® SQL Server® 2008 R2 Feature Pack (Version: 10.50.1600.1)
https://www.microsoft.com/en-us/download/details.aspx?id=16978
Microsoft SQL Server 2008 Service Pack 4 Feature Pack (Version: 10.0.6000.29)
https://www.microsoft.com/en-in/download/details.aspx?id=44277
Microsoft SQL Server 2008 Service Pack 3 Feature Pack (Version: 10.00.5500.00)
https://www.microsoft.com/en-us/download/details.aspx?id=27596
Microsoft SQL Server 2008 Service Pack 2 Feature Pack (Version: 10.00.4000.00)
https://www.microsoft.com/en-us/download/details.aspx?id=6375
SQL Server 2008 Service Pack 1 (Version: 10.00.2531.00)
https://www.microsoft.com/en-in/download/details.aspx?id=20302

CREATE OR ALTER in SQL Server

One of my colleagues asked a follow-up question on my earlier post DROP IF EXISTS – A handy feature in SQL Server 2016:

Do we have something similar for creating objects?
The good news is that we have CREATE OR ALTER in SQL Server 2016 to address this long-awaited requirement. However, it is not available in RTM, only in SP1 and above.

Applies to : SQL Server 2016 SP1 or later versions

Here is a small script to demonstrate it.
SQL Script:


--Creating the procedure first time
CREATE OR ALTER PROCEDURE SP_TEST
AS
BEGIN
	SELECT 2
END
GO
EXEC SP_TEST
GO
--Altering the procedure 
CREATE OR ALTER PROCEDURE SP_TEST
AS
BEGIN
	SELECT 200
END
GO
EXEC SP_TEST


Note: CREATE OR ALTER can be used for functions, views, and triggers as well.
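For instance, a quick sketch with hypothetical object names:

CREATE OR ALTER VIEW VW_TEST
AS
SELECT 1 AS Col;
GO
CREATE OR ALTER FUNCTION FN_TEST()
RETURNS INT
AS
BEGIN
	RETURN 1;
END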

Query Store in SQL Server 2016 – Overview

The Query Store feature in SQL Server 2016 is very helpful for anyone working with SQL Server to identify and troubleshoot performance issues. It stores query information along with the query plan, and execution statistics are also collected for each query.

Applies to: SQL Server 2016 and later

Why do we need the Query Store feature, and how does it help?

Yes, in earlier versions we do have an option to analyse query execution using the cached plan. However, the downside is that cached plans are kept in memory and are no longer available once SQL Server is restarted. This was always a challenge for DBAs/performance engineers trying to identify issues with query execution.

Query Store initially keeps the plan and other information in memory and, as per the configuration, writes it to disk. So even if SQL Server gets restarted, the information already written to disk can still be retrieved.

To understand this in detail, let us look at the execution plan of a query against a simple DMV: sys.query_store_query. In the plan we can see objects like plan_persist_* (to read the data from disk) and QUERY_STORE_*_IN_MEM (to read data from memory), where * denotes different objects such as query, plan, runtime_stats, etc.

What do we need to configure for Query Store?


General ->
Operation Mode (Actual) – This field is read-only and shows the actual current mode of operation.

Operation Mode (Requested) – This is where you configure the mode of operation.
Options are OFF – to disable the Query Store
Read Only – you can ONLY read information from the Query Store
Read Write – you can READ and WRITE information to the Query Store

Monitoring ->
Data Flush Interval (Minutes) – Specifies how frequently the data is written to disk from memory. The higher the number, the greater the risk of losing data (if a restart occurs within the specified interval); the lower the number, the higher the overhead of writing to disk. The default value is 15 minutes.

Statistics Collection Interval – The interval over which query execution statistics are aggregated and recorded. The default is 1 hour.

Query Store Retention ->
Max Size (MB) – The maximum size in MB that can be allocated to store the information. The default value is 100. Once the size reaches the specified value, the Query Store stops writing and only reads are available.

Query Store Capture Mode – Determines which queries are captured.
Options are All – (default) collects information for all queries
Auto – collects only the relevant queries and filters out insignificant ones. There is no external influence on identifying significance; it is completely an internal process.
None – no collection of information

Size Based Cleanup Mode – Auto – (default if the Query Store is enabled) cleans up the oldest and least expensive queries from the Query Store as the size approaches the specified limit.
Off – no automatic cleanup

Stale Query Threshold (Days) – The number of days query information is retained. When the Query Store is enabled, the default value is set to 30 days. The same settings can also be configured with T-SQL, as sketched below.
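Here is a sketch of the equivalent T-SQL configuration (the database name Test is a placeholder; the values mirror the defaults described above):

ALTER DATABASE [Test] SET QUERY_STORE = ON;
ALTER DATABASE [Test] SET QUERY_STORE (
    OPERATION_MODE = READ_WRITE,
    DATA_FLUSH_INTERVAL_SECONDS = 900,   -- Data Flush Interval: 15 minutes
    INTERVAL_LENGTH_MINUTES = 60,        -- Statistics Collection Interval: 1 hour
    MAX_STORAGE_SIZE_MB = 100,
    QUERY_CAPTURE_MODE = ALL,
    SIZE_BASED_CLEANUP_MODE = AUTO,
    CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30)
);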

Where do we get the information from the Query Store?

Microsoft has provided a good number of DMVs to get the Query Store information.
Sys.database_query_store_options
Sys.query_store_query
Sys.query_store_query_text
Sys.query_context_settings
Sys.query_store_plan
Sys.query_store_runtime_stats
Sys.query_store_runtime_stats_interval

The above DMVs provide enough information about query execution statistics for you to troubleshoot query performance; see the example below.
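For example, a minimal sketch joining these DMVs to list the ten queries with the highest average duration:

Select TOP 10 qt.query_sql_text,
    rs.avg_duration,
    rs.count_executions
From sys.query_store_query_text qt
    Join sys.query_store_query q On qt.query_text_id = q.query_text_id
    Join sys.query_store_plan p On q.query_id = p.query_id
    Join sys.query_store_runtime_stats rs On p.plan_id = rs.plan_id
Order By rs.avg_duration Desc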

How do we purge the Query Store information?
You can always purge the Query Store information using the "Purge Query Data" button in Database Properties -> Query Store, or by using the below T-SQL:

ALTER DATABASE [Test] SET QUERY_STORE CLEAR ALL