How to get last running query based on SPID in SQL Server

We often need to identify the SQL text that was last executed for a given spid in SQL Server. Recently, one of my friends asked for this information for troubleshooting purposes, and I came up with a very simple query to help him, shown below.

Hope this will be helpful for those looking for similar information.


SELECT
    sysprc.spid, sysprc.waittime, sysprc.lastwaittype,
    DB_NAME(sysprc.dbid) AS database_name,
    sysprc.cpu, sysprc.physical_io, sysprc.login_time, sysprc.last_batch, sysprc.status,
    sysprc.hostname, sysprc.[program_name], sysprc.cmd, sysprc.loginame,
    OBJECT_NAME(sqltxt.objectid) AS [object_name], sqltxt.text
FROM sys.sysprocesses sysprc
OUTER APPLY sys.dm_exec_sql_text(sysprc.sql_handle) sqltxt
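
If you only need the last statement for a single session, the same query can be filtered on the spid column. A minimal sketch (56 is just a placeholder spid):

SELECT sysprc.spid, sysprc.status, sqltxt.text
FROM sys.sysprocesses sysprc
OUTER APPLY sys.dm_exec_sql_text(sysprc.sql_handle) sqltxt
WHERE sysprc.spid = 56 -- replace with the spid you are troubleshooting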

DBMIRROR_DBM_EVENT wait type causing blocking in SQL Server

Recently, we observed a lot of blocking on our production server, with the wait type showing as DBMIRROR_DBM_EVENT. In brief, the system is an OLTP system with heavy DML activity, and it ended up with many long-running transactions blocking user transactions. As the DBMIRROR_DBM_EVENT wait type is internal to Microsoft and there is little documentation available on the web, we started our troubleshooting by looking at the mirroring setup. Here are the points where we began our analysis.

1. Verify DB mirroring is working uninterrupted – if mirroring is not working for some reason, the impact in synchronous mode is huge: the principal server has to wait for the mirror before it can commit.

2. Verify the feasibility of changing the DB mirroring mode to asynchronous – changing the mirroring mode from synchronous to asynchronous is not a solution, but it is a workaround (see the sketch after this list). Ideally, we need to identify whether a long-running transaction, index maintenance, or similar activity caused the issue while the changes were being applied to the mirror.

3. Verify network latency between the principal and the mirror – network latency is one of the possible causes and can be assessed using perfmon counters, as shown below.
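
For points 2 and 3, here is a minimal sketch: the first query reads the mirroring-related perfmon counters through sys.dm_os_performance_counters (the object_name prefix differs on named instances, hence the LIKE), and the ALTER DATABASE switches a database to asynchronous mode as a temporary workaround ('YourDatabase' is a placeholder).

-- Mirroring perfmon counters exposed through the DMV
SELECT [object_name], counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Database Mirroring%'
AND counter_name IN ('Log Send Queue KB', 'Redo Queue KB', 'Transaction Delay')

-- Temporary workaround: switch to asynchronous (high performance) mode
-- 'YourDatabase' is a placeholder; SET PARTNER SAFETY FULL reverts to synchronous
ALTER DATABASE YourDatabase SET PARTNER SAFETY OFF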

Root cause and resolution

In our case, the root cause was the first one. For some reason, mirroring had been broken, and no endpoint was accepting connections from the principal server. This led to a situation on the principal server where every transaction had to wait for its log records (LSNs) to be hardened on the mirror. Once we reestablished mirroring, the wait type disappeared and no further blocking was observed.
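
In a case like this, a quick way to confirm the endpoint state on each server is to query the endpoint catalog views. A minimal sketch (the join to sys.tcp_endpoints just surfaces the port number):

-- The mirroring endpoint should be STARTED on both principal and mirror
SELECT dme.name, dme.state_desc, dme.role_desc, te.port
FROM sys.database_mirroring_endpoints dme
JOIN sys.tcp_endpoints te ON te.endpoint_id = dme.endpoint_id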

How to get the status of Mirroring:


SELECT
    DB_NAME(DATABASE_ID) AS [DATABASENAME],
    CASE WHEN MIRRORING_GUID IS NOT NULL THEN 'MIRRORING IS ON' ELSE 'NO MIRROR CONFIGURED' END AS [ISMIRRORON],
    MIRRORING_STATE_DESC,
    MIRRORING_ROLE_DESC AS [MIRROR ROLE],
    CASE WHEN MIRRORING_SAFETY_LEVEL = 1 THEN 'HIGH PERFORMANCE' WHEN MIRRORING_SAFETY_LEVEL = 2 THEN 'HIGH SAFETY' ELSE NULL END AS [MIRRORSAFETY],
    MIRRORING_SAFETY_LEVEL_DESC AS [SAFETY LEVEL],
    MIRRORING_PARTNER_INSTANCE AS [MIRRORSERVER],
    MIRRORING_PARTNER_NAME AS [PARTNER NAME],
    MIRRORING_WITNESS_NAME AS [WITNESS],
    MIRRORING_CONNECTION_TIMEOUT AS [TIMEOUT(SEC)]
FROM SYS.DATABASE_MIRRORING

See Also:
https://blogs.msdn.microsoft.com/grahamk/2011/01/10/sql-server-blocking-caused-by-database-mirroring-wait-type-dbmirror_dbm_event/

How do we recover unsaved query window in SQL Server Management Studio?


This is a simple tip to recover your unsaved query window in SSMS.

As SQL Server professionals, most of us work closely with SSMS on a daily basis. At times, we forget to save our work in Management Studio, and if the query window closes or SSMS crashes for some reason, we are left in a position of having "lost everything". You may even be questioned on your best practices. Here is a way to recover the SSMS window content, which can save you time and effort.

You can verify the below path and get the files:

C:\Users\<username>\Documents\SQL Server Management Studio\Backup Files

The above path may vary depending on the Windows version.

How does it happen?

There is an AutoRecover setting in Management Studio that is enabled by default. We can modify the default settings as per our need to reduce the risk of loss.

Go to SSMS -> Tools -> Options -> Environment -> AutoRecover

There are two different settings:

1) Save AutoRecover Information Every [n] Minutes

This option saves the SQL query window file at the specified interval. Set this value carefully based on how frequently you need your SSMS changes saved.

2) Keep AutoRecover Information for [n] Days

This option preserves the AutoRecover information for the specified number of days.

Error Message – The index '' is dependent on column ''. ALTER TABLE ALTER COLUMN '' failed because one or more objects access this column.

It is always annoying when we get a request to change the datatype of an existing column, whatever the reason. The change may be driven by a business requirement, bad design, or anything else. But when we make the change to accommodate it, we need to do a lot of analysis to ascertain its impact.

Here, we are going to explain a case of changing a column's datatype from varchar(n) to varchar(max). This blog ONLY explains the mechanics of changing the datatype, not the performance impact of the change.


CREATE TABLE ColumnDatatypeChange (Col1 int, Col2 varchar(1))
INSERT INTO ColumnDatatypeChange SELECT 1, 'L'
CREATE INDEX IX_ColumnDatatypeChange ON ColumnDatatypeChange (Col1) INCLUDE (Col2)
ALTER TABLE ColumnDatatypeChange ALTER COLUMN Col2 varchar(MAX) -- This statement will fail

On executing the above snippet, we get the error message below. (Please note that an index is dependent on the column.)

Error Message:
Msg 5074, Level 16, State 1, Line 8
The index ‘IX_ColumnDatatypeChange’ is dependent on column ‘Col2’.
Msg 4922, Level 16, State 9, Line 8
ALTER TABLE ALTER COLUMN Col2 failed because one or more objects access this column.

However, when we change the column to varchar(8000) instead – still a non-max type – the statement executes without any issue.


CREATE TABLE ColumnDatatypeChange (Col1 int, Col2 varchar(1))
INSERT INTO ColumnDatatypeChange SELECT 1, 'L'
CREATE INDEX IX_ColumnDatatypeChange ON ColumnDatatypeChange (Col1) INCLUDE (Col2)
ALTER TABLE ColumnDatatypeChange ALTER COLUMN Col2 varchar(8000) -- This statement will succeed

Reason:

The reason for this behavior is the change from a non-max type to a max type. SQL Server treats this change as dropping a column and adding a new one, because internally the non-max and max types do not share the same storage (row-overflow vs. LOB allocation units). Hence, the change has to update every row, moving the data from the dropped column to the newly added column, and the dependent index blocks that. For a non-max to non-max change, however, both types share the same storage (either in-row or in the row-overflow allocation unit), so the update can happen in place and there is no need to drop the dependent object (the index, in this case).
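
To see these allocation units for yourself, you can inspect the demo table before and after the ALTER. A minimal sketch against the ColumnDatatypeChange table from the example above – after the change to varchar(max), a LOB_DATA allocation unit appears alongside IN_ROW_DATA:

SELECT p.index_id, au.type_desc, au.total_pages
FROM sys.partitions p
JOIN sys.allocation_units au
    ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)   -- IN_ROW_DATA, ROW_OVERFLOW_DATA
    OR (au.type = 2 AND au.container_id = p.partition_id)    -- LOB_DATA
WHERE p.[object_id] = OBJECT_ID('ColumnDatatypeChange')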

Resolution:

1. Drop the index, change the column size, and then add the index back to the table, as below.


DROP INDEX IX_ColumnDatatypeChange ON ColumnDatatypeChange
ALTER TABLE ColumnDatatypeChange ALTER COLUMN Col2 varchar(MAX)
CREATE INDEX IX_ColumnDatatypeChange ON ColumnDatatypeChange (Col1) INCLUDE (Col2)

2. Analyze whether the column really needs to be MAX, and avoid the change if possible.

Thanks for reading, and I look forward to your feedback!

Msg 22841, Level 16, State 1, Procedure sp_cdc_vupgrade, Line 323 – while restoring from a CDC-enabled database backup

We encountered the issue below while restoring from a CDC-enabled database backup.

Though Management Studio showed the error message, the backup restore was successful. To understand further, we tried restoring the database using T-SQL, but it ended with the same error.

DBCC execution completed. If DBCC printed error messages, contact your system administrator.
Msg 22841, Level 16, State 1, Procedure sp_cdc_vupgrade, Line 323
Could not upgrade the metadata for database ‘CDCTest’ that is enabled for Change Data Capture. The failure occurred when executing the action ‘alter cdc.change_tables index change_tables_unique_idx with (drop_existing = on)’. The error returned was 4922: ‘line 102, state 9, ALTER TABLE ALTER COLUMN column_id failed because one or more objects access this column.’. Use the action and error to determine the cause of the failure and resubmit the request.
RESTORE DATABASE successfully processed 1058229 pages in 72.909 seconds (113.393 MB/sec).

The T-SQL output made it clear that the database restore itself was successful.

In addition, we found that Microsoft Knowledge Base article KB2567366 describes a mechanism for skipping the internal CDC upgrade process by enabling trace flag 3101 (-T3101). When this trace flag is used, the restore process bypasses the CDC upgrade operation associated with this issue.


DBCC TRACEON (3101, -1)

RESTORE DATABASE ArchivalTest_1 FROM DISK = 'C:\bkpfile.bak'
WITH MOVE 'bkpfile' TO 'C:\data\bkpfile.mdf'
    , MOVE 'bkpfile_log' TO 'C:\data\bkpfile.LDF'
    , REPLACE
    , KEEP_CDC;

DBCC TRACEOFF (3101, -1)

The above script executed successfully without any error message!

DBCC UPDATEUSAGE – Is it important post SQL Server version upgrade?

As part of a migration from SQL Server 2008 R2 to SQL Server 2014, we prepared a list of items to be taken care of post upgrade. There was a question in the panel about DBCC UPDATEUSAGE – should we really run it as part of post migration? Here are a few points in support of doing this operation.

What does DBCC UPDATEUSAGE do?

UPDATEUSAGE is a DBCC command that reports and corrects inaccurate page and row counts in the catalog views. These catalogs are used by sp_spaceused to report information, so it is important to keep them updated using the UPDATEUSAGE command.

Why do we need it post migration?

When upgrading a SQL Server version, it is important to update these catalogs, as they may be used internally for various purposes. The UPDATEUSAGE command primarily corrects the page allocations recorded for data and indexes. In addition, with the optional WITH COUNT_ROWS parameter, it can also correct the row counts, as below.

DBCC UPDATEUSAGE (0) WITH COUNT_ROWS -- 0 targets the current database
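
The command can also be scoped to a named database, or even to a single table. A minimal sketch ('YourDatabase' and 'dbo.YourTable' are placeholders):

DBCC UPDATEUSAGE ('YourDatabase') WITH NO_INFOMSGS, COUNT_ROWS
DBCC UPDATEUSAGE ('YourDatabase', 'dbo.YourTable') WITH COUNT_ROWS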

Few other details –

Note that you should not confuse the SELECT COUNT() method with methods that use the catalogs to get row count information. SELECT COUNT always reads from the underlying object, so its result is always accurate. Read more about it in the link below.

https://sqlzealots.com/2017/01/29/sql-server-count-table-rows-for-all-databases-using-sp_msforeachdb-and-sp_msforeachtable/
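
To illustrate the difference, compare a real scan against the metadata-based count. A minimal sketch ('dbo.YourTable' is a placeholder):

-- COUNT(*) scans the object, so it is always accurate
SELECT COUNT(*) AS actual_rows FROM dbo.YourTable

-- sys.dm_db_partition_stats reads (possibly stale) metadata
SELECT SUM(ps.row_count) AS metadata_rows
FROM sys.dm_db_partition_stats ps
WHERE ps.[object_id] = OBJECT_ID('dbo.YourTable')
AND ps.index_id IN (0, 1) -- heap or clustered index only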

DBCC UPDATEUSAGE holds a shared lock on the object, so it may cause blocking in a highly concurrent environment. It is not advisable to run the command directly on production; rather, plan to run it during a maintenance window.

The other way to invoke UPDATEUSAGE is through sp_spaceused, as below:

EXEC sp_spaceused @updateusage = 'true'

Hope you enjoyed this blog post. Let me know your thoughts on the topic.

The multi-part identifier "" could not be bound – using an INSERT statement to capture non-inserted values in an OUTPUT variable in SQL Server

Today, one of my colleagues asked how to get non-inserted values from an INSERT statement into an OUTPUT table variable.
Here is the sample code he tried; it fails with the error message below.

Msg 4104, Level 16, State 1, Line 12
The multi-part identifier “p.PId” could not be bound.

Sample code:


CREATE TABLE T1 (RID INT IDENTITY(1, 1), col1 INT, col2 INT)
CREATE TABLE T2 (PID INT IDENTITY(1, 1), col1 INT, col2 INT)
Declare @OutputTable Table (ID INT IDENTITY(1, 1), PID INT, RID INT, col1 INT, col2 INT)

INSERT INTO T2 VALUES (1, 1), (2, 2), (3, 3), (4, 4)

INSERT INTO T1 (col1, col2)
	OUTPUT p.PId, inserted.RID, inserted.col1, inserted.col2 -- p.PId is not visible to INSERT ... OUTPUT, causing Msg 4104
	INTO @OutputTable (PID, RID, col1, col2)
SELECT Col1, Col2 FROM T2 p

SELECT  * FROM    @OutputTable

DROP TABLE T1,T2

Solution:
While analyzing, it became clear that there is no straightforward way to get non-inserted values into the OUTPUT clause of an INSERT statement. However, there is a workaround for this scenario: use MERGE with an INSERT action, as below. Because the ON 1 = 0 predicate never matches, every source row falls into the WHEN NOT MATCHED branch, so the MERGE behaves like a plain INSERT while its OUTPUT clause is allowed to reference the source alias (p).


CREATE TABLE T1 (RID INT IDENTITY(1, 1), col1 INT, col2 INT)
CREATE TABLE T2 (PID INT IDENTITY(1, 1), col1 INT, col2 INT)
Declare @OutputTable Table (ID INT IDENTITY(1, 1), PID INT, RID INT, col1 INT, col2 INT)

INSERT INTO T2 VALUES (1, 1), (2, 2), (3, 3), (4, 4)

MERGE INTO T1 r USING T2 p ON 1 = 0 -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
    INSERT (col1, col2) VALUES (p.col1, p.col2)
    OUTPUT p.PId, inserted.RID, inserted.col1, inserted.col2
    INTO @OutputTable (PID, RID, col1, col2);

SELECT  * FROM    @OutputTable

DROP TABLE T1,T2