Author: Latheesh NK

The transaction log for database ‘database name’ is full due to ‘REPLICATION’

Today, we are going to look at a SQL Server exception: "The transaction log for database 'dbname' is full due to 'REPLICATION'".

Recently, we received a backup of a production database for troubleshooting purposes. The database had CDC (Change Data Capture) enabled in the production environment. When we restored it in our local environment, we noticed the database was very large, and a closer look at the size details showed that the transaction log was the main contributor.

Since the log was huge, we tried to shrink the file. Please note this was a non-production environment; DBCC SHRINKFILE should not be run without careful consideration and validation (a few references on shrinking files were written earlier on this blog). Since this was a testing environment, we were free to use the command to reclaim the space. The attempt failed with the error below.
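
For illustration, the attempted command looked roughly like this (the logical log file name 'Restoredbname_log' and the target size are placeholders for this sketch):

USE Restoredbname
DBCC SHRINKFILE (N'Restoredbname_log', 1024) -- hypothetical: target size in MB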

Msg 9002, Level 17, State 6, Line 10
The transaction log for database ‘Restoredbname’ is full due to ‘REPLICATION’.

We also observed that log_reuse_wait_desc was showing "REPLICATION" for the database:

SELECT log_reuse_wait_desc, * FROM sys.databases

Since this database had CDC enabled in the production environment, the first attempt was to disable CDC on the restored database.

USE Restoredbname
EXEC sys.sp_cdc_disable_db

Msg 22831, Level 16, State 1, Procedure sp_cdc_disable_db_internal, Line 262 [Batch Start Line 2]
Could not update the metadata that indicates database Restoredbname is not enabled for Change Data Capture. The failure occurred when executing the command ‘(null)’. The error returned was 9002: ‘The transaction log for database ‘Restoredbname’ is full due to ‘REPLICATION’.’. Use the action and error to determine the cause of the failure and resubmit the request.

We also noticed that a manual CHECKPOINT failed, because the log itself was out of space:

CHECKPOINT

Could not write a checkpoint record in database Restoredbname because the log is out of space. Contact the database administrator to truncate the log or allocate more space to the database log files.
Msg 5901, Level 16, State 1, Line 10
One or more recovery units belonging to database ‘Restoredbname’ failed to generate a checkpoint. This is typically caused by lack of system resources such as disk or memory, or in some cases due to database corruption. Examine previous entries in the error log for more detailed information on this failure.
Msg 9002, Level 17, State 6, Line 10
The transaction log for database ‘Restoredbname’ is full due to ‘REPLICATION’.

Finally, we decided to run sp_repldone on the database as shown below. When @xactid is NULL, @xact_seqno is NULL, and @reset is 1, all replicated transactions in the log are marked as distributed. This is useful when there are replicated transactions in the transaction log that are no longer valid and you want to truncate the log. The sp_removedbreplication stored procedure then removes all replication objects from the database (on the publication database on the Publisher instance, or on the subscription database on the Subscriber instance). Once we executed these commands, we were able to shrink the file and the space was reclaimed to the OS.

EXEC sp_repldone @xactid = NULL, @xact_seqno = NULL, @numtrans = 0, @time = 0, @reset = 1
EXEC sp_removedbreplication
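
After these two commands, the shrink went through. As a sketch (the logical log file name and target size are again placeholders):

USE Restoredbname
CHECKPOINT
DBCC SHRINKFILE (N'Restoredbname_log', 1024) WITH NO_INFOMSGS -- placeholder file name / target size (MB)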

The solution discussed here is NOT a general solution, and NOT for production. It should ONLY be used in non-production or lower environments where the restored database is used for testing and truncating the log is not a concern!


How to capture deadlock occurrences in SQL Server using sys.dm_os_performance_counters

Today, let us quickly see how to monitor deadlocks in SQL Server. There are multiple ways to get deadlock information.

This blog post details an approach to capture the number of deadlocks using the performance counter dynamic management view, sys.dm_os_performance_counters.

This approach does not cover capturing or analyzing the deadlock graph. It simply tells you whether deadlocks are occurring in your system, capturing the data at the interval you configure. The captured data can then be used to generate reports as per your requirements.

Object creation script

The first step is to create the object that stores the deadlock information. Please note, we are not going to capture the deadlock details, just the occurrence count. So we need a table with a DateAdded column to record the capture date and time, and a Deadlocks column to record the deadlock counter value at that point.

DROP TABLE IF EXISTS DeadlockTracker

CREATE TABLE DeadlockTracker(
    DateAdded datetime NOT NULL
    , Deadlocks int NOT NULL
)


CREATE UNIQUE CLUSTERED INDEX IX_DeadlockTracker_DateAdded_U_C ON DeadlockTracker
(
    DateAdded
) WITH (FillFactor = 100)

Capture Query

The next step is to define the capture process. The idea is to frequently query a dynamic management view, sys.dm_os_performance_counters, and log the data in the table defined in the first step.

We need to capture this information at a defined interval, for example every 15 minutes. We can configure a SQL Server Agent job for this (see the scheduling sketch after the capture query).
DECLARE @CounterPrefix NVARCHAR(30)
SET @CounterPrefix = CASE WHEN @@SERVICENAME = 'MSSQLSERVER'
                            THEN 'SQLServer:'
                        ELSE 'MSSQL$' + @@SERVICENAME + ':'
                        END

INSERT INTO DeadlockTracker (DateAdded, Deadlocks)
SELECT DateAdded = GETDATE()
     , Deadlocks = (SELECT cntr_value
                    FROM sys.dm_os_performance_counters
                    WHERE object_name LIKE @CounterPrefix + '%'
                      AND instance_name IN ('', '_Total')
                      AND counter_name = 'Number of Deadlocks/sec')
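
As a sketch of the scheduling piece, the capture batch above can be wrapped in a SQL Server Agent job that runs every 15 minutes (the job name and target database below are placeholders; adjust them to your environment):

USE msdb
GO
EXEC dbo.sp_add_job @job_name = N'DeadlockTracker - capture' -- placeholder job name
EXEC dbo.sp_add_jobstep @job_name = N'DeadlockTracker - capture',
	@step_name = N'Capture deadlock counter',
	@subsystem = N'TSQL',
	@database_name = N'DBAToolsDB', -- placeholder: the database holding DeadlockTracker
	@command = N'/* paste the capture query from above here */'
EXEC dbo.sp_add_jobschedule @job_name = N'DeadlockTracker - capture',
	@name = N'Every 15 minutes',
	@freq_type = 4,            -- daily
	@freq_interval = 1,
	@freq_subday_type = 4,     -- unit: minutes
	@freq_subday_interval = 15
EXEC dbo.sp_add_jobserver @job_name = N'DeadlockTracker - capture'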

Sample Report Dashboard Queries

This last step is nothing but visualization of the captured data. How to present it is entirely at the analyst's discretion, but a very basic sample is provided below for reference. Because the counter is cumulative since the last SQL Server restart, the difference between consecutive snapshots gives the number of deadlocks in each interval. You can change the query to group the data by month/day/quarter, etc.

SELECT *, Deadlocks - LAG(Deadlocks) OVER (ORDER BY DateAdded ASC) AS NewDeadlocks FROM DeadlockTracker
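
For example, a per-day rollup might look like the sketch below; since the counter is cumulative, it assumes the counter was not reset by a SQL Server restart within the period:

SELECT CAST(DateAdded AS date) AS CaptureDate
     , MAX(Deadlocks) - MIN(Deadlocks) AS DeadlocksPerDay
FROM DeadlockTracker
GROUP BY CAST(DateAdded AS date)
ORDER BY CaptureDate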


When was the database taken offline/online in SQL Server

Here is a script to identify when a database was taken offline/online, along with a few other details such as login name, host name, and application name.

Script

-- Reads the SQL Server error log for the OFFLINE/ONLINE message of the given database,
-- then joins to the default trace on SPID and time to find who performed the action.
DECLARE @DBNAME nvarchar(100)
  , @FileName nvarchar(max)
  , @Status nvarchar(10)

SET @DBNAME = NULL -- Provide [DBNAME]
SET @Status = 'OFFLINE' -- Provide [OFFLINE / ONLINE]
SELECT @FileName = [path] FROM sys.traces WHERE is_default = 1

DECLARE @ErrorLogTable table (Logdate datetime, ProcessInfo nvarchar(10), [Text] nvarchar(max))

INSERT INTO @ErrorLogTable
EXEC xp_readerrorlog 0, 1, @Status, @DBNAME, NULL, NULL, 'desc'

SELECT DISTINCT DatabaseID, DatabaseName, HostName, ApplicationName, LoginName, StartTime, B.ProcessInfo, B.Text
FROM sys.fn_trace_gettable(@FileName, DEFAULT) A
INNER JOIN @ErrorLogTable B ON A.SPID = CAST(SUBSTRING(B.ProcessInfo, 5, 5) AS int) -- ProcessInfo looks like 'spid55'
  AND CAST(StartTime AS nvarchar) = CAST(B.Logdate AS nvarchar)                     -- match the same point in time
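
Note that the script reads only the current error log (first parameter 0 of xp_readerrorlog) and relies on the default trace, which is enabled by default but worth verifying:

SELECT name, value_in_use FROM sys.configurations WHERE name = 'default trace enabled'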


How do we list the available instances of SQL Server

Here is a script to list the SQL Server instances installed on a machine, together with their major versions. It reads the registry via xp_instance_regenumvalues and xp_regread, so it requires sysadmin-level permissions.

Script

DECLARE @regpath NVARCHAR(128)
DECLARE @regkey NVARCHAR(500)
DECLARE @Inst VARCHAR(128)

DROP TABLE IF EXISTS #AllInstancesOnMachine, #CurVer
CREATE TABLE #AllInstancesOnMachine (InstanceName VARCHAR(128), RegPathToUse VARCHAR(128), MajorVersionFound VARCHAR(50))
CREATE TABLE #CurVer (RegValue VARCHAR(128), VersionFound VARCHAR(50))

INSERT INTO #AllInstancesOnMachine (InstanceName, RegPathToUse)
EXEC   master..xp_instance_regenumvalues
	@rootkey = N'HKEY_LOCAL_MACHINE',
	@key     = N'SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL'

DECLARE VersionCursor CURSOR FOR
SELECT InstanceName, RegPathToUse FROM #AllInstancesOnMachine

OPEN VersionCursor

WHILE (1=1)
BEGIN
	FETCH NEXT FROM VersionCursor INTO @Inst, @regpath
	IF @@FETCH_STATUS < 0 BREAK

	SET @regkey = N'Software\Microsoft\Microsoft SQL Server\'
					+@regpath+'\MSSQLSERVER\CurrentVersion'

	TRUNCATE TABLE #CurVer

	INSERT INTO #CurVer (RegValue, VersionFound)
	EXECUTE master.sys.xp_regread
		@rootkey		= N'HKEY_LOCAL_MACHINE',
		@key			= @regkey,
		@value_name		= N'CurrentVersion'
		
	UPDATE #AllInstancesOnMachine SET MajorVersionFound = 
			(SELECT CASE Parsename(VersionFound,4)  when 10 then 'SQL 2008 or 2008 R2'
								when 11 then 'SQL 2012'
								when 12 then 'SQL 2014'
								when 13 then 'SQL 2016'
								when 14 then 'SQL 2017'
								when 15 then 'SQL 2019' End FROM #CurVer) 
	WHERE InstanceName = @Inst

END

CLOSE VersionCursor
DEALLOCATE VersionCursor

SELECT * FROM #AllInstancesOnMachine
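
To sanity-check the version mapping for the instance you are currently connected to, SERVERPROPERTY returns the same information without touching the registry:

SELECT SERVERPROPERTY('ServerName') AS InstanceName
     , SERVERPROPERTY('ProductVersion') AS ProductVersion -- the part before the first dot is the major version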


A way to Shrink Data File in SQL Server

Problem Statement

We recently wanted to drop a few obsolete tables, some of them really big, from our SQL Server databases. After the clean-up, i.e. dropping the tables, we would like to release the space they occupied back to the OS and reduce the associated cost. What are the best options in such situations?

DBA’s Ponderings

Usually, DBAs would never support shrinking a data file, for several reasons. They care more about system performance than disk cost; nowadays, worrying about disk cost may even seem an outdated concern. Shrinking can do a lot of harm to your databases; some of the problems are listed below:

1. It increases index fragmentation.

2. The heavy fragmentation can in turn cause system performance issues.

3. Shrinking a data file can cause serious locking issues; it should be treated as an offline activity.

4. Shrinking a file containing LOB data can be painfully slow.

For all of the above reasons, we should generally keep our distance from shrink file operations. It is not our system's best friend.

Then, how do we shrink the data file?

What do we do if shrinking the data file is mandatory, as in our initial Problem Statement?

If it is mandatory, meaningful, or sensible, then you can do it with a few extreme considerations. In one of my cases, I had to clean up almost 1000 tables per database, across almost 2000 databases, clearing almost 4 GB of data from each database and resulting in a whopping cost saving! This is a real example where the shrink file is important and there is a valid reason to do it. One of my colleagues, Pascal Benguigui, and I came up with the approach below.

Let us look at the options/considerations for applying a shrink file operation. They may vary for others, but these are our deliberate thoughts.

1. Understand the size of the data you want to eliminate. To do this, I would suggest taking a backup, restoring the database locally, removing the data, running the shrink, and then checking the resulting size of the data file.

2. If the file size is more than 1000 MB, we can consider a shrink; otherwise, it is unlikely to be worthwhile.

3. If the file size is more than 1000 MB, also consider the percentage of free space. If the free space is less than 10%, the shrink is again unlikely to be efficient.

4. Shrink the file down to the used space plus 10%. This should be a good number, but it depends! It may or may not suit all cases; a detailed analysis can be carried out per environment, leaving it to the respective DBA's discretion. A worked example of this sizing rule follows below.
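
As a worked example of the sizing rule in point 4 (the same arithmetic the script below uses: used space plus 10%, rounded up to the next multiple of 100 MB), with a hypothetical used size of 3275 MB:

DECLARE @used_mb int = 3275 -- hypothetical used space in MB
DECLARE @target_mb int
SELECT @target_mb = @used_mb * 1.1 -- add 10% headroom; integer assignment truncates to 3602
SELECT @target_mb = CEILING(CAST(@target_mb AS decimal(38,2))/100)*100 -- round up to the next 100: 3700
SELECT @target_mb AS NewSizeMB -- the file would be shrunk to 3700 MB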

Shrink Script


-- Shrinks the primary-filegroup data files of the given database to used space + 10%,
-- rounded up to the next 100 MB, when the file is > 1000 MB with > 10% free space.
CREATE OR ALTER procedure Shrink_DataFile @db varchar(200)
as
BEGIN

declare @f_name varchar(200)
declare @f_file varchar(500)
declare @f_size int
declare @f_used int
declare @pct_free int
declare @f_newsize int
declare @sql_query nvarchar(2000)
declare @sql_param nvarchar(200)

print 'Datafile SHRINK requested on database ['+@db+'] : '
declare cur_name cursor for 
select name from sys.sysaltfiles where dbid=db_id(@db) and groupid=1 and CONVERT(sysname,DatabasePropertyEx(@db,'Updateability'))='READ_WRITE' order by fileid asc

open cur_name
fetch next from cur_name into @f_name

while @@fetch_status=0
       begin

       select @sql_param =N'@f_size_out int OUTPUT,@f_used_out int OUTPUT,@f_file_out varchar(500) OUTPUT'
       select @sql_query=N'use ['+@db+']; select @f_size_out=CAST(s.size/128.0 AS int) , @f_used_out=CAST(FILEPROPERTY(s.name, ''SpaceUsed'')/128.0 AS int),@f_file_out=filename from sys.sysfiles s where groupid=1 and name='''+@f_name+''''
       EXEC sp_executesql @sql_query, @sql_param, @f_size_out=@f_size OUTPUT, @f_used_out=@f_used OUTPUT,@f_file_out=@f_file OUTPUT ;
       select @pct_free=(@f_size-@f_used)*100/@f_size

       if @f_size >1000 and @pct_free>10
             BEGIN
             -- shrink should keep 10% free space and be a multiple of 100 higher
             select @f_newsize=@f_used*1.1
             select @f_newsize=ceiling(cast(@f_newsize as decimal(38,2))/100)*100
             if @f_newsize < @f_size
                    BEGIN
                    print 'File "'+@f_file+' (size '+convert(varchar,@f_size)+' MB) " will be shrinked to '+convert(varchar,@f_newsize)+' MB ...'
                    select @sql_query=N'USE ['+@db+']; 
                    DBCC SHRINKFILE (N'''+@f_name+''' , '+convert(varchar,@f_newsize)+')  WITH NO_INFOMSGS'
                    print @sql_query
                    exec sp_executesql @sql_query

                    select @sql_param =N'@f_size_out int OUTPUT'
                    select @sql_query=N'use ['+@db+']; select @f_size_out=CAST(s.size/128.0 AS int) from sys.sysfiles s where groupid=1 and name='''+@f_name+''''
                    EXEC sp_executesql @sql_query, @sql_param, @f_size_out=@f_size OUTPUT;
                    print '... New size for file "'+@f_file+'" after shrink is '+convert(varchar,@f_size)+' MB
'
                    END
             else
                    print 'NO SHRINK : New size estimated for file "'+@f_file+'" ('+convert(varchar,@f_newsize)+' MB) not lower than the current file size ('+convert(varchar,@f_size)+' MB)'

             END
       else
             print 'NO SHRINK : Size requirements not valid to shrink file "'+@f_file+' (size '+convert(varchar,@f_size)+' MB / free space '+convert(varchar,@pct_free)+' %)
       '

       fetch next from cur_name into @f_name
       end
       close cur_name
deallocate cur_name

if @f_file is null
       print 'database ['+@db+'] does not exist or is not in READ_WRITE mode
'

END
GO
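
Calling the procedure is straightforward; the database name here is a placeholder:

EXEC Shrink_DataFile @db = 'YourDatabaseName'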

After the shrink file operation, we should expect a lot of fragmentation, which can severely impact system performance. We need to remove that fragmentation as quickly as possible by rebuilding or reorganizing the indexes. Ola Hallengren's well-known index maintenance solution can be used for this.
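
As a sketch, assuming Ola Hallengren's IndexOptimize procedure is installed in the current database (the parameters below follow its documented defaults; adjust the thresholds to your environment):

EXECUTE dbo.IndexOptimize
	@Databases = 'USER_DATABASES',
	@FragmentationLow = NULL, -- leave lightly fragmented indexes alone
	@FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
	@FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
	@FragmentationLevel1 = 5,  -- % fragmentation boundary between low and medium
	@FragmentationLevel2 = 30  -- % fragmentation boundary between medium and high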

If you have to plan this for a large number of databases, consider running it in multiple batches so the process does not hurt the system. There is also a chance that the backup jobs immediately following the shrink will take longer than usual. Most importantly, make sure the shrink followed by index defragmentation is planned during application off-peak hours; otherwise it can end up causing a large amount of blocking/locking.
