Problem Statement

We recently wanted to drop a few obsolete tables, really big ones, from our SQL Server databases. After the cleanup, i.e. dropping the tables, we would like to release the space they occupied back to the OS and reduce the associated storage cost. What are the best options in such situations?
DBA’s Ponderings

Usually, DBAs would never endorse shrinking a data file, for several reasons. They care about system performance more than disk cost; in fact, nowadays, worrying about storage cost alone may well be an outdated concern. A shrink can do a lot of harm to your databases, including:

1. It increases index fragmentation (a query to measure this yourself follows the list).
2. The resulting heavy fragmentation degrades system performance.
3. Shrinking a data file can cause dangerous locking issues, so it should be treated as an offline activity.
4. Shrinking a file that holds LOB data slows the process down badly.

For all these reasons, we should normally keep a long distance from shrink-file operations. They are not our systems' best friend.
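If you want to see point 1 for yourself, compare index fragmentation before and after a shrink. Here is a minimal sketch using the standard DMV; nothing in it is specific to any environment:

-- Average logical fragmentation per index in the current database;
-- LIMITED mode keeps the scan cheap on big tables
SELECT  OBJECT_NAME(ips.object_id)       AS table_name,
        i.name                           AS index_name,
        ips.avg_fragmentation_in_percent
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN    sys.indexes AS i
  ON    i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE   ips.index_id > 0                 -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;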
Then, how do we shrink a data file?

So what do we do when shrinking the data file is mandatory, as in our initial Problem Statement? If it is mandatory, meaningful, or sensible, you can do it with a few extreme considerations. In one of my cases, I had to clean up almost 1,000 tables from a database, and I had to do this for almost 2,000 databases. That came to almost 4 GB of data to be cleared per database, resulting in a whopping cost saving! This is a real example where a shrink file is important and there is a valid reason to do it. One of my colleagues, “Pascal Benguigui”, and I came up with the approach below. It may vary for others, but these are our deliberate thoughts on the options/considerations for applying a shrink file:

1. Understand the size of the data you want to eliminate. To do so, I would suggest you back up the database, restore it locally, remove the data, shrink the file, and then check the resulting data file size.
2. If the file size is more than 1000 MB, we can consider a SHRINKFILE; otherwise it may not be efficient (a quick query covering checks 2 and 3 follows this list).
3. If the file size is more than 1000 MB, also consider the percentage of free space. If the free space is less than 10%, the shrink again may not be efficient.
4. Shrink the file down to the used space plus 10%. This should be a good number, but it depends! It may or may not suit every case; a detailed analysis can be carried out per environment, leaving the final figure to the respective DBA's discretion.
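As a minimal sketch of checks 2 and 3, run the following inside the database in question. It uses only the standard catalog view and FILEPROPERTY, which is the same technique the procedure below relies on:

-- Size, used space and free-space percentage per data file (MB)
SELECT  name                                                   AS logical_name,
        size / 128                                             AS size_mb,
        FILEPROPERTY(name, 'SpaceUsed') / 128                  AS used_mb,
        (size - FILEPROPERTY(name, 'SpaceUsed')) * 100.0 / size AS pct_free
FROM    sys.database_files
WHERE   type_desc = 'ROWS';   -- data files only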
CREATE OR ALTER PROCEDURE Shrink_DataFile @db varchar(200)
AS
BEGIN
    DECLARE @f_name     varchar(200)
    DECLARE @f_file     varchar(500)
    DECLARE @f_size     int
    DECLARE @f_used     int
    DECLARE @pct_free   int
    DECLARE @f_newsize  int
    DECLARE @sql_query  nvarchar(2000)
    DECLARE @sql_param  nvarchar(200)

    PRINT 'Datafile SHRINK requested on database ['+@db+'] : '

    -- Loop over the PRIMARY filegroup data files of the database,
    -- but only if the database is in READ_WRITE mode
    DECLARE cur_name CURSOR FOR
        SELECT name
        FROM sys.sysaltfiles
        WHERE dbid = DB_ID(@db)
          AND groupid = 1
          AND CONVERT(sysname, DATABASEPROPERTYEX(@db, 'Updateability')) = 'READ_WRITE'
        ORDER BY fileid ASC
    OPEN cur_name
    FETCH NEXT FROM cur_name INTO @f_name
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Current size and used space (MB) of the file, via FILEPROPERTY
        SELECT @sql_param = N'@f_size_out int OUTPUT, @f_used_out int OUTPUT, @f_file_out varchar(500) OUTPUT'
        SELECT @sql_query = N'USE ['+@db+'];
            SELECT @f_size_out = CAST(s.size/128.0 AS int),
                   @f_used_out = CAST(FILEPROPERTY(s.name, ''SpaceUsed'')/128.0 AS int),
                   @f_file_out = filename
            FROM sys.sysfiles s
            WHERE groupid = 1 AND name = '''+@f_name+''''
        EXEC sp_executesql @sql_query, @sql_param,
             @f_size_out = @f_size OUTPUT,
             @f_used_out = @f_used OUTPUT,
             @f_file_out = @f_file OUTPUT;

        SELECT @pct_free = (@f_size - @f_used) * 100 / @f_size

        -- Only worth shrinking if the file is over 1000 MB with more than 10% free
        IF @f_size > 1000 AND @pct_free > 10
        BEGIN
            -- Target size keeps 10% free space, rounded up to the next multiple of 100 MB
            SELECT @f_newsize = @f_used * 1.1
            SELECT @f_newsize = CEILING(CAST(@f_newsize AS decimal(38,2)) / 100) * 100
            IF @f_newsize < @f_size
            BEGIN
                PRINT 'File "'+@f_file+'" (size '+CONVERT(varchar, @f_size)+' MB) will be shrunk to '+CONVERT(varchar, @f_newsize)+' MB ...'
                SELECT @sql_query = N'USE ['+@db+']; DBCC SHRINKFILE (N'''+@f_name+''' , '+CONVERT(varchar, @f_newsize)+') WITH NO_INFOMSGS'
                PRINT @sql_query
                EXEC sp_executesql @sql_query
                -- Re-read the file size to report the result
                SELECT @sql_param = N'@f_size_out int OUTPUT'
                SELECT @sql_query = N'USE ['+@db+']; SELECT @f_size_out = CAST(s.size/128.0 AS int) FROM sys.sysfiles s WHERE groupid = 1 AND name = '''+@f_name+''''
                EXEC sp_executesql @sql_query, @sql_param, @f_size_out = @f_size OUTPUT;
                PRINT '... New size for shrunk file "'+@f_file+'" is '+CONVERT(varchar, @f_size)+' MB '
            END
            ELSE
                PRINT 'NO SHRINK : New size estimated for file "'+@f_file+'" ('+CONVERT(varchar, @f_newsize)+' MB) not lower than the current file size ('+CONVERT(varchar, @f_size)+' MB)'
        END
        ELSE
            PRINT 'NO SHRINK : Size requirements not valid to shrink file "'+@f_file+'" (size '+CONVERT(varchar, @f_size)+' MB / free space '+CONVERT(varchar, @pct_free)+' %) '

        FETCH NEXT FROM cur_name INTO @f_name
    END
    CLOSE cur_name
    DEALLOCATE cur_name

    IF @f_file IS NULL
        PRINT 'database ['+@db+'] not existing or not in READ-WRITE mode '
END
GO

Post shrinkfile, we should expect heavy index fragmentation, which can severely impact system performance. We need to remove that fragmentation as quickly as possible by running index rebuild/reorganize operations; Ola Hallengren's well-known index maintenance solution can be used for exactly this. If you have to plan it for a large number of databases, consider running it in multiple batches so the process does not hurt the system, and be aware that your next backup jobs may take longer than usual. Most importantly, make sure the shrink and the index defragmentation that follows are scheduled during application off-peak hours, otherwise they can end up causing a large number of blocking/locking scenarios. A combined example of both steps follows below.
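Putting it all together, a run for a single database might look like this sketch. The database name is a placeholder, and the IndexOptimize call assumes Ola Hallengren's maintenance solution (https://ola.hallengren.com) is installed; the thresholds shown are illustrative, not prescriptive:

-- 1. Shrink the PRIMARY data file(s); 'YourDatabase' is a hypothetical name
EXEC dbo.Shrink_DataFile @db = 'YourDatabase';

-- 2. Defragment afterwards with Ola Hallengren's IndexOptimize
EXECUTE dbo.IndexOptimize
    @Databases = 'YourDatabase',
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh   = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,    -- >= 5% fragmentation -> medium actions
    @FragmentationLevel2 = 30;   -- >= 30% fragmentation -> high actions

I'd like to grow my readership. If you enjoyed this blog post, please share it with your friends!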