PARSENAME in SQL Server: A Powerful Function

Introduction:
PARSENAME is one of my favourite functions in SQL Server. Unfortunately, I do not see many people using this function effectively, so I thought of sharing some of its usages here with you.

The PARSENAME function retrieves a specified part of a string that has four parts separated by a dot (.). It is especially handy for splitting four-part names, such as fully qualified object names, into their individual parts.
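For instance, PARSENAME numbers the parts from right to left, with part 1 being the rightmost. A quick literal-based illustration:

--Part numbering is right to left: 1 = rightmost part
Select PARSENAME('SER1.DB1.SCH1.TAB1',4) 'Part4', --returns SER1
       PARSENAME('SER1.DB1.SCH1.TAB1',1) 'Part1'  --returns TAB1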


--Database four-part object name usage
create Table Test_ParseName(ObjectName Varchar(100))

Insert into Test_ParseName Values('SER1.DB1.SCH1.TAB1'),('SER2.DB2.SCH2.TAB2')

Select PARSENAME(ObjectName,4) 'SERVERNAME',
       PARSENAME(ObjectName,3) 'DBNAME',
       PARSENAME(ObjectName,2) 'SCHEMANAME',
       PARSENAME(ObjectName,1) 'OBJNAME'
From Test_ParseName

Drop table Test_ParseName

The above is a typical example, similar to the one used in Books Online (BOL).

Here, I would like to introduce some more scenarios where the PARSENAME function can be used effectively.

1. To separate IPAddress sections and validate them:
Note: This IPAddress validation is not a fool-proof solution; it is used here only to showcase a typical usage of PARSENAME.


create Table Test_ParseName(IPAddress Varchar(100))

Insert into Test_ParseName Values('10.123.123.234'),('10.234.12.123'),('101.123.234.12.123'),('10.234.12'),('10.234.12.')

Select IPAddress,
		PARSENAME(IPAddress,4) 'SECTION1',PARSENAME(IPAddress,3) 'SECTION2',
		PARSENAME(IPAddress,2) 'SECTION3',PARSENAME(IPAddress,1) 'SECTION4',
		/*Another usage: check whether the text satisfies the four-part naming separated by "."
		Please note that this does not cover the entire validation of an IPAddress*/
		Case when (PARSENAME(IPAddress,1) is not null and PARSENAME(IPAddress,4) is not null)
				Then 'Satisfied four part' Else 'Invalid format' End 'FORMATCHECK'
From Test_ParseName

Drop table Test_ParseName
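If a slightly stronger check is needed, each section returned by PARSENAME can additionally be tested as a number between 0 and 255. Below is a minimal sketch of that idea; it assumes SQL Server 2012 or later for TRY_CONVERT, and it is still not fool-proof (for example, an empty section converts to 0, and leading zeros are accepted).

--A stricter (but still not fool-proof) octet check using TRY_CONVERT
--TRY_CONVERT(tinyint, ...) returns NULL for non-numeric or out-of-range (not 0-255) values
Select IPAddress,
       Case when TRY_CONVERT(tinyint, PARSENAME(IPAddress,4)) is not null
             and TRY_CONVERT(tinyint, PARSENAME(IPAddress,3)) is not null
             and TRY_CONVERT(tinyint, PARSENAME(IPAddress,2)) is not null
             and TRY_CONVERT(tinyint, PARSENAME(IPAddress,1)) is not null
            Then 'Looks like a valid IPv4' Else 'Invalid format' End 'OCTETCHECK'
From (Values('10.123.123.234'),('10.234.500.12'),('10.234.12')) v(IPAddress)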


2. Another example, with the REPLACE function:
PARSENAME can be combined with REPLACE in many places. Try out the sample below; it replaces the “-” character with “.” and then uses PARSENAME to split the four-part string easily.


create Table Test_ParseName(string Varchar(100))

Insert into Test_ParseName Values('Check-function-usage-efficiently'),('Check-function-usage'),('Check-function'),('Check'),('Check-function-usage-efficiently-failed')

;With cte_Test_parsename as
(Select REPLACE(string,'-','.') stringmodified From Test_Parsename)

Select stringmodified,
		PARSENAME(stringmodified,4) 'SECTION1',PARSENAME(stringmodified,3) 'SECTION2',
		PARSENAME(stringmodified,2) 'SECTION3',PARSENAME(stringmodified,1) 'SECTION4'
From cte_Test_parsename

Drop table Test_ParseName
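The same REPLACE/PARSENAME trick works for other delimited values of up to four parts. As a further illustration (a small sketch with made-up sample values; note that it breaks if the string already contains a "."), an underscore-delimited name can be split the same way:

--Splitting an underscore-delimited name into parts
Select string,
       PARSENAME(REPLACE(string,'_','.'),4) 'Part1',
       PARSENAME(REPLACE(string,'_','.'),3) 'Part2',
       PARSENAME(REPLACE(string,'_','.'),2) 'Part3',
       PARSENAME(REPLACE(string,'_','.'),1) 'Part4'
From (Values('Sales_2024_Q1_Final'),('Sales_2024_Q2')) v(string)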


Hope you enjoyed this post. If you come across any other scenario where PARSENAME can be used effectively, please share it.


Myth: Truncate cannot be rolled back as it is not logged

Discussion Point:
Once again, I heard “TRUNCATE cannot be rolled back, whereas DELETE can.” There are many posts available on the net regarding this, and still many people believe that TRUNCATE cannot be rolled back within a transaction.

Let us test and see the facts here.

Step 1: To understand better, we are going to create a test table and load some data into it, as below:


--Create test table - Test_Rollback
create Table Test_Rollback(ProductID int, Product_Name Varchar(MAX))
Go

--Data generation for the Test_Rollback table
;With cte as
(
	Select 1 ProductID, Cast('a' as varchar(MAX)) Product_Name
	Union All
	Select ProductID + 1, Replicate(Cast('a' as varchar(MAX)), ProductID + 1)
	From cte where ProductID <= 1000
)
Insert into Test_Rollback
Select ProductID, Product_Name From cte Option(MAXRECURSION 1000)

--Check the table data
Select * From Test_Rollback

Here, we inserted 1001 records into the Test_Rollback table to proceed with our testing.
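As a quick sanity check (not part of the original flow), the row count can be verified before proceeding:

--Verify the number of rows inserted
Select COUNT(*) 'TotalRows' From Test_Rollback --Expected: 1001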

Step 2: Now we are going to DELETE records from the table and see the effect of ROLLBACK. In addition, we are going to capture the number of log records generated during the delete operation.


Checkpoint
--Delete operation
Begin Tran Tran_Delete
	Delete From Test_Rollback
	Select [Description],[Transaction Name],* From fn_dblog(NULL,NULL) Where [AllocUnitName] = 'dbo.Test_Rollback' --1001 records
Rollback Tran Tran_Delete

Here, we issued a CHECKPOINT to move the active portion of the log forward and give us a clean slate for our testing. Please note that a CHECKPOINT does not mean those log records are cleared or dumped; they are still available in the log.

Once the delete operation completed, we used the fn_dblog function to read the transaction log records. fn_dblog is an undocumented function that reads records from the active portion of the transaction log file; although it is very powerful, since it is undocumented, please refrain from using it in a production environment. As fn_dblog usually returns a lot of information, we filtered on the AllocUnitName of our table (dbo.Test_Rollback). Querying this way, we found 1001 log records, which indicates that the delete operation works at the row level, with one log record per deleted row, as below.
Snapshot: fn_dblog output for Tran_Delete
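If only the count matters, the same fn_dblog output can simply be aggregated instead of browsing every column. A small sketch (again, fn_dblog is undocumented, so use it only in test environments), run inside the transaction before the ROLLBACK:

--Count log records generated for the table
Select COUNT(*) 'LogRecordCount'
From fn_dblog(NULL,NULL)
Where [AllocUnitName] = 'dbo.Test_Rollback'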

Step 3: Now we are going to TRUNCATE the table and see the effect of ROLLBACK. In addition, we are going to capture the number of log records generated here as well.


Checkpoint
--Truncate operation
Begin Tran Tran_Truncate
	Truncate Table Test_Rollback
	Select [Description],[Transaction Name],* From fn_dblog(NULL,NULL) Where [AllocUnitName] = 'dbo.Test_Rollback' --40 records
Rollback Tran Tran_Truncate

Here, we see only 40 log records, which is far fewer than what we saw with the DELETE operation (1001). Looking closely at the Description column, we can see that extents are deallocated during the TRUNCATE. See the sample below (only one sample is provided, for understanding).

Deallocated 0001:00003528;Deallocated 0001:00003529;Deallocated 0001:0000352a;Deallocated 0001:0000352b;Deallocated 0001:0000352c;Deallocated 0001:0000352d;Deallocated 0001:0000352e;Deallocated 0001:0000352f
Deallocated 1 extent(s) starting at page 0001:00003528
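A convenient way to compare the two cases is to group the log records by operation type. A sketch along these lines (using the same undocumented fn_dblog) makes the extent deallocations stand out from the row-level deletes:

--Summarize log records by operation type
Select Operation, COUNT(*) 'RecordCount'
From fn_dblog(NULL,NULL)
Where [AllocUnitName] = 'dbo.Test_Rollback'
Group By Operation
Order By COUNT(*) Desc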

Snapshot: fn_dblog output for Tran_Truncate

This is a clear indication that when a table is truncated, TRUNCATE deallocates at the EXTENT level, not at the ROW level. Hence, the number of log records generated is considerably smaller than for the DELETE operation.

Step 4: Now let's query the table and check for data existence.


--Check the Table data
Select * From Test_Rollback

Cool, we can see the data in the table. What does this mean? TRUNCATE can also be rolled back within a transaction.

Step 5: As a final step, we are going to clean up our test table.


Select * from Test_Rollback
Drop table Test_Rollback

Conclusions: Let us summarize our findings here:

1. TRUNCATE is a logged operation; in fact, it logs more efficiently by deallocating extents.
2. TRUNCATE deallocates at the extent level, not at the row level.
3. Because TRUNCATE works at the extent level, it writes fewer records to the log file than DELETE. Hence, TRUNCATE is usually faster than DELETE.
4. Finally, the myth is debunked: a TRUNCATE TABLE inside a transaction can also be rolled back.

Hope this post is useful; please share your thoughts.

T-SQL Script – Get SQL Job Details and History Details

Description

The script “sp_GetJobDetails” returns 3 result sets with job details and a job-history analysis. The procedure has a parameter that takes a Job_Id; however, it is optional, and a NULL value can be passed to fetch job details and history information for all jobs configured on the server.

The script will be very useful and handy for anyone looking for SQL Server Agent job details and their execution history.
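Assuming the procedure is created with the name and parameter described above (adjust to match the downloaded script), a typical call would look like this:

--Fetch details and history for all jobs on the server
Exec dbo.sp_GetJobDetails @Job_Id = NULL

--Fetch details for a single job (hypothetical job id; use your own)
Exec dbo.sp_GetJobDetails @Job_Id = 'B5C3A7E1-0000-0000-0000-000000000000'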

DOWNLOAD SCRIPT

Note: This is an enhanced version of my earlier script posted in MSDN Gallery at https://gallery.technet.microsoft.com/T-SQL-Script-Get-Job-231229b4.

Please download the script and share your suggestions.

Auto File Growth: Track the growth events Using Extended Events in SQL Server

This is a continuation of the blog post Auto File Growth – Different ways to track the growth events, written to answer one of the comments.

The author of the comment (Vinuraj R) was looking for a way to capture the growth events through Extended Events. Let me begin with a thank-you note to the author for bringing up Extended Events. Extended Events are a powerful way to capture most metrics from SQL Server 2008 onwards.

Here, we are going to see a method to track/capture the growth of data and log files using Extended Events.

First of all, we are going to create an EVENT SESSION on the server to capture the database_file_size_change event, as below. (You can also add a filter to capture information for a specific database, if required.)


-- Create the event session
CREATE EVENT SESSION [DB_file_size_changed] ON SERVER
ADD EVENT sqlserver.database_file_size_change(SET collect_database_name=(1)
    ACTION
    (sqlserver.client_app_name,sqlserver.client_hostname,sqlserver.database_id,sqlserver.session_id,
     sqlserver.session_nt_username,sqlserver.username)
    /* You can filter on the database that needs to be monitored for auto growth;
       pass the database id in the filter below */
    --WHERE ([database_id]=(<your_database_id>))
    )
ADD TARGET package0.event_file
(SET filename=N'C:\temp\DB_file_size_changed.xel')
WITH (MAX_DISPATCH_LATENCY=1 SECONDS)
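For reference, the commented-out filter above takes a database id literal. A minimal sketch for looking it up (assuming a database named 'MyDB'; replace with your own):

--Look up the database id first, then paste the literal into the session definition
Select DB_ID(N'MyDB') --suppose this returns 7
--...the filter inside ADD EVENT would then read:
--WHERE ([database_id]=(7))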

Once the event session has been created, we need to START it, as below. Please note that you may need the CONTROL SERVER permission to alter or drop an event session.


ALTER EVENT SESSION DB_File_size_Changed 
ON SERVER STATE = START -- START/STOP to start and stop the event session
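For completeness, the session can later be stopped and dropped in the same way (this also requires the CONTROL SERVER permission mentioned above):

--Stop the event session
ALTER EVENT SESSION DB_File_size_Changed ON SERVER STATE = STOP

--Remove the session definition from the server
DROP EVENT SESSION DB_File_size_Changed ON SERVER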

Once the event session is started, it captures the growth information into the target file, and we can view that information using the below script:


USE [master];
GO
SELECT
       Case when file_type = 'Data file' Then 'Data File Grow' Else file_type End AS [Event Name]
     , database_name AS DatabaseName
     , file_names
     , size_change_kb
     , duration
     , client_app_name AS Client_Application
     , client_hostname
     , session_id AS SessionID
     , Is_Automatic
FROM (
       SELECT
           n.value ('(data[@name="size_change_kb"]/value)[1]', 'int') AS size_change_kb
           , n.value ('(data[@name="database_name"]/value)[1]', 'nvarchar(50)') AS database_name
           , n.value ('(data[@name="duration"]/value)[1]', 'int') AS duration
           , n.value ('(data[@name="file_type"]/text)[1]', 'nvarchar(50)') AS file_type
           , n.value ('(action[@name="client_app_name"]/value)[1]', 'nvarchar(50)') AS client_app_name
           , n.value ('(action[@name="session_id"]/value)[1]', 'nvarchar(50)') AS session_id
           , n.value ('(action[@name="client_hostname"]/value)[1]', 'nvarchar(50)') AS client_hostname
           , n.value ('(data[@name="file_name"]/value)[1]', 'nvarchar(50)') AS file_names
           , n.value ('(data[@name="is_automatic"]/value)[1]', 'nvarchar(50)') AS Is_Automatic
       FROM
           (   SELECT CAST(event_data AS XML) AS event_data
               FROM sys.fn_xe_file_target_read_file(
                   N'C:\temp\DB_file_size_changed*.xel',
                   NULL,
                   NULL,
                   NULL)
           ) AS Event_Data_Table
       CROSS APPLY event_data.nodes('event') AS q(n)
     ) AS xyz
ORDER BY database_name

Here is a sample report generated with my testing.
Snapshot: AutoGrowth_ExtendedEvent

Hope this post helps in understanding another way of capturing file growth information.