How to identify the NetBIOS name of a computer?

This is not SQL Server related, but I found it difficult to find a way to get the NetBIOS name of a computer, hence sharing it here.

Recently, while troubleshooting an issue with MSDTC, I had to use the DTCPing tool. I will not cover DTCPing here, but will share more about it in another post. The tool expects a NetBIOS name instead of an IP address.

So, here are a few options to identify the NetBIOS name of a computer.


Using T-SQL

Select serverproperty('ComputerNamePhysicalNetBIOS')
(Or)
exec master..xp_regread 'HKEY_LOCAL_MACHINE', 'SYSTEM\CurrentControlSet\Control\ComputerName\ComputerName', 'ComputerName'

Using Windows

nbtstat -n   (the -n switch lists local NetBIOS names)

That's it for now, but if you know of any other method, please post it in the comments section.


Error Message: The operation could not be performed because OLE DB provider “SQLNCLI11” for linked server “” was unable to begin a distributed transaction.

Problem Statement:

We had an issue with a linked server in one of our environments while executing a piece of functionality. The error message is shown below.

OLE DB provider “SQLNCLI11” for linked server “” returned message “The transaction manager has disabled its support for remote/network transactions.”.
Msg 7391, Level 16, State 2, Procedure “VIEWNAME” Line 8 [Batch Start Line 1]
The operation could not be performed because OLE DB provider “SQLNCLI11” for linked server “” was unable to begin a distributed transaction.

Analysis:

While analyzing, we identified that the error is actually caused by an INSERT operation on a view. To make it clearer, let's discuss the scenario with an example.

Let's assume we have two servers, ServerA and ServerB, connected through the linked server “LKSERVER”.

1. Create a table called dbo.TestLinkedServerTargetonPrimary on ServerA

Create Table dbo.TestLinkedServerTargetonPrimary(Col1 int)

2. Create a view called vw_LinkedServerTest on ServerB

create view vw_LinkedServerTest
as
	Select Col1 From [ServerA].DatabaseA.dbo.TestLinkedServerTargetonPrimary

3. Create an INSTEAD OF trigger for the INSERT operation on the view created above

create TRIGGER [dbo].TR_LinkedServerTest on [dbo].vw_LinkedServerTest
INSTEAD OF INSERT
AS
BEGIN
	INSERT INTO [ServerA].DatabaseA.dbo.TestLinkedServerTargetonPrimary (Col1)
	SELECT Col1 FROM inserted
END

4. Try inserting data into the linked server table through the view.

Insert into vw_LinkedServerTest Select 1

The insert operation fails with the error message below.

OLE DB provider “SQLNCLI11” for linked server “LKSERVER” returned message “The transaction manager has disabled its support for remote/network transactions.”.
Msg 7391, Level 16, State 2, Procedure TR_LinkedServerTest, Line 7 [Batch Start Line 21]
The operation could not be performed because OLE DB provider “SQLNCLI11” for linked server “LKSERVER” was unable to begin a distributed transaction.

The error is due to the transaction created by the INSTEAD OF trigger on the view. Though we do not have any explicit transaction defined, the INSTEAD OF trigger creates an implicit transaction. As the transaction scope spans the linked server, the trigger tries to open a distributed transaction, which fails because network DTC access is not enabled.

Let's quickly confirm the cause of the issue by modifying the trigger code to print the transaction count.
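Here is a minimal sketch of the modified trigger; the PRINT statement is my illustrative addition, not part of the original code:

ALTER TRIGGER [dbo].TR_LinkedServerTest on [dbo].vw_LinkedServerTest
INSTEAD OF INSERT
AS
BEGIN
	--Illustrative: even without an explicit transaction, @@TRANCOUNT is already 1 here
	PRINT 'Transaction count inside trigger: ' + CAST(@@TRANCOUNT AS varchar(10))

	INSERT INTO [ServerA].DatabaseA.dbo.TestLinkedServerTargetonPrimary (Col1)
	SELECT Col1 FROM inserted
END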

From the output, we can clearly see the transaction count increasing as the trigger executes.

Solution:

There are two options to solve the issue.

1. Grant the required security settings by enabling Network DTC Access

a. Open “Component Services” on both servers and enable Network DTC Access (with inbound and outbound transactions allowed) under the Local DTC security settings.

b. Once the settings are changed, try to execute the INSERT query again.

One drawback of this solution is the heavy dependency on MSDTC. Unless there is no alternative, I would not suggest using this method.

2. Try to Avoid the Trigger

Yes, as already discussed, the INSTEAD OF trigger on the view is causing the issue. If we drop the trigger, no distributed transaction is created and the query executes successfully.
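For our example, a minimal sketch (assuming the linked server provider allows updates through the view, as it does for SQL Server to SQL Server links):

--Drop the INSTEAD OF trigger; the view is based on a single remote table, so it stays updatable
DROP TRIGGER [dbo].TR_LinkedServerTest

--The insert now runs as a single remote statement, with no distributed transaction needed
Insert into vw_LinkedServerTest Select 1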

Hope this post helps you in similar situations; please share your feedback/comments.

Internal usage of TEMPDB by SQL Server DB engine

This post is a continuation of TEMPDB – the most important system database in SQL Server, covering one of the important internal uses of TEMPDB. Some operations in SQL Server use TEMPDB internally to improve their performance. Users may not be fully aware of this, but we can observe the usage with the query below.

--Query to understand the internal usage of TEMPDB
select
	reserved_MB = (unallocated_extent_page_count +
		version_store_reserved_page_count +
		user_object_reserved_page_count +
		internal_object_reserved_page_count +
		mixed_extent_page_count) * 8 / 1024.,
	unallocated_extent_MB = unallocated_extent_page_count * 8 / 1024.,
	internal_object_reserved_page_count,
	internal_object_reserved_MB = internal_object_reserved_page_count * 8 / 1024.
from sys.dm_db_file_space_usage

A few such operations are described below.

1. DBCC CHECKDB/CHECKALLOC

DBCC CHECKDB/CHECKALLOC use TEMPDB space to create an internal database snapshot for the operation; this efficiently avoids locking the database while performing the consistency check. DBCC CHECKDB and CHECKALLOC can estimate their TEMPDB usage through the ESTIMATEONLY option, as shown below. Please note this is an estimate and may not be exact; however, it is a good indication of the space required, and can be used to make sure the TEMPDB drive has enough free space before running DBCC operations on large databases.
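For example (substitute your own database name):

DBCC CHECKDB ('DatabaseA') WITH ESTIMATEONLY
--Returns the estimated TEMPDB space (in KB) needed for CHECKALLOC and CHECKTABLES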

2. SORT operations can spill over to TEMPDB

When a query is executed, the SQL optimizer chooses an already-cached plan if present, or creates a new plan based on its cost-based algorithms. When creating a plan, the optimizer/relational engine identifies the right operators to execute the query and estimates the memory required for the execution; this plan is then reused for further executions. If the estimate goes wrong, because of stale statistics or because the parameters used when the plan was created return fewer records, the estimated number of records deviates from the actual number of records at execution time. In such a scenario, the memory granted for the execution may not be sufficient.

If the SORT operator needs more memory to sort the data, in other words, if the memory granted is not sufficient for the sort, it spills over to TEMPDB. This has a performance impact on the query execution. The spill to TEMPDB can be observed in Profiler as Sort Warnings (below).

In the above snapshot, we can see Sort Warnings in Profiler with an Event Subclass of Single or Multiple. Whenever a sort operation spills to TEMPDB, SQL Server raises the ‘Sort Warnings’ event, with the subclass indicating whether it took a single pass or multiple passes over TEMPDB.

As mentioned, Sort Warnings, i.e. spills to TEMPDB, have a detrimental impact on query performance, so we need to identify the affected queries and avoid the spills if possible. Profiler alone may not show which query caused the Sort Warnings; we need to identify it from the cached plan or by setting up Extended Events to capture the warnings.
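A minimal Extended Events sketch for this (assuming a version where the sqlserver.sort_warning event is available, such as SQL Server 2016 onwards; the session and file names are mine):

CREATE EVENT SESSION CaptureSortWarnings ON SERVER
ADD EVENT sqlserver.sort_warning
	(ACTION (sqlserver.sql_text, sqlserver.database_name))
ADD TARGET package0.event_file (SET filename = N'SortWarnings')
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);

ALTER EVENT SESSION CaptureSortWarnings ON SERVER STATE = START;
--The captured sql_text shows exactly which statements spilled to TEMPDB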

Once we identify the query causing the Sort Warning, the easiest solution is to rewrite the query to avoid the sort. I have seen queries with CTEs using ORDER BY even though the ordering was not specifically required; use ORDER BY only where genuinely needed to avoid performance issues. At the same time, it may not be possible to remove the ORDER BY in every case. In such scenarios, we may need to evaluate adding a supporting index or modifying existing indexes, but this needs careful and clear understanding of the code and index usage. If you know the Sort Warnings are due to incorrect statistics, you can update the statistics, or apply the usual remedies for parameter sniffing (explaining parameter sniffing is outside the objective of this post).

3. Worktables/intermediate temp objects due to spooling/hash joins/aggregate Operations

Spools are special operators introduced by the SQL optimizer to improve the performance of a query. A spool operator is not an independent operator, but a supporting operator for another operator such as a Clustered Index Scan, Table Scan, or even a Constant Scan. A spool operator reads and stores intermediate “operated” data in TEMPDB from another operator, thereby increasing the performance of the query.

In the below snapshot, we can observe that the spool operator stores the data from its input, here a Constant Scan, in a temporary table to avoid multiple rewinds.

There are five types of spool operators in total: Eager, Lazy, Table, Row Count, and Non-Clustered Index spools. All spool operators store data in TEMPDB, but they differ in how they behave. Please explore the topic further for more details.
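As a quick illustration, updating the key of the same index a statement reads from commonly forces an Eager Spool for Halloween protection (the table name is made up for this sketch, and the optimizer is free to pick another plan shape):

CREATE TABLE dbo.SpoolDemo (Col1 INT PRIMARY KEY)

INSERT INTO dbo.SpoolDemo (Col1)
SELECT number FROM master..spt_values WHERE type = 'P'

--Reads and updates the same clustered index; the plan typically shows an Eager Spool (Table Spool)
UPDATE dbo.SpoolDemo SET Col1 = Col1 + 10000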

Hope you enjoyed this post, please post your feedback/thoughts in the comments.

Table Variables in SQL Server

Table variables are another type of temporary object for storing transient data. Please refer to Temporary Table objects for more details about temporary tables in SQL Server.

Differences between Table Variables and Temporary tables

When to use Temporary Tables over Table Variables, and vice versa

So, now we should be equipped to decide when to use table variables versus temporary tables. I would like to reiterate that statistics are one of the most important factors: a table variable does not have statistics, whereas a temporary table does. As the SQL Server optimizer uses cost-based optimization, statistics are a very important input for identifying the best plan for your query. If your transient data is more than about 100 rows, I would suggest using temporary tables (with the right indexes) over table variables, to let the statistics help SQL Server find the best plan.


DECLARE @TabVariableTesting TABLE (
 id INT PRIMARY KEY,
 ScrambledData Varchar(7000)
)

INSERT INTO @TabVariableTesting 
SELECT number, Replicate('SQLZEalot',100)
 FROM master..spt_values WHERE NAME IS NULL

SELECT * FROM @TabVariableTesting WHERE id > 75


CREATE TABLE #TempTableTesting (
 id INT PRIMARY KEY,
 somecolumn Varchar(7000)
)

INSERT INTO #TempTableTesting
SELECT number, Replicate('SQLZEalot',100)
 FROM master..spt_values WHERE NAME IS NULL

SELECT * FROM #TempTableTesting WHERE id > 75

DROP TABLE #TempTableTesting

In other words, if you have very little data, I would prefer table variables over temporary tables. If you need to store data in a user-defined function, table variables are currently the only way. The choice is not a hard rule; choose what best fits your needs/requirements.

Hope you enjoyed this post, please share your feedback and thoughts.

Linked Server in SQL Server – Best Practices

Linked Server is one of the easiest ways of communicating between multiple servers/instances in SQL Server. By linking the servers, you can send and receive data between the partners. This becomes handy as it may not need *lots of* changes in your code to work; the caveat is that it may cause performance issues depending on your code and other factors. In this post, I am putting together a few best practices for working with Linked Server.

1. “Collation Compatible” Setting while creating the Linked Server

Collation Compatible is a setting that tells SQL Server whether character-column comparisons can be evaluated on the remote server or must be evaluated locally. By default, the value is false.

If the value is true, SQL Server assumes that all partners of the linked server are compatible with regard to character set and sort order, and can therefore send character comparisons to the remote server.

If the value is false, SQL Server processes character comparisons locally by pulling the data from the remote server, which may cause a performance issue if the remote table is large. A word of caution: this setting should be considered with utmost care, as setting it to true for a performance gain while the linked servers actually have different collations can even lead to data inconsistency.
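The setting can be changed with sp_serveroption; the linked server name below is just an example:

EXEC sp_serveroption @server = N'LKSERVER',
	@optname = 'collation compatible',
	@optvalue = 'true'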

2. PUSH versus PULL method in Linked Server

PUSH and PULL denote how data flows in a query operated through a linked server. PUSH means sending (pushing) data to the partner; PULL means retrieving data from the partner as required. In the Linked Server world, the PULL method is much faster than the PUSH method.

E.g., if you need to get some data from the remote server, the local server can PULL the data from the remote table instead of having the remote server push it. This has a real impact on performance, which I personally experienced in one of my earlier projects. Having said that, the amount of data passing through the network has a major impact on performance: the more we can limit the data volume over the network, the better performance we get. It depends on the query being passed, the size of the tables, cardinality, the type of queries, etc. That is a huge topic for another day.
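A quick sketch of the two directions, run from the local server (the linked server and table names are made up for illustration):

--PULL: the local server pulls the rows it needs from the remote table
INSERT INTO dbo.LocalTable (Col1)
SELECT Col1 FROM [LKSERVER].RemoteDB.dbo.RemoteTable

--PUSH: the local server pushes rows into the remote table
INSERT INTO [LKSERVER].RemoteDB.dbo.RemoteTable (Col1)
SELECT Col1 FROM dbo.LocalTable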

3. Query dialects

SQL Server always tries to do the best work for you, meaning it will do most of the work at the remote server and return only the required data locally. However, sometimes this does not happen as desired, due to many factors. A few constructs can hinder remote execution when used in your queries (see the OPENQUERY sketch after this list):

Any formatting/conversion of data
Queries that use uniqueidentifier datatype
Queries that use TOP operator
Complex queries
UNION queries with local objects
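When remote execution matters, OPENQUERY can be used to force the whole statement to run on the remote server; a sketch with made-up names:

--The TOP and ORDER BY run entirely on the remote server; only 10 rows cross the network
SELECT * FROM OPENQUERY([LKSERVER],
	'SELECT TOP (10) Col1 FROM RemoteDB.dbo.RemoteTable ORDER BY Col1')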

4. Required Permission

This is one of the important criteria when you set up a linked server. SQL Server query execution depends heavily on object statistics, which decide the best execution plan for your query. Linked Server is not magic technology in SQL Server: it adds an OLE DB interface layer for communicating between servers, which is what enables Linked Server to connect heterogeneous systems. When your linked server connects two SQL Servers, the native client retrieves statistics from the remote server to process the query efficiently. Please find the SQL Profiler trace collected from the remote server for one of my linked queries:

The highlighted lines are the ones executed on the remote server to get the statistics. The user connecting to the remote server needs special permission to use its statistics, specifically one of the roles below:

1. sysadmin
2. db_owner
3. db_ddladmin

If the user does not have any of the above roles, the query cannot use the distributed statistics from the remote server, which impacts performance. The issue with granting these roles is that it compromises security.

Prior to SQL Server 2012 SP1, this was a "must verify" item. SQL Server 2012 SP1 brought relief: Microsoft changed the permission check so that even a read-only user (not necessarily a member of any of the above roles) can use the statistics information. You have the choice of disabling this behavior with trace flag 9485; that is, enabling trace flag 9485 reverts the new permission check to the original behavior. But be aware, this may not solve all your Linked Server performance issues.
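If you do need the old behavior, the trace flag can be enabled globally:

DBCC TRACEON (9485, -1)
--Reverts to the pre-SQL Server 2012 SP1 permission check for remote statistics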

Please share your experiences with Linked Server in the comments if you have anything interesting to add.

How to create a lock on a table that prevents other requests in SQL Server

Problem Statement:

Today, I got a very peculiar request from one of my colleagues: he wanted to put a lock on a table so that no other requests could be served. The request looked weird to me initially, as most of us want to avoid locking/blocking scenarios in a system, but here was a request to create one. But why? After asking many questions, I learned his intention: he wanted to replicate, in his local environment, a locking scenario that ends in a timeout.

Solution:

The solution provided is simple: put a schema modification lock (Sch-M) on the table. A schema modification lock is acquired when a DDL statement is executed on the table. As Sch-M is not compatible with any other lock, it prevents all access to the locked object's data. An example of the Sch-M lock is the lock taken during an index rebuild (note that with the ONLINE option, the index rebuild acquires the Sch-M lock only briefly at the end of the process).


Session 1:

--Create and populate the test table
create Table TestLock(Col1 int)
Insert into TestLock values (100)

--Acquire a schema modification lock (Sch-M) by leaving a DDL statement in an open transaction
begin Tran T1
ALTER TABLE TESTLOCK ALTER COLUMN COL1 VARCHAR(5000)

-----------------------------------------------------------------------
Session 2:

--Blocked: NOLOCK has no effect on an object locked with Sch-M
Select * From TestLOCK (NOLOCK)
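From a third session, we can confirm the lock with sys.dm_tran_locks:

SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_mode IN ('Sch-M', 'Sch-S')
--Session 1 holds Sch-M on the object; session 2 waits with request_status = 'WAIT'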

Thoughts:

Finally, I understood he was trying to add code to handle the locking situation rather than fix the timeout itself. As he mentioned the issue happens ONLY when the maintenance window is running, I believe it is related to a maintenance operation (index rebuild, statistics update, etc.). He would be better off later finding the actual issue and fixing it, for example by moving the functionality out of the maintenance window!