SQL Server – Unable to shrink Transaction Log file

Background

Received an alert stating that we have dipped below our 10% free-space threshold on one of our DB Servers.

Thank goodness, no late-night phone calls.

I only saw it on my way in to work this morning.

The good and the bad of smartphones.

Troubleshooting

Windows Desktop

Folder

Image

Explanation

  1. ppsivr_8.ldf is 28 GB

SQL Metadata

Outline

Here are the commands, catalog views, and dynamic management views (DMVs) that we will access:

  1. dbcc sqlperf(logspace)
  2. sys.database_files
  3. sys.databases
  4. sys.dm_exec_requests & sysprocesses

 

Review Transaction Log File Sizes

SQL


dbcc sqlperf(logspace)

Output

 

Explanation

  1. ppsivr
    • Log Size MB
      • 27,699 MB
      • 27 GB

Review Database File Sizes

SQL


-- replace with targeted database
use [database]

SELECT 
		  [name]

		, [type]
			= tblSDF.[type_desc]

		, [filename]
			= tblSDF.[physical_name]

		, [allocatedMB]
			= tblSDF.size / 128 -- size is in 8-KB pages; 128 pages = 1 MB

		, [usedMB]
			= CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)
				/ 128

		, [availableSpaceMB]
				 = ( size - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) )
					/ 128

FROM sys.database_files tblSDF

order by
		tblSDF.size desc
;

Output

Review Database Transaction Log Re-usability State

SQL


declare @dbid   int
declare @dbname sysname

set @dbname = 'ppsivr'
set @dbid = db_id(@dbname)

select 
		    tblSD.[database_id]
		  , tblSD.[name]
		  , tblSD.[log_reuse_wait_desc]

from   sys.databases tblSD

where  (
			   (@dbid is null)

			or ( tblSD.[database_id] = @dbid)
		
	   )

Output

Explanation

  1. log_reuse_wait_desc
    • The database is not recycling its transaction log due to an ‘Active Transaction’
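
To help chase down the blocking transaction, we can ask for the oldest active transaction. Here is a sketch using DBCC OPENTRAN alongside the sys.dm_tran_database_transactions DMV:

```sql
use [ppsivr]
go

-- Report the oldest active transaction in the current database
dbcc opentran
go

-- DMV alternative: active transactions and how much log each has generated
select
		  tblTDT.[transaction_id]
		, tblTDT.[database_transaction_begin_time]
		, tblTDT.[database_transaction_log_bytes_used]

from   sys.dm_tran_database_transactions tblTDT

where  tblTDT.[database_id] = db_id('ppsivr')
```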

Review Current DB Requests

SQL



declare @dbid   int
declare @dbname sysname

set @dbname = 'ppsivr'
set @dbid = db_id(@dbname)

select 
		  tblSDER.[session_id]

		, [database] 
			= db_name(tblSDER.[database_id])

		, [openTransaction]
			= tblSP.[open_tran]

		, tblSDER.[status]

		, tblSDER.[command]

		, tblSDER.[percent_complete]

		, tblSDER.wait_type

		, tblSDER.last_wait_type

		, tblSDER.[reads]

		, tblSDER.[writes]

from   sys.dm_exec_requests tblSDER

inner join master..sysprocesses tblSP

	on tblSDER.[session_id] = tblSP.[spid]

where  (

	         ( tblSP.[open_tran] != 0 )

	      or ( tblSDER.[database_id] = @dbid)

      )



Output

Explanation

  1. User Transactions
    • It does not appear that we have user transactions currently using our targeted database
  2. Background
    • WAIT_XTP_OFFLINE_CKPT_NEW_LOG
      • It does not appear that WAIT_XTP_OFFLINE_CKPT_NEW_LOG updates the percent_complete column, and so it is a bit difficult to track its progress
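
Since percent_complete stays unhelpful, a rough alternative for watching the offline checkpoint wait is to sample its accumulated wait time; a sketch against sys.dm_os_wait_stats:

```sql
-- Accumulated wait time for the In-Memory (XTP) offline checkpoint wait
select
		  tblOWS.[wait_type]
		, tblOWS.[waiting_tasks_count]
		, tblOWS.[wait_time_ms]

from   sys.dm_os_wait_stats tblOWS

where  tblOWS.[wait_type] = 'WAIT_XTP_OFFLINE_CKPT_NEW_LOG'
```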

 

Remediation

Here are the things we tried:

  1. dbcc shrinkfile
  2. Take database offline and bring back online

Attempt to Shrink Transaction Log Files

SQL


use [ppsivr]
go

dbcc shrinkfile('ppsivr_log')
go

Output

Explanation

  1. We want to take a good look at Current Size and Used Pages
    • In our case they are the same
    • What is in our Transaction Log File?
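
To peek at what is sitting in our Transaction Log file, here is a sketch; note that DBCC LOGINFO is undocumented but widely used, and sys.dm_db_log_space_usage is available on SQL Server 2012 and later:

```sql
use [ppsivr]
go

-- Virtual Log Files; Status = 2 marks a VLF as active (not reusable)
dbcc loginfo
go

-- Overall log usage for the current database
select
		  [usedLogMB]
			= tblLSU.[used_log_space_in_bytes] / ( 1024 * 1024 )

		, [usedLogPercent]
			= tblLSU.[used_log_space_in_percent]

from   sys.dm_db_log_space_usage tblLSU
```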

 

Take Database Offline and bring it back online

Force termination of ongoing sessions

SQL


use master
go

ALTER DATABASE [ppsivr]
	SET SINGLE_USER WITH ROLLBACK IMMEDIATE
	;

alter database [ppsivr] set offline;

alter database [ppsivr] set online;

ALTER DATABASE [ppsivr]
	SET MULTI_USER WITH ROLLBACK IMMEDIATE
	;

 

Summary

But, nothing worked!

We will come back and talk about what worked.

References

  1. Microsoft
    • Support.Microsoft.com
      • FIX: Offline checkpoint thread shuts down without providing detailed exception information in SQL Server 2014
        Link

Transact SQL – In Memory Programmable Objects

Background

On a couple of SQL Server Instances that were recently grandfathered into our group, there are a few In-Memory tables.

And, so I find myself using these precious hours in between Christmas and the forthcoming New Year to catch up on In-Memory Tables.

What good are In-Memory tables without In-Memory programmable objects such as Stored Procedures?

Let us quickly create a couple of Stored Procedures.

 

Code

Programmable Objects

Stored Procedures

dbo.usp_Emp_Get_FilteredOnSSN

 




if object_id('[dbo].[usp_Emp_Get_FilteredOnSSN]') is not null
begin
	drop procedure [dbo].[usp_Emp_Get_FilteredOnSSN]
end
go

create procedure [dbo].[usp_Emp_Get_FilteredOnSSN]
(
	@ssn char(9)
)  
with 
		  native_compilation
		, schemabinding
		, execute as owner  
as  
begin atomic  

	with 
	(
		  transaction isolation level=snapshot
		, language=N'us_english'
	)  

	select
			  tblEmp.[oe_emp_ssn]
			, tblEmp.[oe_emp_location]

	from   [dbo].[oe_emp] tblEmp

	where  tblEmp.[oe_emp_ssn] = @ssn


end  
go  

dbo.usp_Emp_Set_FilteredOnSSN

Code


if object_id('[dbo].[usp_Emp_Set_FilteredOnSSN]') is not null
begin
	drop procedure [dbo].[usp_Emp_Set_FilteredOnSSN]
end
go

create procedure [dbo].[usp_Emp_Set_FilteredOnSSN]
(
	  @ssn      char(9)
	, @location char(2)
)  
with 
		  native_compilation
		, schemabinding
		, execute as owner  
as  
begin atomic  

	with 
	(
		  transaction isolation level=snapshot
		, language=N'us_english'
	)  

	update [dbo].[oe_emp]

	set	[oe_emp_location] = @location
		
	where  [oe_emp_ssn] = @ssn

end  
go  


Unit Test

set nocount on;
set XACT_ABORT on;
go

declare @ssn char(9)
declare @location char(2)
declare @location_Temp char(2)
declare @location_v2 char(2)


begin tran

	select top 1
				@ssn = [oe_emp_ssn]
			, @location = [oe_emp_location]
	from   [dbo].[oe_emp] with (SNAPSHOT)

	set @location_Temp = 'NA'

	exec [dbo].[usp_Emp_Set_FilteredOnSSN]
			@ssn = @ssn
		, @location = @location_Temp


	select @location_v2 = [oe_emp_location]
	from   [dbo].[oe_emp] with (SNAPSHOT)
	where  [oe_emp_ssn] = @ssn


	select 
			  [@location] = @location
			, [@location_Temp] = @location_Temp
			, [@location_v2] = @location_v2	

rollback tran

 

Restrictions

There are a number of restrictions placed on Stored Procedures marked for Native Compilation.

The restrictions include:

  1. SQL Syntax
    • Cannot use the LIKE operator
      • If we attempt to do so, we will get the error pasted below
        • Sample SQL Statement
          • select tblEmp.[oe_emp_ssn],  tblEmp.[oe_emp_location] from [dbo].[oe_emp] tblEmp where tblEmp.[oe_emp_ssn] like @ssn
        • Error Message
          • Msg 10794, Level 16, State 62, Procedure usp_Emp_Get_FilteredOnSSN, Line 42
            The operator ‘LIKE’ is not supported with natively compiled stored procedures.
    • Cannot use the wildcard (*) for Column Names
      • If we attempt to do so, we will get the error pasted below
        • Sample SQL Statement
          • select tblEmp.* from [dbo].[oe_emp] tblEmp where tblEmp.[oe_emp_ssn] like @ssn
        • Error Message
          • Msg 1054, Level 15, State 2, Procedure
            Syntax ‘*’ is not allowed in schema-bound objects.
    • Cannot use Begin Transaction / Rollback / Commit within Natively Compiled Stored Procedures
      • The same effect is handled declaratively, as Natively Compiled Stored Procedures are designated as atomic
        • Database Features > In-Memory OLTP (In-Memory Optimization)  > Natively Compiled Stored Procedures
          Link

          • BEGIN ATOMIC is part of the ANSI SQL standard.
          • SQL Server supports atomic blocks at the top-level of natively compiled stored procedures, as well as for natively compiled, scalar user-defined functions.
            • Every natively compiled stored procedure contains exactly one block of Transact-SQL statements. This is an ATOMIC block.
            • Non-native, interpreted Transact-SQL stored procedures and ad hoc batches do not support atomic blocks.
        • Sample Code
          • begin atomic
            —– source code —
            end

 

Table of Errors

Error Number  Error Details  Remediation
1054          Syntax ‘*’ is not allowed in schema-bound objects.  Explicitly list the Column Names.
10794         The operator ‘LIKE’ is not supported with natively compiled stored procedures.  The LIKE operator cannot be used in a natively compiled Stored Procedure; please use a regular Stored Procedure.
41320         EXECUTE AS clause is required, and EXECUTE AS CALLER is not supported, with natively compiled stored procedures.  Replace “with native_compilation, schemabinding” with “with native_compilation, schemabinding, execute as owner”.
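
For Msg 10794, the remediation above points at a regular, interpreted stored procedure; here is a sketch (the procedure name usp_Emp_Get_LikeSSN is hypothetical):

```sql
-- Hypothetical interpreted (non-native) procedure hosting the LIKE filter
if object_id('[dbo].[usp_Emp_Get_LikeSSN]') is not null
begin
	drop procedure [dbo].[usp_Emp_Get_LikeSSN]
end
go

create procedure [dbo].[usp_Emp_Get_LikeSSN]
(
	@ssn varchar(9)
)
as
begin

	select
			  tblEmp.[oe_emp_ssn]
			, tblEmp.[oe_emp_location]

	-- interpreted access to an In-Memory table still needs the SNAPSHOT hint
	from   [dbo].[oe_emp] tblEmp with (SNAPSHOT)

	where  tblEmp.[oe_emp_ssn] like @ssn

end
go
```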

 

 

References

  1. Database Features > In-Memory OLTP (In-Memory Optimization) > Natively Compiled Stored Procedures > Atomic Blocks in Native Procedures
    Link
  2. Database Engine Features and Tasks > Database Features > In-Memory OLTP (In-Memory Optimization) > Scalar User-Defined Functions for In-Memory OLTP
    Link

Transact SQL – Atomicity and In-Memory Tables

 

Background

In this post, we touch on a couple of errors that one might encounter when trying to access In-Memory tables within a Transaction.

Error

Error Number Error Description
 Msg 41333 The following transactions must access memory optimized tables and natively compiled stored procedures under snapshot isolation: RepeatableRead transactions, Serializable transactions, and transactions that access tables that are not memory optimized in RepeatableRead or Serializable isolation.
 Msg 41332 Memory optimized tables and natively compiled stored procedures cannot be accessed or created when the session TRANSACTION ISOLATION LEVEL is set to SNAPSHOT.

 

 

Code

Here is how to reproduce the errors listed above.

Set Isolation Level

Set Isolation Level – Serializable

Code


SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

begin tran

	select top 10 tblE.*

	from   [dbo].[oe_emp] tblE

rollback tran

 

Output – Textual


Msg 41333, Level 16, State 1, Line 20
The following transactions must access memory optimized tables and natively compiled stored procedures under snapshot isolation: RepeatableRead transactions, Serializable transactions, and transactions that access tables that are not memory optimized in RepeatableRead or Serializable isolation.


Output – Image

msg41333-setisolationlevelserializable

 

Explanation

  1. The error message states that “The following transactions must access memory optimized tables and natively compiled stored procedures under snapshot isolation: RepeatableRead transactions, Serializable transactions, and transactions that access tables that are not memory optimized in RepeatableRead or Serializable isolation.”
  2. It seems that we have to be in Snapshot isolation when we access Memory Optimized Tables

 

Set Isolation Level

Set Isolation Level – Snapshot

Code

 


set nocount on;
--SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

begin tran

	begin try

		select top 10 tblE.*

		from   [dbo].[oe_emp] tblE

		print 'Successful '

	end try

	begin catch

		print 'Exception'
		print '========='
		print 'ErrorNumber    :- ' + cast(ERROR_NUMBER() as varchar(600))
		print 'ErrorState     :- ' + cast(ERROR_STATE() as varchar(600))
		print 'ErrorProcedure :- ' + cast(isNull(ERROR_PROCEDURE(), -1) as varchar(600))
		print 'ErrorLine      :- ' + cast(isNull(ERROR_LINE(), -1) as varchar(600))
		print 'ErrorMessage   :- ' + cast(ERROR_MESSAGE() as varchar(600))

		
	end catch

rollback tran

 


Exception
=========
ErrorNumber    :- 41332
ErrorState     :- 0
ErrorProcedure :- -1
ErrorLine      :- 10
ErrorMessage   :- Memory optimized tables and natively compiled stored procedures cannot be accessed or created when the session TRANSACTION ISOLATION LEVEL is set to SNAPSHOT.


Output – Image

msg41333-setisolationlevelsnapshot

 

Explanation

  1. So even when we set our Transaction Isolation Level to SNAPSHOT, we still get an error
  2. Thankfully, it is a more specific error, stating that “Memory optimized tables and natively compiled stored procedures cannot be accessed or created when the session TRANSACTION ISOLATION LEVEL is set to SNAPSHOT.”

 

 

Remediation

Query

Add Table Hint SNAPSHOT To In-Memory Table Reference

Objective

We added SNAPSHOT as a table hint when referencing In-Memory Tables

Code



set nocount on;

/*
	Add Table Hint, SNAPSHOT, to In-Memory Table References
*/
begin tran

	begin try

		
		select top 10 tblE.*

		from   [dbo].[oe_emp] tblE with (SNAPSHOT)

		print 'Successful '

	end try

	begin catch

		print 'Exception'
		print '========='
		print 'ErrorNumber    :- ' + cast(ERROR_NUMBER() as varchar(600))
		print 'ErrorState     :- ' + cast(ERROR_STATE() as varchar(600))
		print 'ErrorProcedure :- ' + cast(isNull(ERROR_PROCEDURE(), -1) as varchar(600))
		print 'ErrorLine      :- ' + cast(isNull(ERROR_LINE(), -1) as varchar(600))
		print 'ErrorMessage   :- ' + cast(ERROR_MESSAGE() as varchar(600))

		
	end catch

rollback tran


Explanation

  1. Add Table Hint (SNAPSHOT) to all In-Memory Table References

 

 

Add Table Hint SNAPSHOT To All In-Memory Table References

Objective

We added SNAPSHOT as a table hint to all our In-Memory Tables

Code


set nocount on;

/*
	Add Table Hint to all In-Memory Table Object References with (SNAPSHOT)
*/
begin tran

	begin try

		
		select top 10 tblOEE.*

		from   [dbo].[oe_emp] tblOEE with (SNAPSHOT)

		inner join [dbo].[oe_dep] tblOED with (SNAPSHOT)

			on tblOEE.[oe_emp_ssn] = tblOED.[oe_dep_emp_ssn]

		print 'Successful '

	end try

	begin catch

		print 'Exception'
		print '========='
		print 'ErrorNumber    :- ' + cast(ERROR_NUMBER() as varchar(600))
		print 'ErrorState     :- ' + cast(ERROR_STATE() as varchar(600))
		print 'ErrorProcedure :- ' + cast(isNull(ERROR_PROCEDURE(), -1) as varchar(600))
		print 'ErrorLine      :- ' + cast(isNull(ERROR_LINE(), -1) as varchar(600))
		print 'ErrorMessage   :- ' + cast(ERROR_MESSAGE() as varchar(600))

		
	end catch

rollback tran



Explanation

  1. Add Table Hint (SNAPSHOT) to all In-Memory Table References
    • We added it to oe_emp and oe_dep

 

Database Settings

Set Database Option “MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT” ON


set nocount on;
set XACT_ABORT on;

/*
	For our current database:
		Set Memory Optimized Elevate to Snapshot ON  
*/


ALTER DATABASE CURRENT 
	SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT=ON  
go

begin tran

	begin try

		
		select top 10 tblOEE.*

		from   [dbo].[oe_emp] tblOEE --with (SNAPSHOT)

		inner join [dbo].[oe_dep] tblOED --with (SNAPSHOT)

			on tblOEE.[oe_emp_ssn] = tblOED.[oe_dep_emp_ssn]

		print 'Successful '

	end try

	begin catch

		print 'Exception'
		print '========='
		print 'ErrorNumber    :- ' + cast(ERROR_NUMBER() as varchar(600))
		print 'ErrorState     :- ' + cast(ERROR_STATE() as varchar(600))
		print 'ErrorProcedure :- ' + cast(isNull(ERROR_PROCEDURE(), -1) as varchar(600))
		print 'ErrorLine      :- ' + cast(isNull(ERROR_LINE(), -1) as varchar(600))
		print 'ErrorMessage   :- ' + cast(ERROR_MESSAGE() as varchar(600))

		
	end catch

rollback tran
go

/*
	Revert Database Setting
*/
ALTER DATABASE CURRENT 
	SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT=OFF  
go

Programmable Objects

Stored Procedure

In a follow-up post, we discuss how to achieve the same coverage when creating Stored Procedures.

 

Summary

In summary, to avoid the aforementioned errors, we can utilize Table Hints, specifically with (SNAPSHOT), on each query.

Or declaratively set the Database Option, MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT, on each In-Memory Database.

 

References

  1. Database Features > In-Memory OLTP (In-Memory Optimization) > Memory-Optimized Tables
    Link
  2. Database Features > In-Memory OLTP (In-Memory Optimization) > Memory-Optimized Tables > Transactions with Memory-Optimized Tables
    Link

 

 

 

Transact SQL – Identify In-Memory Objects

Introduction

Let us identify In Memory Objects in our SQL Server Instance.

 

User Tables – sys.tables

Overview

In the sample code below, we identify In-Memory objects by checking the is_memory_optimized column in the sys.tables catalog view.

Code



select
		 [table]
			= quotename(tblSS.[name])
				+ '.'
				+ quotename(tblST.[name])

from   sys.tables tblST

inner join sys.schemas tblSS

		on tblST.[schema_id] = tblSS.[schema_id]

where  tblST.is_memory_optimized = 1

order by
		[table] asc


Output

identifyinmemorytables-systables

User Tables & Indexes – Based on FileGroups

Overview

In the sample code below, we identify In Memory Objects by seeking out objects that are located on In-Memory Filegroups.

Code



select 

		  [object]
			= tblSS.[name]
				+ '.'
				+ tblSO.[name]

		, [objectType]
			= tblSO.[type_desc]


		, [indexName]
			= tblSI.[name]


		, [isPrimaryKey]
			= case
				when tblSI.[is_primary_key] = 1 then 'Yes'
				else 'No'
			  end

		, [indexType]
			= tblSI.[type_desc]
	
		, [fileGroup]
			= tblSFG.[name]

		, [fileGroupType]
			= tblSFG.[type_desc]

from   sys.objects tblSO

INNER JOIN sys.schemas tblSS

	ON tblSO.schema_id = tblSS.schema_id

inner join sys.indexes tblSI

	on tblSO.object_id = tblSI.object_id

inner join sys.partitions tblSP

	on  tblSI.object_id = tblSP.object_id
	and tblSI.index_id = tblSP.index_id

		
INNER JOIN sys.allocation_units tblSAU

	on tblSAU.container_id = tblSP.hobt_id
		 
INNER JOIN sys.filegroups tblSFG
 
	ON tblSFG.data_space_id = tblSAU.data_space_id 

where tblSFG.[type] = 'FX'



Output

identifyinmemoryobjects

 

 

User Tables

DML Stats

Overview

Review Statistics of DML Operations against In-Memory Tables.

Code


SELECT
	  [object]
		= quoteName(tblS.[name])
		   + '.'
		   + quoteName(tblO.[name])

        , tblXTPOS.*

FROM sys.dm_db_xtp_object_stats tblXTPOS

INNER JOIN sys.objects tblO

	ON tblXTPOS.object_id = tblO.object_id

INNER JOIN sys.schemas tblS

	ON tblO.schema_id = tblS.schema_id


 

Output

dmlstats

 

Index Stats

Overview

Review statistics of index operations against In-Memory Tables.

Code





SELECT
        [object]
        = quoteName(tblS.[name])
           + '.'
           + quoteName(tblO.[name])
 
		, [index]
			= tblSI.[name]

		, [indexType]
			= tblSI.[type_desc]

		, [isPrimaryKey]
			= case
				when ( tblSI.[is_primary_key] = 1 ) then 'Y'
				else 'N'
			  end

        , [scansStarted]
			= tblXTPIS.scans_started	

		, [scansRetried]
			= tblXTPIS.scans_retries
			
		, [rowsReturned]
			= tblXTPIS.rows_returned

		, [rowsTouched]
			= tblXTPIS.rows_touched	

		/*

		--- Expiring -----
		, tblXTPIS.rows_expiring
		, tblXTPIS.rows_expired
		, tblXTPIS.rows_expired_removed	
		--- Expiring -----

		-- Phantom ----
		, tblXTPIS.phantom_scans_started	
		, tblXTPIS.phantom_scans_retries	
		, tblXTPIS.phantom_rows_touched	
		, tblXTPIS.phantom_expiring_rows_encountered	
		--, phantom_expired_rows_encountered	
		, tblXTPIS.phantom_expired_removed_rows_encountered	
		, tblXTPIS.phantom_expired_rows_removed
		-- Phantom ----

		*/

FROM sys.dm_db_xtp_index_stats tblXTPIS
 
INNER JOIN sys.objects tblO
 
    ON tblXTPIS.object_id = tblO.object_id

INNER JOIN sys.indexes tblSI
 
    ON  tblXTPIS.object_id = tblSI.object_id 
    AND tblXTPIS.index_id = tblSI.index_id 

INNER JOIN sys.schemas tblS
 
    ON tblO.schema_id = tblS.schema_id

order by
		  tblS.[name]
		, tblO.[name]
		, tblSI.[name]

Output

imtabledmlindexstats

Explanation

  1. In-Memory tables do not support Clustered Indexes, as all data is stored in memory
    • If clustered, then memory would have to be continuously shuffled to maintain proper sequence
  2. It makes sense to review scans_started
    • Doing so is informative in terms of which indexes are actually being used
    • And, to gauge popularity
  3. The rows_returned Column
    • Reviewing the rows_returned column is also important to follow trends in access methods

Memory Usage Stats

Memory Usage Stats – Object

Overview

Review Memory allocated and in-use by In-Memory Tables.

Code


SELECT

	[object]
		= quotename(tblSS.name)
			+'.'
			+ quotename(tblSO.name)

    , [allocatedMB]
		= SUM(tblXTPMC.[allocated_bytes]) / (1024* 1024)

	, [usedMB]
		= SUM(tblXTPMC.[used_bytes]) / (1024 * 1024)

FROM  sys.objects tblSO

INNER JOIN sys.schemas tblSS
 
	ON tblSO.schema_id = tblSS.schema_id 

INNER JOIN sys.indexes tblSI
 
	ON tblSO.object_id = tblSI.object_id 

INNER JOIN sys.dm_db_xtp_memory_consumers tblXTPMC 

	ON  tblSI.object_id = tblXTPMC.object_id
	AND tblSI.index_id  = tblXTPMC.index_id

GROUP BY 
		  quotename(tblSS.name)
		, tblSO.schema_id
		, tblSO.object_id
		, tblSO.[name]
		
ORDER BY 
		  quotename(tblSS.name)
		, tblSO.[name]
;


Output

memoryallocatedandinusedbyobject

Memory Usage Stats – Index

Overview

Review Memory allocated and in-use by In-Memory Table Indexes.

Code


SELECT

	[object]
		= quotename(tblSS.name)
			+'.'
			+ quotename(tblSO.name)

	, [index]
		= tblSI.name

	, [indexType]
		= tblSI.[type_desc]

    , [isPrimaryKey]
        = case
            when tblSI.[is_primary_key] = 1 then 'Yes'
            else 'No'
            end

    , [allocatedMB]
		= SUM(tblXTPMC.[allocated_bytes]) / (1024* 1024)

	, [usedMB]
		= SUM(tblXTPMC.[used_bytes]) / (1024 * 1024)

FROM  sys.objects tblSO

INNER JOIN sys.schemas tblSS
 
	ON tblSO.schema_id = tblSS.schema_id 

INNER JOIN sys.indexes tblSI
 
	ON tblSO.object_id = tblSI.object_id 

INNER JOIN sys.dm_db_xtp_memory_consumers tblXTPMC 

	ON  tblSI.object_id = tblXTPMC.object_id
	AND tblSI.index_id  = tblXTPMC.index_id

GROUP BY 
		  quotename(tblSS.name)
		, tblSO.schema_id
		, tblSO.object_id
		, tblSO.[name]
		, tblSI.name
		, tblSI.[type_desc]
		, tblSI.[is_primary_key]
		
ORDER BY 
		  quotename(tblSS.name)
		, tblSO.[name]
		, tblSI.name


Output

memoryallocatedandinusedbyobjectandindex

Programmable Objects – sys.sql_modules

Overview

In the sample code below, we identify natively compiled programmable objects by checking the uses_native_compilation column in the sys.sql_modules view.

Code



/*

	sys.sql_modules

		execute_as_principal_id

			Value of -2 indicates that the batch submitted does not depend on implicit name resolution and can be shared among different users. 
			This is the preferred method. Any other value represents the user ID of the user submitting the query in the database.

*/

select 
		  [object]
			= quoteName(tblSS.[name])
				+ '.'
				+ quoteName(object_name(tblSSM.[object_id]))

		, [type]
			= tblSO.type_desc

		, [createDate]
			= tblSO.[create_date]

		, [isSchemaBound]
			= case
					when tblSSM.is_schema_bound = 1 then 'Y'
					else 'N'
			  end	

		, [principal]
			= case
			 
					when ( tblSSM.execute_as_principal_id is null ) 
						then tblSS.[name]
			  
					when  ( tblSSM.execute_as_principal_id = -2) 
						then '--Shared--'

					else user_name(tblSSM.execute_as_principal_id)

						  
			  end			 
		      
		, [definition]
			= tblSSM.[definition]

from  sys.objects tblSO

inner join sys.schemas tblSS

		on tblSO.[schema_id] = tblSS.[schema_id]

inner join sys.sql_modules tblSSM

		on tblSO.[object_id] = tblSSM.[object_id]

where tblSSM.[uses_native_compilation] = 1

Output

identifynativecompiledobjects

Loaded Modules – sys.dm_os_loaded_modules

Overview

In the sample code below, we identify In Memory Objects by reviewing loaded OS Modules.

The relevant DMVs are sys.dm_os_loaded_modules and sys.dm_os_virtual_address_dump.

 

Code



; with cteOSLM
(
	  [modulename]
	, [description]
    , [base_address]
	, [filenameFull]
	, [filename]
	, [filenameMassaged]
)
as
(

	SELECT
			  [name]

			, [description]

			, [base_address]

			, [filenameFull] = tblOSLM.[name]

			, [filename]
				= reverse(left(reverse(tblOSLM.[name]),
                    charindex('\',reverse(tblOSLM.[name]), 1) - 1))

			, [filenameMassaged]
				= replace(
							replace(
									 reverse(left(reverse(tblOSLM.[name]),
										charindex('\',reverse(tblOSLM.[name]), 1) - 1))
										, '_'
									, '.'
									)
							, '.dll'
							, ''
						)				

	
	FROM   sys.dm_os_loaded_modules tblOSLM 

	WHERE  tblOSLM.[description] = 'XTP Native DLL'

	
)

, cteOSLMObject
(
	  [modulename]
	, [description]
    , [base_address]
	, [filenameFull]
	, [filename]
	, [filenameMassaged]

	, [objectID] 
	, [databaseID] 
	, [objectType]

)
as
(

	SELECT 

		  	  [modulename]
			, [description]
			, [base_address]
			, [filenameFull]
			, [filename]
			, [filenameMassaged]


			, [objectID] 
				= PARSENAME([filenameMassaged], 1)

			, [databaseID] 
				= PARSENAME([filenameMassaged], 2)
			
			, [objectType]
				= case PARSENAME([filenameMassaged], 3)
						when 't' then 'Table'
						when 'p' then 'Procedure'
						when 'f' then 'Function'
						else PARSENAME([filenameMassaged], 3)
				  end 

	from   cteOSLM

)
, cteVirtualAddress
(
	  [region_allocation_base_address]
	, [regionSizeInBytes]
)
as
(
	select 
			  tblOSVAD.[region_allocation_base_address]
			, [regionSizeInBytes]
				= sum(tblOSVAD.region_size_in_bytes)
	
	from   sys.dm_os_virtual_address_dump tblOSVAD
	
	group by
	 
			tblOSVAD.[region_allocation_base_address]

)	 
SELECT 

		  tblOSLM.[description]

		, tblOSLM.[modulename]

		, tblOSLM.[filename]


		, [database]
			= case
				when tblOSLM.[databaseID] = 32767 then 'Resource DB'
				else db_name(tblOSLM.[databaseID])
			   end

		, [objectName]
			= quoteName
				(
				    object_schema_name
					(
					  tblOSLM.objectID
					, tblOSLM.databaseID
					)
				)
				+ '.'
				+ quoteName
				(
					object_name
					(
					   tblOSLM.objectID
					 , tblOSLM.databaseID
					)
				)

		, tblOSLM.[objectType]

		, [sizeInKB]
			= (tblOSVAD.[regionSizeInBytes])
				/ ( 1024 )

FROM   cteOSLMObject tblOSLM 

INNER JOIN cteVirtualAddress tblOSVAD

		on tblOSLM.[base_address] = tblOSVAD.[region_allocation_base_address]

order by

			  [database]
			, [objectName]


Output

osloadedmodules

 

Explanation

  1. We can see that the SQL Server engine generates Database Object specific modules (DLLs)
  2. The modules are aptly named once one knows where to look
  3. And, they are very small in size

 

Summary

In summary, to identify In Memory Objects we have a couple of tracks we can take.

Those pathways include:

  1. Checking Table – sys.tables
    • Column :- is_memory_optimized
  2. Checking objects that sit on a Memory Optimized Filegroup by relying on sys.filegroups.

We reviewed usage by digging into sys.dm_db_xtp_object_stats and sys.dm_db_xtp_index_stats.

We also touched on memory consumed by In-Memory objects and indexes by exploring the sys.dm_db_xtp_memory_consumers view.

To identify natively compiled objects, we narrowed in on the uses_native_compilation column of the sys.sql_modules view.

To dig a bit deeper and identify the OS modules that are produced once In-Memory objects are compiled, we looked into sys.dm_os_loaded_modules; and to size them up, we checked the region_size_in_bytes column in the sys.dm_os_virtual_address_dump view.

 

Source Code Repository

GitHub

Blogs are not the place to keep code, and so for easier and more polished consumption the code has been shipped out to GitHub.

Here is the Repository.

 

Dedication

Again, the good thing about blogging is that one publicly acknowledges one's sources of information.

Here is mine:

  1. SqlHints.com, Basavaraj Biradar
    • Working with In-Memory OLTP (a.k.a. Hekaton) enabled Databases, Memory Optimized Tables and Natively Compiled Stored Procedures and it’s Internals with extensive list of Examples – Sql Server 2014
      Link
  2.  Help: SQL Server, Balmukund
    • A-Z of In-Memory OLTP : Behind the scenes
      Link

 

References

Microsoft

    1. Dynamic Management Views and Objects
      • Transact-SQL Reference (Database Engine) > System Views (Transact-SQL) > Dynamic Management Views and Functions (Transact-SQL)
        Memory-Optimized Table Dynamic Management Views (Transact-SQL)
        Link

        • sys.dm_db_xtp_memory_consumers (Transact-SQL)
          Link
        • sys.dm_db_xtp_index_stats ( Transact SQL )
          Link
        • sys.dm_db_xtp_object_stats ( Transact-SQL )
          Link
        • sys.memory_optimized_tables_internal_attributes ( Transact-SQL )
          Link
    2. Developer Network
      • Database Features > In-Memory OLTP (In-Memory Optimization)  > Memory-Optimized Tables
        Native Compilation of Tables and Stored Procedures
        Link

Technical: Microsoft – SQL Server – v2014 (In Memory Table) – Benchmark Simple Queries

 

 


 

Background

Comparing Simple SQL Server queries against traditional and in-memory tables.

 

Generate Data to query for

We would like our test queries to request data that is representative of our existing data. And, as such, we will get a sampler of existing data.

This is easy to do once one discovers the MS SQL Server tablesample keyword.
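
Before wiring it into bcp, the sampling clause can be tried on its own; a sketch (tablesample is page-based, so the returned row count is approximate):

```sql
-- Randomly sample roughly 100,000 rows from the source table
select	  tblPerson.[firstname]
		, tblPerson.[lastname]
		, tblPerson.[emailAddress]

from   [dbo].[personTraditionalClustered] tblPerson

	   tablesample (100000 rows)
```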

 

bcp "select distinct top 100000 quotename(firstname, '\"') as firstname, quotename(lastname, '\"') as lastname, quotename(emailaddress, '\"') as emailAddress, quotename(convert(varchar(10), dateofBirth, 101) , '\"')  as dateofBirth from   [dbo].[personTraditionalClustered] tablesample (100000 rows)  with (nolock)" queryout e:\tmp\sampleData.txt -c  -S (local) -d DBLabInMemory -T -t ","


 

Please note the following:

  • Quest Benchmark Factory likes for strings to be quoted, and as such we will use quotename to specify that our chosen string and date columns be surrounded in double quotes
  • We are using tablesample (100000 rows) to randomly get roughly one hundred thousand rows

 

If your strings are not quoted, you might get an error that resembles the following.

IfFileContentsAreNotQuoted

 

Neither descriptive nor informative, but an error is an error.

 

Sample Data



"Ezekiel","Brennan","Wallace@risus.edu","01/10/1994"
"Henry","Crane","Craig@ut.org","04/23/1963"
"Ulla","Workman","Chaney@vestibulum.org","10/10/1955"
"Ralph","Gamble","Avye@erat.edu","06/03/1929"
"Diana","Vang","Ori@mattis.gov","07/31/1921"
"Kasper","Sanford","Ori@morbi.edu","06/05/1975"
"Nicholas","Daniel","Joshua@posuere.net","01/19/1903"
"Erich","Howe","Kirestin@placerat.com","07/24/1969"
"Piper","Hill","Austin@id.gov","12/30/2012"
"Baxter","Key","Channing@placerat.org","01/29/1900"


 

Prepare Microsoft SQL Server

There are a couple of areas that we should cover prior to load testing.

 

Microsoft SQL Server Stored Procedures

Here are the Stored Procedures that we will use to retrieve data.

Stored Procedure – Traditional


if object_id('dbo.usp_Traditional_getPerson_FilteredOnName') is not null
begin
	drop procedure dbo.usp_Traditional_getPerson_FilteredOnName
end
go

create procedure dbo.usp_Traditional_getPerson_FilteredOnName
(
	@firstname varchar(40)
)
as
begin

	select
			   [firstname]
			  ,[lastname]
			  ,[emailAddress]
			  ,[phoneNumber]
			  ,[addressStreet]
			  ,[addressCity]
			  ,[addressState]
			  ,[addressPostalCode]
			  ,[dateofBirth]
			  ,[uniqueID]
		from   [dbo].[personTraditionalClustered] tblPerson
		where  tblPerson.firstname = @firstname


end

go 

Stored Procedure – dbo.usp_Traditional_getPerson_FilteredOnUniqueID


if object_id('dbo.usp_Traditional_getPerson_FilteredOnUniqueID') is not null
begin
 drop procedure dbo.usp_Traditional_getPerson_FilteredOnUniqueID
end
go

create procedure dbo.usp_Traditional_getPerson_FilteredOnUniqueID
(
     @uniqueID uniqueIdentifier
)
as
 select 
    
       [firstname]
      ,[lastname]
      ,[emailAddress]
      ,[phoneNumber]
      ,[addressStreet]
      ,[addressCity]
      ,[addressState]
      ,[addressPostalCode]
      ,[dateofBirth]
      ,[uniqueID]

 from [dbo].[personTraditionalClustered] tblPerson

 where tblPerson.UniqueID = @uniqueID


go


Stored Procedure – Memory Optimized



if object_id('dbo.usp_InMemory_getPerson_FilteredOnName') is not null
begin
	drop procedure dbo.usp_InMemory_getPerson_FilteredOnName
end
go

create procedure dbo.usp_InMemory_getPerson_FilteredOnName
(
	@firstname varchar(40)
)
with native_compilation, schemabinding, execute as owner
as

	begin atomic
	with (transaction isolation level=snapshot, language=N'us_english')


	select
			   [firstname]
			  ,[lastname]
			  ,[emailAddress]
			  ,[phoneNumber]
			  ,[addressStreet]
			  ,[addressCity]
			  ,[addressState]
			  ,[addressPostalCode]
			  ,[dateofBirth]
			  ,[uniqueID]
		from   [dbo].[personInMemory] tblPerson
		where  tblPerson.firstname = @firstname


end

go




 

Stored Procedure – dbo.usp_InMemory_getPerson_FilteredOnUniqueID


if object_id('dbo.usp_InMemory_getPerson_FilteredOnUniqueID') is not null
begin
     drop procedure dbo.usp_InMemory_getPerson_FilteredOnUniqueID
end
go

create procedure dbo.usp_InMemory_getPerson_FilteredOnUniqueID
(
     @uniqueID uniqueIdentifier
)
with native_compilation, schemabinding, execute as owner
as

 begin atomic
 with (transaction isolation level=snapshot, language=N'us_english')


      select 
          [firstname]
         ,[lastname]
         ,[emailAddress]
         ,[phoneNumber]
         ,[addressStreet]
         ,[addressCity]
         ,[addressState]
         ,[addressPostalCode]
         ,[dateofBirth]
         ,[uniqueID]
     from [dbo].[personInMemory] tblPerson
     where tblPerson.UniqueID = @uniqueID

end

go




Quest Benchmark Factory

Scenarios

In Quest Benchmark Factory, we reference the generated sample data file by using $BFFileArray. Please keep in mind that it allows us to reference each column individually by specifying its position in the file.

 

Scenario / SQL

SQLStmt_FirstName_Traditional
select *
from [dbo].[personTraditionalClustered] tblPerson
where tblPerson.firstname = '$BFFileArray("E:\tmp\sampleData.txt",RANDOM,1)'

SQLStmt_FirstName_InMemory
select *
from [dbo].[personInMemory] tblPerson
where tblPerson.firstname = '$BFFileArray("E:\tmp\sampleData.txt",RANDOM,1)'

SQLStmt_UniqueID_Traditional
select *
from [dbo].[personTraditionalClustered] tblPerson
where tblPerson.uniqueID = '$BFFileArray("E:\tmp\sampleData.txt",RANDOM,5)'

SQLStmt_UniqueID_InMemory
select *
from [dbo].[personInMemory] tblPerson
where tblPerson.uniqueID = '$BFFileArray("E:\tmp\sampleData.txt",RANDOM,5)'

SQLSP_FirstName_Traditional
exec dbo.usp_Traditional_getPerson_FilteredOnName
     @firstname = '$BFFileArray("E:\tmp\sampleData.txt",RANDOM,1)'

SQLSP_FirstName_InMemory
exec dbo.usp_InMemory_getPerson_FilteredOnName
     @firstname = '$BFFileArray("E:\tmp\sampleData.txt",RANDOM,1)'

SQLSP_UniqueID_Traditional
exec dbo.usp_Traditional_getPerson_FilteredOnUniqueID
     @uniqueID = '$BFFileArray("E:\tmp\sampleData.txt",RANDOM,5)'

SQLSP_UniqueID_InMemory
exec dbo.usp_InMemory_getPerson_FilteredOnUniqueID
     @uniqueID = '$BFFileArray("E:\tmp\sampleData.txt",RANDOM,5)'

 

 

Latencies

Here is how we configured latency for each of our Transactions:

LoadSetup-Latency

Explanation:

  • We are set up for zero delay (No Delay).  In many cases one would add Think Time and the like; but since the Freeware version of Quest Benchmark Factory caps us at 20 users, rather than the 100 users the full product allows, we wanted to stress the system a bit

 

 

 

SQL Server – Benchmark – Profiler

In certain cases, it is important to review the client Application interaction with SQL Server and glean how the queries are being submitted.

SQL Statements

SqlServerProfiler-SelectStatement

 

Stored Procedure Statements

SQLServerProfiler-StoredProcedureExecutionThroughExecStatement

 

SQL Server – Benchmark – Results

Here are the results of running the benchmark.

Summary

QuestBenchMark-ResultsSummary

Graph

Response Time vs Userload

 

QuestBenchmark-SQLServerCompare

Tabular

 

ID  Scenario                        Rank (1 = best)   Response Time at 20 Userload
1   SQLStmt_FirstName_Traditional   6                 31 milliseconds
2   SQLStmt_FirstName_InMemory      5                 27 milliseconds
3   SQLStmt_UniqueID_Traditional    2                 3 milliseconds
4   SQLStmt_UniqueID_InMemory       4                 6 milliseconds
5   SQLSP_FirstName_Traditional     6                 31 milliseconds
6   SQLSP_FirstName_InMemory        8                 38 milliseconds
7   SQLSP_UniqueID_Traditional      3                 4 milliseconds
8   SQLSP_UniqueID_InMemory         1                 0 milliseconds

 

 

Explanation:

  • The queries that filter on the UniqueID column, a uniqueidentifier, are the fastest
  • On the traditional table we are able to create clustered indexes, and we created one on the UniqueID column.
  • The query on the in-memory table, against the nonclustered hash index, did better than the one against the traditional table with a clustered index
  • Due to our tool’s usage of prepared statements, we did not pay a price on SQL Statements

 

SQL Server – Query Plan

Fetch on Name

Query

Here is what our query on name looks like:

declare @firstname varchar(40)

set @firstname = 'David'

exec dbo.usp_Traditional_getPerson_FilteredOnName
	   @firstname = @firstname

exec dbo.usp_InMemory_getPerson_FilteredOnName
	   @firstname = @firstname

 

Estimated Execution Plan

Traditional Table:

FecthOnNameAgainstTraditional

 

In-Memory Table:

FetchOnNameAgainstInMemoryTable

 

Explanation:

  • The query plan for in-memory is a bit simpler
  • For In-Memory, it consists of a single index seek
  • For Traditional, it does an index-seek and a clustered-index lookup to get the additional columns we are querying for, and a nested loops inner join to combine the results of the two
  • Interestingly, the traditional query is also suggesting that we create a covering index to avoid the two index retrievals
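Such a covering index might be sketched as follows (illustrative only; the exact key and INCLUDE column list should be taken from the actual missing-index recommendation):

```sql
-- Hypothetical covering index for the name lookup.
-- The INCLUDE columns carry the selected fields, so the
-- clustered-index lookup (and the nested loops join) goes away.
CREATE NONCLUSTERED INDEX INDX_personTraditionalClustered_firstname_covering
    ON [dbo].[personTraditionalClustered] ([firstname])
    INCLUDE
    (
          [lastname], [emailAddress], [phoneNumber]
        , [addressStreet], [addressCity], [addressState]
        , [addressPostalCode], [dateofBirth], [uniqueID]
    );
```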

Statistics I/O

Table 'personTraditionalClustered'. Scan count 1, logical reads 3554, 
physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical 
reads 0, lob read-ahead reads 0.

Explanation

  • The traditional table registers I/O metrics, but not so for in-memory tables
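The figures above can be reproduced by enabling STATISTICS IO before invoking the procedures; a sketch:

```sql
SET STATISTICS IO ON;

exec dbo.usp_Traditional_getPerson_FilteredOnName @firstname = 'David';
exec dbo.usp_InMemory_getPerson_FilteredOnName    @firstname = 'David';

SET STATISTICS IO OFF;
```

Only the traditional table produces a "Table ... logical reads" line; memory-optimized table access bypasses the buffer pool.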

 

 

Fetch on UniqueID

Query

Here is what our query on UniqueID looks like:

declare @uniqueID uniqueIdentifier

set @uniqueID = newid()

exec dbo.usp_Traditional_getPerson_FilteredOnUniqueID
	   @uniqueID = @uniqueID

exec dbo.usp_InMemory_getPerson_FilteredOnUniqueID
	   @uniqueID = @uniqueID

 

Estimated Execution Plan

Traditional Table:

FecthOnUniqueIDAgainstTraditional

 

In-Memory Table:

FecthOnUniqueIDAgainstInMemory

 

Explanation:

  • The query plans are very similar
  • For the traditional table, we have a clustered index seek
  • For the In-Memory table, we have an index seek
  • We are not so sure that SQL Server is properly costing In-Memory queries

 

Statistics I/O

Table 'personTraditionalClustered'. Scan count 1, logical reads 3, 
physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical 
reads 0, lob read-ahead reads 0.

Explanation

  • The traditional table registers a very small I/O metric, but not so for in-memory tables

 

 

SQL Server – Wait Stats

In-Memory

Activity Monitor

For in-memory selects, here are our wait stats:

ResourceWaits

 

Explanation:

  1. Table access is performed via SQLCLR
  2. Return of selected data is noted as Network I/O

 

Dynamic Management View

Using the query borrowed from https://danieladeniji.wordpress.com/2013/06/19/technical-microsoft-sql-server-wait-stats-cx-packet/, here is the raw wait type data:

waitStatsDMV

Explanation:

  1. Again, CLR_AUTO_EVENT
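For reference, a minimal query against sys.dm_os_wait_stats, in the spirit of the borrowed query, could look like this sketch:

```sql
-- Top waits by accumulated wait time
-- (instance-wide, since the last restart or stats clear)
select top 10
       wait_type
     , waiting_tasks_count
     , wait_time_ms
     , signal_wait_time_ms
from   sys.dm_os_wait_stats
order by wait_time_ms desc;
```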

 

Summary

Queries targeting a UniqueIdentifier datatype column seem to run consistently better than those targeting a varchar datatype column.

For in-memory tables, we can only create indexes on BIN2-collated character columns.  As binary means binary, and so does not support case-insensitive searches, workarounds will have to be understood and diligently employed for actual business use-cases.
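One commonly discussed workaround, sketched below with hypothetical names, is to persist an upper-cased shadow copy of the search column, index that BIN2 copy, and have callers upper-case the search argument:

```sql
-- Hypothetical: firstnameSearch holds the upper-cased name,
-- indexed with the required BIN2 collation.  Callers search with
-- an upper-cased argument, so the binary comparison behaves
-- case-insensitively.
CREATE TABLE dbo.personInMemoryCI
(
      firstname        varchar(40) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
    , firstnameSearch  varchar(40) COLLATE Latin1_General_100_BIN2      NOT NULL
        INDEX idxFirstnameSearch NONCLUSTERED
    , uniqueID         uniqueidentifier NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- e.g.  where tbl.firstnameSearch = UPPER(@firstname)
```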

In a follow-up posting, we would like to compare In-Memory searches where an index is available against ones where no index has been created.

I think our tool choice is not fully supporting Stored Procedure calls, and is instead treating them as Prepared calls.

 

References

References – Microsoft – SQL Server – Memory Optimized DB

References – Quest Benchmark Factory

References – Microsoft – SQL Server – Transact SQL – Table Sample

 

 

 

Technical: Quest Software – Benchmark Factory for Databases

 

Technical: Quest Software – Benchmark Factory for Databases – Use Case Scenario – Quick Evaluation of MS SQL Server v2012 – InMemory Database – Simple Table and Programmable Objects

 

Background

Wanted a quick familiarization with MS SQL Server v2012 (In-Memory Database) tables and programmable objects.

 

Tools

There are a couple of tools in the market.  Our choices include:

  • Microsoft – RML Utilities – ostress.exe
  • Quest Software – Benchmark factory for Databases
  • Quest Software – Benchmark factory for Databases Freeware

As this is for home evaluation, I was happy to find out that Quest has made a freeware version available.

In this post, we will focus our attention on that tool.

 

Download

Downloaded “Benchmark Factory for Databases Freeware” from http://www.toadworld.com/m/freeware/555.aspx .  Please keep in mind that you need a Quest Account.  As it is easy and free to get one, please register if you do not have an existing account.

Depending on when you attempt the download, the exact download link might have changed.  If so, I suggest that you visit http://www.toadworld.com/products/benchmark-factory/default.aspx, the more general product web site.

 

Environment Setup – Microsoft SQL Server

On a MS SQL Server 2014 Instance, let us set up our SQL database objects.

 

Add File Group – In Memory

Check sys.filegroups and see if there are any entries with type = FX. If not, then create one.


set nocount on
set noexec off
go

use [DBLabInMemory]
go

/*
  Check sys.filegroups for type=FX -- HasMemoryOptimizedObjects
*/
if exists
	(
		select top 100 percent
				   tblFileGroup.name
				,  tblFileGroup.[type] 
				, tblFileGroup.type_desc
		from   sys.filegroups tblFileGroup
		where  tblFileGroup.[type] = 'FX'
		order  by tblFileGroup.[type]
	)
begin
	print 'Database - FileGroup - fileGroupInMemoryFileGroup exists!'
	set noexec on
end
go


ALTER DATABASE [DBLabInMemory]
	ADD FILEGROUP fileGroupInMemoryFileGroup
		CONTAINS MEMORY_OPTIMIZED_DATA 
		;

go

set noexec off
go



 

 

Add In-Memory File to File Group (Memory_OPTIMIZED_DATA)

Add in-memory file to file group…


set nocount on
set noexec off

use [DBLabInMemory]
go


if not exists
 (
 select *
 from master.sys.sysdatabases
 where name = 'DBLabInMemory'
 )
begin
    print 'Database - DBLabInMemory does not exist!'
    raiserror( 'Database - DBLabInMemory does not exist!', 16, 1)
    set noexec on
end
go


if exists
 (
     select *
     from sys.master_files
     where database_id = db_id() --db_id('DBLabInMemory')
     and (
                (name = 'datafile_InMemory_0001')
             or (physical_name = 
 'E:\Microsoft\SQLServer\DatafilesInMemory\DBLabInMemory\datafile___InMemory__0001.ndf'
                )
          )
   )
begin

 print 'Database - DBLabInMemory -- file already exists!'
 raiserror( 'Database - DBLabInMemory -- file already exists!', 16, 1)
 set noexec on
end
go

ALTER DATABASE [DBLabInMemory]
 ADD FILE 
 (
    name='datafile_InMemory_0001'
 , filename='E:\Microsoft\SQLServer\DatafilesInMemory\DBLabInMemory\datafile___InMemory__0001.ndf'
 ) 
 TO FILEGROUP [fileGroupInMemoryFileGroup]


go

set noexec off
go
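To confirm the container was added, a quick check such as the following sketch can be run; memory-optimized containers surface in sys.database_files with a FILESTREAM type:

```sql
use [DBLabInMemory]
go

select name, physical_name, type_desc
from   sys.database_files
where  type_desc = 'FILESTREAM';
```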


 

Create Traditional Table – dbo.customerTraditional



USE [DBLabInMemory]
GO


CREATE TABLE [dbo].[CustomerTraditional]
(
	[ID] [int] NULL,
	[Firstname] [varchar](50) NULL,
	[Lastname] [varchar](50) NULL,
	[EmailAddress] [varchar](50) NULL,
	[Country] [varchar](50) NULL,
	[IPAddress] [varchar](20) NULL,
	[Company] [varchar](50) NULL,
	[City] [varchar](50) NULL,
	[PhoneWork] [varchar](50) NULL,
	[StreetAddress] [varchar](50) NULL,
	[Username] [varchar](50) NULL,
	[UniqueID] [varchar](40) NOT NULL,
	 CONSTRAINT [PK_UniqueID] PRIMARY KEY CLUSTERED 
	(
		[UniqueID] ASC
	)WITH (
                  PAD_INDEX = OFF
                , STATISTICS_NORECOMPUTE = OFF
                , IGNORE_DUP_KEY = OFF
                , ALLOW_ROW_LOCKS = ON
                , ALLOW_PAGE_LOCKS = ON
              )
	      ON [PRIMARY]

) ON [PRIMARY]

GO


 

 

Create In-Memory Table – dbo.customerInMemory

Create the In-Memory table (dbo.customerInMemory)


USE [DBLabInMemory]
GO


CREATE TABLE [dbo].[CustomerInMemory]
(
 [ID] [int] NULL,
 [Firstname] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
 [Lastname] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
 [EmailAddress] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
 [Country] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
 [IPAddress] [varchar](20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
 [Company] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
 [City] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
 [PhoneWork] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
 [StreetAddress] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
 [Username] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
 [UniqueID] [varchar](40) COLLATE Latin1_General_100_BIN2 NOT NULL,

    PRIMARY KEY NONCLUSTERED HASH 
      (
        [UniqueID]
      ) WITH ( BUCKET_COUNT = 256)

)
WITH  
   ( 
       MEMORY_OPTIMIZED = ON
    , DURABILITY = SCHEMA_AND_DATA 
   )

GO

SET ANSI_PADDING OFF
GO
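Note that the BUCKET_COUNT of 256 above is quite small for a load test; bucket counts should roughly match the expected number of unique keys.  Hash-index health can be reviewed afterwards with a sketch like:

```sql
-- Average/max chain lengths well above 1, together with few empty
-- buckets, suggest the bucket count is too low.
select object_name(stat.object_id) as tableName
     , stat.total_bucket_count
     , stat.empty_bucket_count
     , stat.avg_chain_length
     , stat.max_chain_length
from   sys.dm_db_xtp_hash_index_stats stat;
```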
 

 

 

Create Stored Procedure dbo.usp_InsertCustomerTraditional

SP – dbo.usp_InsertCustomerTraditional


USE [DBLabInMemory]
GO

if object_id('[dbo].[usp_InsertCustomerTraditional]') is null
begin
 exec ('create procedure [dbo].[usp_InsertCustomerTraditional] as begin print ''undefined'' end ')
end
GO


ALTER PROCEDURE [dbo].[usp_InsertCustomerTraditional]
(
 @ID int 
 ,@Firstname varchar(50)
 ,@Lastname varchar(50)
 ,@EmailAddress varchar(50)
 ,@Country varchar(50)
 ,@IPAddress varchar(20) = null
 ,@Company varchar(50) = null
 ,@City varchar(50) = null
 ,@PhoneWork varchar(50) = null
 ,@StreetAddress varchar(50) = null
 ,@Username varchar(50) = null
 ,@UniqueID varchar(50) = null
)
with execute as owner
as 
begin

 insert into [dbo].[CustomerTraditional]
 (
 [ID]
 ,[Firstname]
 ,[Lastname]
 ,[EmailAddress]
 ,[Country]
 ,[IPAddress]
 ,[Company]
 ,[City]
 ,[PhoneWork]
 ,[StreetAddress]
 ,[Username]
 ,[UniqueID]
 )
 values
 (
 @ID
 ,@Firstname
 ,@Lastname
 ,@EmailAddress
 ,@Country
 ,@IPAddress
 ,@Company
 ,@City
 ,@PhoneWork
 ,@StreetAddress
 ,@Username
 ,NEWID() --@UniqueID
 )

end

GO 
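A quick smoke test of the procedure, with made-up values (the procedure generates its own UniqueID via NEWID()):

```sql
exec [dbo].[usp_InsertCustomerTraditional]
      @ID           = 1
    , @Firstname    = 'Daniel'
    , @Lastname     = 'Adeniji'
    , @EmailAddress = 'daniel@example.com'
    , @Country      = 'USA';

select top 5 * from [dbo].[CustomerTraditional];
```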

 

 

Create Stored Procedure dbo.usp_InsertCustomerInMemory

SP – dbo.usp_InsertCustomerInMemory


USE [DBLabInMemory]
GO

if object_id('[dbo].[usp_InsertCustomerInMemory]') is not null
begin
  DROP PROCEDURE [dbo].[usp_InsertCustomerInMemory]
end
GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO


create procedure [dbo].[usp_InsertCustomerInMemory]
(
  @ID int 
 ,@Firstname varchar(50)
 ,@Lastname varchar(50)
 ,@EmailAddress varchar(50)
 ,@Country varchar(50)
 ,@IPAddress varchar(20) = null
 ,@Company varchar(50) = null
 ,@City varchar(50) = null
 ,@PhoneWork varchar(50) = null
 ,@StreetAddress varchar(50) = null
 ,@Username varchar(50) = null
 ,@UniqueID varchar(50) = null
)
with native_compilation, schemabinding, execute as owner
as 
begin atomic with
(transaction isolation level = snapshot,
language = N'English')

 
 insert into [dbo].[CustomerInMemory]
 (
  [ID]
 ,[Firstname]
 ,[Lastname]
 ,[EmailAddress]
 ,[Country]
 ,[IPAddress]
 ,[Company]
 ,[City]
 ,[PhoneWork]
 ,[StreetAddress]
 ,[Username]
 ,[UniqueID]
 )
 values
 (
 @ID
 ,@Firstname
 ,@Lastname
 ,@EmailAddress
 ,@Country
 ,@IPAddress
 ,@Company
 ,@City
 ,@PhoneWork
 ,@StreetAddress
 ,@Username
 ,NEWID() --@UniqueID
 )

end

GO



 

 

 

Installation

Installation is straightforward, and it is supported even on my old MS Windows 2003 x86 box.

 

Usage

Launch product

 

Create Scenario

 

Welcome

Here is the welcome screen:

Welcome

 

Select Load Scenario

As we are targeting a “Microsoft SQL Server” database, choose “Microsoft SQL Server” in the Database drop-down.

 

SelectLoadScenario__MicrosoftSQLServer

 

We have a basic choice of Test Types that we can run.  Our basic choices are:

  • Industry Standard benchmarks
  • Test Database scalability
  • Test SQL for scalability
  • Capture and Replay SQL Server workload

Here is Quest’s explanation for each Scenario:

Scenario Explanation
Industry Standard benchmarks Creates load scenarios from industry standard benchmarks (AS3AP, Scalable Hardware, TPC-C, TPC-B, TPC-D, TPC-E, TPC-H). This includes the steps to create/load all the required benchmark objects to execute the standard benchmark workloads. These synthetic workloads can be used when no real-world data is available to be imported for database load testing.
Test Database scalability Creates load scenarios to test database scalability using OLTP, database warehousing, and mixed workloads. This utilizes benchmark objects and transactions from the TPC-C and the TPC-H benchmarks.
Test SQL for scalability A SQL Scalability test compares the performance of SQL Statement variations under a load. Each transaction will execute individually for each user load for a specified number of executions or length of time.
Capture and Replay SQL Server workload Create a workload which replays a captured MSSQL workload from obtained trace file from a shared directory.

 

 

After a quick think, here is our understanding of the various items:

  • “Industry Standard benchmarks” / “Test Database Scalability”  — We would choose one of these options if we wanted to run a standard test against our server
    • In our case, we want a very simple test
    • Also, in its initial iteration, Microsoft In-Memory database is a hybrid product and does not have the option of implicitly defining all objects as in-memory
  • Test SQL for Scalability — supports our need of having small, well-defined tables, programmable objects, and SQL queries
  • Capture and Replay SQL Server workload — Choose this option if you have captured SQL Server Trace files using SQL Server Profiler or a server-side trace.

 

Please consider clicking the “Show Advanced” check-box if you would like to see and review additional options.

Again, we will choose “Test SQL for Scalability”.

Define Iteration Length

The next screen depends on which Scenario you chose earlier.  As we chose “Test SQL for Scalability”, we are now in the “Define Iteration Length” window.

DefineIterationLength_v2

 

We chose the “Number of Executions per iteration” option and changed from the default of 5 to 1000 executions.

This basically means that we will cycle through our workload 1000 times.

We will skip the step of adding new User Scenario / adding SQL Statement at this time.

Measurement Intervals

The “Measurement Intervals” screen basically defines our virtual workload users; in essence, the number of virtual users that we will have at every stage, keeping in mind that we ramp up gradually.

Before:

MeasurementIntervals_Before

 

After:

MeasurementIntervals

 

Select new job to

As always, choose a good, associative name for your job.  We chose “MicrosoftSQLServer-InMemory-SimpleTableAndProgrammableObjects“.

 

 

SelectNewJobTo_v2

 

Congratulations

Congratulations

 

 

Define Profile

Empty Job View

We are taken to an empty job view screen.  And, we will now define our Job.

JobsView

 

 

Define Profile

Profiles are basically targeted Database Connections.

Let us go define a new one and target our MS SQL Server Instance.

We initiate that process by clicking on the “Add Profile” main menu option.

 

Welcome

welcome

 

Driver Selection

We choose “Microsoft SQL Server (ODBC)”

DriverSelection

 

Connection Information

Select the Data Source Name, then enter the SQL Server username and password.

 

ConnectionInformation

 

Profile Name

Please enter a name for your Profile.  In our case, we chose MSSQLInMemory.

ProfileName

 

Congratulations

You can go back and review your selections, and once you’re good, please click on the Finish button.

Congratulations

 

Define Scenario

Let us return to defining our Scenario.

 

Specify Profile

In the “Job Setup” \ “General” tab, choose your target profile.

selectProfile

 

Load Setup \ Transactions

In the “Load Setup” \ “Transactions” tab, at this stage of our work, you will likely see an empty panel, free of any defined Transactions.

Empty Transactions Panel

LoadSetup-emptyPanel

 

Add Transactions

Right-click on the empty panel and, from the drop-down menu, choose the “New Transactions” option.

AddTransaction

 

SQL Statement
INSERT INTO [dbo].[CustomerTraditional]
(
[ID],[Firstname],[Lastname],[EmailAddress]
,[Country],[IPAddress],[Company]
,[City],[PhoneWork],[StreetAddress]
,[Username],[UniqueID]
)
select
$BFRand(1000000)
, '$BFLeft($BFFirstName(),20)'
, '$BFLeft($BFLastName(),20)'
, '$BFLeft($BFEMail(), 50)'
, '$BFCountry()'
, null
, '$BFLeft($BFCompany(),50)'
, '$BFLeft($BFCity(), 50)'
, '$BFLeft($BFPhone(), 50)'
, '$BFLeft($BFAddress(), 50)'
, '$BFLeft($BFFirstName(),20)'
, NEWID()

 

I suggest that you use the “Execute Statement” button to execute and validate the SQL.

 

Real World Latencies

Here we specify the latency between each transaction

RealWorldLatencies

 

Congratulations

Click OK to continue

Congratulations

Add Transactions – Stored Procedure – Invocation of traditional Stored Procedure

Right-click on the newly added Transaction and, from the drop-down menu, choose the “New Transactions” option.

Here we invoke our Stored Procedure (dbo.usp_InsertCustomerTraditional):

SQL Statement
exec dbo.usp_InsertCustomerTraditional
$BFRand(1000000)
, '$BFLeft($BFFirstName(),20)'
, '$BFLeft($BFLastName(),20)'
, '$BFLeft($BFEMail(), 50)'
, '$BFCountry()'
, null
, '$BFLeft($BFCompany(),50)'
, '$BFLeft($BFCity(), 50)'
, '$BFLeft($BFPhone(), 50)'
, '$BFLeft($BFAddress(), 50)'
, '$BFLeft($BFFirstName(),20)'
, null -- uniqueID
Add Transactions – Sql Statement – Insert into in-memory table

Right-click on the newly added Transaction and, from the drop-down menu, choose the “New Transactions” option.

 

SQL Statement
INSERT INTO [dbo].[CustomerInMemory]
(
[ID],[Firstname],[Lastname],[EmailAddress]
,[Country],[IPAddress],[Company]
,[City],[PhoneWork],[StreetAddress]
,[Username],[UniqueID]
)
select
$BFRand(1000000)
, '$BFLeft($BFFirstName(),20)'
, '$BFLeft($BFLastName(),20)'
, '$BFLeft($BFEMail(), 50)'
, '$BFCountry()'
, null
, '$BFLeft($BFCompany(),50)'
, '$BFLeft($BFCity(), 50)'
, '$BFLeft($BFPhone(), 50)'
, '$BFLeft($BFAddress(), 50)'
, '$BFLeft($BFFirstName(),20)'
, NEWID()
Add Transactions – Stored Procedure – Invocation of In-Memory Stored Procedure

Right-click on the newly added Transaction and, from the drop-down menu, choose the “New Transactions” option.

Here we invoke our Stored Procedure (dbo.usp_InsertCustomerInMemory):

SQL Statement
exec dbo.usp_InsertCustomerInMemory
$BFRand(1000000)
, '$BFLeft($BFFirstName(),20)'
, '$BFLeft($BFLastName(),20)'
, '$BFLeft($BFEMail(), 50)'
, '$BFCountry()'
, null
, '$BFLeft($BFCompany(),50)'
, '$BFLeft($BFCity(), 50)'
, '$BFLeft($BFPhone(), 50)'
, '$BFLeft($BFAddress(), 50)'
, '$BFLeft($BFFirstName(),20)'
, null -- uniqueID

 

 

Run Job

Once we have defined our transactions and attached our job to a profile (target database), we can run the job.

To do so we can use the Menu (Jobs \ Run)  or  use the top menu icon:

runjob

 

Review Job

We can review our running job via a few lenses:

  • Internally
  • SQL Server Profiler

Benchmark

Within the Jobs view, access the “Run Job” \ “Summary” tab.

As we are starting out, we will see that we are at our initial userload of 1.

Userload of 1

reviewRunningJob-Userload-1

Userload of 10

reviewRunningJob-Userload-10

As we have increased our userload to 10, we have new metrics:

  • Increased Transaction per second (TPS) from 0.97 to 9.83
  • Our average time is @ 0.007

Userload of 20

reviewRunningJob-Userload-20

 

As we have increased our userload to 20, we have new metrics:

  • Increased Transaction per second (TPS) to 17.24
  • Our average time is up a bit, to 0.123

 

Database Profiler

SQL Server Profiler

We will use SQL Server’s native profiling tool, SQL Server Profiler, to try to get an internal view of how our statements are being executed.

Here is what a traditional SQL Insert statement looks like.

SQLStatment-InsertIntoCustomerTraditional

 

Hopefully you will notice a few things:

  • Our SQL Statements are prepared
  • As the SQL Statements are being prepared, they will be a bit faster than unprepared SQL Statements
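For illustration, a prepared call typically arrives as an sp_prepexec / sp_execute pair; a hand-written equivalent (a sketch) would be:

```sql
declare @handle int;

-- Prepare and execute once; the plan is compiled and cached
-- against the returned handle.
exec sp_prepexec
       @handle output
     , N'@P1 varchar(50)'
     , N'select * from [dbo].[CustomerTraditional] where [Firstname] = @P1'
     , N'Daniel';

-- Re-execute cheaply with new parameter values.
exec sp_execute @handle, N'David';

exec sp_unprepare @handle;
```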

 

See Results

Depending on the complexity of your transactions, the number of transactions you would like to run, and the workload sizes, it might take a while to run your job.

To see the current results and past results, please access the “Job View” \ “See Results” panel.

To view reports, please choose the report and click on the viewReport icon.

You can also select the Report, right-click on your selection, and choose “Show Test Results”.

 

Results Summary

Our summary shows the best performing transaction is the “SQLStatementSP-InMemory”.

Transaction Time (baseline)

 

Results Summary – Transaction Time

We should dig deeper, and so we double-clicked on the graph above.

Result Summary Graph - Trasaction Time

 

The graph above demonstrates that In-Memory transactions have appreciably less transaction time than traditional transactions.

 

Results – Transactions per second

Transactions per second

 

As expected, as we increased the user load, we processed more transactions.

 

Results – Deadlocks

Thankfully, we are experiencing zero deadlocks.

Relative Summary Graph - Deadlocks

 

 

Report – Export To Word

Once you have a report up, you might want to export the report to Word.

To do so, please click on the “W” icon at the top of a report.

exportReportToWord

Unfortunately for us, we received an error message:

Textual:

Please make sure the Microsoft Word is installed correctly.  And, the active X control is enabled.

Image:

PleaseMakeSureTheMicrosoftWordIsInstalledCorrectly - Add the ActiveX control is enabled

 

I downloaded Microsoft Word Viewer.  Upon trying to install it, I ran into more problems.

Textual:

Please wait while Setup finishes determining your disk space requirements.

Image:

pleaseWait

 

To fix try:

wordview_en-us.exe /passive /norestart

This means that you will not be prompted for a destination folder, etc.

The install worked.

Returned to the report viewer, but still no go!  I must need the full version of the product.

 

Report – Export To PDF

Downloaded FoxIT Reader.  Installed the application and was able to return to Quest Benchmark Factory and print the report to the FoxIT printer.

SelectPrinter

 

Reviewed the generated PDF file.  But, as it is static, we are unable to dig deeper into the charts.

 

Source Control

GitHub

A few of the files have been pushed onto GitHub.

The URL:
https://github.com/DanielAdeniji/QuestBenchmarkFactoryMSSQLInMemorySimple

 

Summary

I am really very impressed with Quest Benchmark Factory.

It has a very rich toolset for generating test data.

And, it has a firm foundation for packaging the payload and applying it to a database.

And, it prepares very polished and applicable reports.

SQL Server – v2012 – In-Memory Database and Max Server Memory

Background

I upgraded one of my Lab Databases from v2008-R2 to v2012 a day or so ago.

As I started a Performance Test run against the instance, I noticed that I could no longer connect to the In-Memory database.

The SQL Instance itself was online; it was just the lone database that has the In-Memory table that was not.

 

Reviewed MS SQL Server Error Log

Reviewed the MS SQL Server Error Log and noticed quite a few helpful, relevant error lines.

 

Error Log Entries
[INFO] The SQL Server service does not have the SE_MANAGE_VOLUME_NAME privilege. Memory optimized checkpoint file operations may be slower, resulting in significant performance degradation.
Disallowing page allocations for database 'DBLabInMemory' due to insufficient memory in the resource pool 'default'. See 'http://go.microsoft.com/fwlink/?LinkId=330673' for more information.
Failed allocate page due to database memory pressure: FAIL_PAGE_ALLOCATION 8
[ERROR] Database ID: [42] 'DBLabInMemory'. Failed to redo log record at LSN 000000A2:00003940:0005. Error code: 0x83000000. (e:\sql12_main_t\sql\ntdbms\hekaton\sqlhost\sqlmin\hkhostdb.cpp : 1996 - 'RecoverHkDatabaseApplyTailOfTheLog')

 

 

Remediation Steps

 

Using Local Policy, Grant SE_MANAGE_VOLUME_NAME

What is the Service Account?

Launched the Services Applet and determined which Account SQL Server is “running as”:

ServiceAccount

 

… and it is “NT Service\MSSQLSERVER”.

 

Review Local Security Policies

LocalSecurityPolicy

 

We are interested in the following Policies:

  • Lock pages in memory
  • Perform volume maintenance tasks

 

Grant permissions to Service Account

 

Looked on the Net for how to grant Local Security Policies via the command line, and found good guidance from, who else, Kendra Little:

http://www.littlekendra.com/2009/11/12/automating-sql-local-security-policy-rights-posh-and-ntrights/

I do not even have to vet it, as I know Kendra is smart and she shares the truth.

 

Command Sample:

ntrights -u "NT Service\MSSQLSERVER" +r SeLockMemoryPrivilege

ntrights -u "NT Service\MSSQLSERVER" +r SeManageVolumePrivilege

Output:

ntrights -u "NT Service\MSSQLSERVER" +r SeLockMemoryPrivilege
     Granting SeLockMemoryPrivilege to NT Service\MSSQLSERVER ... successful

ntrights -u "NT Service\MSSQLSERVER" +r SeManageVolumePrivilege
    Granting SeManageVolumePrivilege to NT Service\MSSQLSERVER ... successful

… restart MS SQL Server and dependent services.

net stop mssqlserver /y
net start mssqlserver 

 

Changed Microsoft SQL Server – Max Server Memory

As I am not quite yet ready to dig into resource pool allocation and management, I took the easy route of reviewing whether I have a “max server memory” setting in place.  And I do, at a measly 400 MB.

As we are now using in-memory tables, I need, at minimum, enough memory to cover the in-memory infrastructure and database objects.

Let us start @ 1 GB.

exec sp_configure 'max server memory (MB)', 1000
go

reconfigure
go
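Note: 'max server memory (MB)' is an advanced option, so on some instances the sp_configure call above may first require:

```sql
-- Expose advanced options to sp_configure
exec sp_configure 'show advanced options', 1
go

reconfigure
go
```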

 

Brought Database Online

ALTER DATABASE [DBLabInMemory] SET ONLINE;
go