SQL Server – Unable to shrink Transaction Log file

Background

Received an alert stating that we had dropped below our 10% free-space threshold on one of our database servers.

Thank goodness, no late-night phone calls.

I only saw it on my way in to work this morning.

The good and the bad of smartphones.

TroubleShooting

Windows Desktop

Folder

Image

Explanation

  1. ppsivr_8.ldf is 28 GB

SQL Metadata

Outline

Here are the commands and dynamic management/system views (DMVs) that we will access:

  1. dbcc sqlperf(logspace)
  2. sys.database_files
  3. sys.databases
  4. sys.dm_exec_requests & sysprocesses

 

Review Transaction Log File Sizes

SQL


dbcc sqlperf(logspace)

Output

 

Explanation

  1. ppsivr
    • Log Size MB
      • 27,699 MB
      • 27 GB

Review Database File Sizes

SQL


-- replace with targeted database
use [database]

SELECT 
		  [name]

		, [type]
			= tblSDF.[type_desc]

		, [filename]
			= tblSDF.[physical_name]

		, [allocatedMB]
			= tblSDF.size / 128

		, [usedMB]
			= CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)
				/ 128

		, [availableSpaceMB]
				 = ( size - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) )
					/ 128

FROM sys.database_files tblSDF

order by
		tblSDF.size desc
;

Output

Review Database Transaction Log Re-usability State

SQL


declare @dbid   int
declare @dbname sysname

set @dbname = 'ppsivr'
set @dbid = db_id(@dbname)

select 
		    tblSD.[database_id]
		  , tblSD.[name]
		  , tblSD.[log_reuse_wait_desc]

from   sys.databases tblSD

where  (
			   (@dbid is null)

			or ( tblSD.[database_id] = @dbid)
		
	   )

Output

Explanation

  1. log_reuse_wait_desc
    • The database is not recycling its transaction log due to an ‘ACTIVE_TRANSACTION’ wait
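To chase down which session owns the blocking transaction, DBCC OPENTRAN can be run against the affected database; a minimal sketch (database name taken from the scenario above):

```sql
use [ppsivr]
go

-- Report the oldest active transaction, if any, including its SPID
-- and start time; tableresults makes the output queryable
dbcc opentran with tableresults, no_infomsgs
go
```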

Review Current DB Requests

Code



declare @dbid   int
declare @dbname sysname

set @dbname = 'ppsivr'
set @dbid = db_id(@dbname)

select 
		  tblSDER.[session_id]

		, [database] 
			= db_name(tblSDER.[database_id])

		, [openTransaction]
			= tblSP.[open_tran]

		, tblSDER.[status]

		, tblSDER.[command]

		, tblSDER.[percent_complete]

		, tblSDER.wait_type

		, tblSDER.last_wait_type

		, tblSDER.[reads]

		, tblSDER.[writes]

from   sys.dm_exec_requests tblSDER

inner join master..sysprocesses tblSP

	on tblSDER.[session_id] = tblSP.[spid]

where  (

	         ( tblSP.[open_tran] != 0 )

	      or ( tblSDER.[database_id] = @dbid)

      )



Output

Explanation

  1. User Transactions
    • It does not appear that we have user transactions currently using our targeted database
  2. Background
    • WAIT_XTP_OFFLINE_CKPT_NEW_LOG
      • It does not appear that WAIT_XTP_OFFLINE_CKPT_NEW_LOG updates the percent_complete column, and so it is a bit difficult to track its current progress

 

Remediation

Here are the things we tried:

  1. dbcc shrinkfile
  2. Take database offline and bring back online

Attempt to Shrink Transaction Log Files

SQL


use [ppsivr]
go

dbcc shrinkfile('ppsivr_log')
go

Output

Explanation

  1. We want to take a good look at Current Size and Used Pages
    • In our case they are the same
    • What is in our Transaction Log File?
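One way to answer that question is to peek at the most recent log records via fn_dblog. Note that fn_dblog is undocumented and unsupported, so treat this as an exploratory sketch:

```sql
use [ppsivr]
go

-- Sample the latest log records to see which operations and contexts
-- are consuming the transaction log
select top 20
	  [Current LSN]
	, [Operation]
	, [Context]
	, [Transaction ID]
	, [Log Record Length]
from  fn_dblog(null, null)
order by [Current LSN] desc
```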

 

Take Database Offline and bring it back online

Force termination of ongoing sessions

SQL


use master
go

ALTER DATABASE [ppsivr]
	SET SINGLE_USER WITH ROLLBACK IMMEDIATE
	;

alter database [ppsivr] set offline;

alter database [ppsivr] set online;

ALTER DATABASE [ppsivr]
	SET MULTI_USER WITH ROLLBACK IMMEDIATE
	;

 

Summary

But, nothing worked!

We will come back and talk about what worked.

References

  1. Microsoft
    • Support.Microsoft.com
      • FIX: Offline checkpoint thread shuts down without providing detailed exception information in SQL Server 2014
        Link

SQL Server – Measuring Transaction Log Utilization at Session Level

Background

In the last few posts, we touched on measuring File IO utilization.

This is yet another post on that trail.

 

Transaction Log

What is a Transaction Log?

Here is Microsoft’s definition of the Transaction Log:

Link
Every SQL Server database has a transaction log that records all transactions and the database
modifications made by each transaction.
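On SQL Server 2012 and later, sys.dm_db_log_space_usage offers a quick per-database utilization check; a minimal sketch:

```sql
-- Log size and usage for the current database
select
	  db_name(database_id)			as [database]
	, total_log_size_in_bytes / 1E6		as [totalMB]
	, used_log_space_in_bytes / 1E6		as [usedMB]
	, used_log_space_in_percent		as [used%]
from  sys.dm_db_log_space_usage
```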

What type of measurements are available

Here are some of the often used commands:

  1. Database
    • Command :- dbcc sqlperf(logspace)
    • Reset :- checkpoint; backup log
  2. Transaction Log Operation
    • Command :- fn_dblog
    • Reset :- checkpoint

 

 

Reset Transaction Log

Here are some of the often used commands:

  1. Checkpoint
    • For performance reasons, the Database Engine performs modifications to database pages in memory—in the buffer cache—and does not write these pages to disk after every change. Rather, the Database Engine periodically issues a checkpoint on each database. A checkpoint writes the current in-memory modified pages (known as dirty pages) and transaction log information from memory to disk and, also, records information about the transaction log.
    • The Database Engine supports several types of checkpoints: automatic, indirect, manual, and internal.
  2. Backup Log to NUL
    • Back up the transaction log to the NUL device, discarding it
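Put together, a reset between trial runs might look like the sketch below. Backing up to the NUL device discards the backup and breaks the log chain, so reserve it for test databases; [targetDatabase] is a placeholder:

```sql
use [targetDatabase]	-- placeholder; replace with the test database
go

-- Flush dirty pages and log information to disk
checkpoint;
go

-- Discard the log backup; breaks the log chain, test databases only
backup log [targetDatabase] to disk = 'NUL';
go
```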

 

 

Process

The task that we will be undertaking is loading a staging table.

And, our primary interest is to measure the amount of Transaction Log that is expended.

 

Outline

  1. Prepare Database for Transaction Log Profiling
    • Ensure that recovery mode is set for “full recovery”
    • Suspend any regular transaction level backup
    • Checkpoint
    • Perform a full database backup
  2. Load Data
    • Actual Steps
      • Truncate staging table
      • Load data into staging table
      • Measure Transaction Log Metrics
        • Overall Transaction Log Usage
          • Get overall Transaction Log usage using "dbcc sqlperf(logspace)"
        • Get Transaction Log Usage at Operation & Context Level
          • Query fn_dblog to get current database log metrics
          • Group results by Operation and Context Level
    • Cleanup steps
      • Checkpoint database
      • Take Transaction level backup
  3. Repeat loading data for each attempted load option
  4. Cleanup
    • Revert back to original recovery mode
    • Re-enable any regularly scheduled transaction backup

 

Code

DBCC SQLPERF

SQLPERF – Create Tracking Table


declare @tblSQLPerf TABLE
(
	  [id]			int not null identity(1,1)

	, [databaseName]	sysname not null

	, [logsizeMB]		decimal(30, 4) not null

	, [logSpaceUsed%]	decimal(20, 8) not null

	, [logSpaceUsed]
				as (
					([logsizeMB] * [logSpaceUsed%])
					 / 100
				  )

	, [status]	    smallint	   not null

)

SQLPERF – Load Tracking Table

 

DBCC SQLPERF(logspace) returns the log size in MB, and the percent used, for all databases on the SQL instance.


insert into @tblSQLPerf
(
	  [databaseName]
	, [logsizeMB]	
	, [logSpaceUsed%]	
	, [status]			
)
exec('dbcc sqlperf(logspace) with no_infomsgs')


 

SQLPERF – Review Tracking Table

Get stored “DBCC SQLPERF(logspace)” data for the specific databases that we are interested in.


--Review results of tracking table
declare @dbName sysname
set @dbName = db_name()	-- targeted database

select 
	      [src] = '@tblSQLPerf'
		, [rowNumber]
			= ROW_NUMBER() OVER 
				(
					ORDER BY tblSP.[id] ASC
				)
		, tblSP.databaseName
		, tblSP.logsizeMB
		, tblSP.logSpaceUsed
		, tblSP.[logSpaceUsed%]
		--, tblSP.*

from    @tblSQLPerf tblSP

where   (
			( tblSP.[databaseName]= @dbName)
		)
order by
		tblSP.[id] asc

fn_dblog – Create Tracking Table

We create a tracking table that will house fn_dbLog data.


declare @tblFNDBLog TABLE
(
	  [id]			int not null 
	, [operation]	        sysname not null
	, [context]		sysname not null
	, [count]		bigint     not null
	, [logSize]		bigint     not null
)

fn_dblog – Load Tracking Table

Issue fn_dblog against the current database.

We get a journal of all transaction log operations since last checkpoint.


insert into @tblFNDBLog
(
	  [id]
	, [operation]
	, [context]
	, [count]	
	, [logSize]	
)
select 
	          @id
		, [Operation]
		, [Context]
		, [cnt] = count(*)
		, [logSize]
			= sum([log record length])

from fn_dblog (NULL, NULL)

group by
	    [Operation]
	  , [Context]

 

 

fn_dbLog – Review Tracking Table – Count

Get stored “fn_dbLog” data.

Aggregated by Operation and Context.

First dataset that we project is count.

The number of times recorded for each grouping ( Operation and Context).

Data was captured in rows and we have to use pivot to display as columns.

 


select 
		  [src] = 'fn_dblog//Count'
		, [operation]
		, [context]
		, [0]
		, [1]
		, [2]
		, [3]

from 
(

	SELECT [operation], [context], [id], [count]
	FROM   @tblFNDBLog

) AS SourceTable

PIVOT
(
	AVG([count])
	FOR [id] IN 
		(
			[0], [1], [2], [3]
		)

) AS pvt

order by

	isNull
	(
		  [0]
		, 0
	) 
	+ isNull
	(
		  [1]
		, 0
	) 
	+ isNull
	(
		  [2]
		, 0
	) 
	+ isNull
	(
		  [3]
		, 0
	) 

	desc


 

fn_dbLog – Review Tracking Table – Size

Get stored “fn_dbLog” data.

Aggregated by Operation and Context.

Second dataset that we project is size.

We sum up the size of transaction log uptake per Operation & Context.

Again, data was captured in rows and we have to use pivot to display as columns.

 


select 
		  [src] = 'fn_dblog//logSize'
		, [operation]
		, [context]
		, [0]
		, [1]
		, [2]
		, [3]

from 
(

	SELECT [operation], [context], [id], [logSize]
	FROM   @tblFNDBLog

) AS SourceTable

PIVOT
(
	sum([logSize])
	FOR [id] IN 
		(
			[0], [1], [2], [3]
		)

) AS pvt

order by

	isNull
	(
		  [0]
		, 0
	) 
	+ isNull
	(
		  [1]
		, 0
	) 
	+ isNull
	(
		  [2]
		, 0
	) 
	+ isNull
	(
		  [3]
		, 0
	) 

	desc


 

Database Backup

Database Backup – Transaction Log

Here we review the transaction log backup metrics.

After each trial run we took a transaction log backup.


SELECT

		 [database]
			= tblBS.database_name  

	   , [TSStart]
			= tblBS.backup_start_date  

	   , [TSEnd]
			= tblBS.backup_finish_date

		, [durationInSec]
			= datediff
				(
					  second
					, tblBS.backup_start_date 
					, tblBS.backup_finish_date
				)

	   , [backupSizeInBytes]
			= tblBS.backup_size  

	   , [backupSizeInMegaBytes]
			= ( tblBS.backup_size / 1E6)

	   , [compressedBackupSizeInBytes]
			= tblBS.compressed_backup_size

	   , [compressedBackupSizeInMegaBytes]
			= (
				tblBS.compressed_backup_size
				/ 1E6
			  )	

		, [compressionRatio]
				= cast
					(
						(
						tblBS.compressed_backup_size
							/ NULLIF(tblBS.backup_size, 0)
						)
						as decimal(10, 4)
					)

	   , tblBMF.logical_device_name  

	   , tblBMF.physical_device_name   

FROM  msdb.dbo.backupset tblBS

left outer join msdb.dbo.backupmediafamily tblBMF

		ON tblBS.media_set_id  = tblBMF.media_set_id 

where  tblBS.[database_name] = db_name()

and    tblBS.[type] = 'L' 

--and    tblBS.backup_start_date > @dateStartedPostFullBackup

order by 
		tblBS.backup_start_date asc

Table Sizes

sp_spaceused – Create tracking table


declare @tblSpaceUsed TABLE
(
	  [id]				int not null identity(1,1)

	, [tableName]		sysname not null

	, [numberofRecords]	bigint  not null

	, [reservedKB]		varchar(100) not null
	, [reservedMB]		as 
						cast
						(
							(
								cast
								(
									replace
									(
										ltrim(rtrim([reservedKB]))
										, 'KB'
										, ''
									) 
									as decimal(30, 6)
								)
							) / (1024.00)
								as decimal(30, 6)
						)

						
	, [dataKB]			varchar(100) not null
	, [dataMB]		as 
					cast
						(
							(
								cast
								(
									replace
									(
										ltrim(rtrim([dataKB]))
										, 'KB'
										, ''
									) 
									as decimal(30, 6)
								)
							) / (1024.00)
								as decimal(30, 6)
						)

	, [indexKB]			varchar(100) not null
	, [indexMB]		
					as 
					cast
						(
							(
								cast
								(
									replace
									(
										ltrim(rtrim([indexKB]))
										, 'KB'
										, ''
									) 
									as decimal(30, 6)
								)
							) / (1024.00)
								as decimal(30, 6)
						)


	, [unusedKB]		varchar(100) not null
	, [unusedMB]		as 
						cast
						(
							(
								cast
								(
									replace
									(
										ltrim(rtrim([unusedKB]))
										, 'KB'
										, ''
									) 
									as decimal(30, 6)
								)
							) / (1024.00)
								as decimal(30, 6)
						)



)

sp_spaceused – Get table space used

We issued sp_spaceused against each table.


insert into @tblSpaceUsed
(

	  [tableName]
	, [numberofRecords]

	, [reservedKB]		
	, [dataKB]	
	, [indexKB]	
	, [unusedKB]	

)
exec sp_spaceused @objectName	-- @objectName holds the target table's name

 

Results

SQLPerf

fn_dbLog // Count

Explanation

  1. LOP_INSERT_ROWS & LCX_TEXT_MIX
    • Operation :- LOP_INSERT_ROWS
    • Context :- LCX_TEXT_MIX
    • Explanation
      • Insert Rows into LOB column ( LCX_TEXT_MIX )
      • Data written to Log file
  2. LOP_MODIFY_ROWS & LCX_TEXT_MIX
    • Operation :- LOP_MODIFY_ROWS
    • Context :- LCX_TEXT_MIX
    • Explanation
      • Insert Rows into LOB column ( LCX_TEXT_MIX )
      • Data read from Log file and written to datafile
  3. LOP_SET_FREE_SPACE & LCX_PFS
    • Operation :- LOP_SET_FREE_SPACE
    • Context :- LCX_PFS
    • Explanation
      • LOP_SET_FREE_SPACE
        • Looking for free space
      • LCX_PFS
        • Page marked as deallocated in PFS
  4. LOP_MODIFY_ROW & LCX_PFS
    • LOP_MODIFY_ROW
      • Modify existing data
    • LCX_PFS
      • Update PFS pages
  5. LOP_FORMAT_PAGE & LCX_TEXT_MIX
    • LOP_FORMAT_PAGE
      • Data is being loaded into a Heap
      • We are truncating data before loading
      • LOP_FORMAT_PAGE seems to indicate the zeroing out of pages from previous usage, in preparation for re-use
    • LCX_TEXT_MIX
      • LCX_TEXT_MIX indicates LOB data pages
  6. LOP_MODIFY_ROW & LCX_TEXT_TREE
    • LOP_MODIFY_ROW
      • Modify existing data
    • LCX_TEXT_TREE
      • ???
  7. LOP_INSERT_ROWS & LCX_HEAP
    • LOP_INSERT_ROWS
      • Insert data
    • LCX_HEAP
      • Heap
    • Number of entries
      • Matches number of records inserted into table
  8. LOP_HOBT_DELTA & LCX_NULL
    • LOP_HOBT_DELTA
      • HOBT stands for “Heap or B-Tree”
    • LCX_NULL
      • ???
  9. LOP_FORMAT_PAGE & LCX_HEAP
    • LOP_FORMAT_PAGE
      • LOP_FORMAT_PAGE is a type of log record that does a full or partial page format
    • LCX_HEAP
      • Heap is targeted
  10. LOP_INSERT_ROWS & LCX_INDEX_LEAF
    • LOP_INSERT_ROWS
      • Insert data
    • LCX_INDEX_LEAF
      • Index Leaf page
    • Number of entries
      • Depends on how records are inserted
  11. LOP_SET_BITS & LCX_IAM
    • LOP_SET_BITS
      • Set the bit for IAM Pages
    • LCX_IAM
      • SQL Server uses index allocation maps to track which pages have been allocated to which allocation unit. They are stored in special IAM pages. ( Link )
  12. LOP_SET_BITS & LCX_GAM
    • LOP_SET_BITS
      • Set the bit for GAM Pages
    • LCX_GAM
      • Global Allocation Map (GAM) page
  13. LOP_LOCK_XACT,  LCX_NULL
    • LOP_LOCK_XACT
      • Transaction Lock
  14. LOP_COMMIT_XACT,  LCX_NULL
    • LOP_COMMIT_XACT
      • Transaction Commit
  15. LOP_BEGIN_XACT,  LCX_NULL
    • LOP_BEGIN_XACT
      • Transaction Begin
  16. LOP_SET_BITS & LCX_DIFF_MAP
    • LOP_SET_BITS
      • Set the bit
    • LCX_DIFF_MAP
      • Likely Differential Map
      • ???
  17. LOP_FORMAT_PAGE  & LCX_TEXT_TREE
    • LOP_FORMAT_PAGE
      • ??
    • LCX_TEXT_TREE
      • ???
  18. LOP_INSERT_ROWS & LCX_TEXT_TREE
    • LOP_INSERT_ROWS
      • Insert Rows
    • LCX_TEXT_TREE
      • ???
  19. LOP_INSYSXACT & LCX_INDEX_LEAF
    • LOP_INSYSXACT
      • Internal System Table Insert
        • Likely due to table create statement
    • LCX_INDEX_LEAF
      • Index Leaf Operation
  20. LOP_BUF_WRITE  & LCX_NULL
    • LOP_BUF_WRITE
      • Write to Buffer
    • LCX_NULL
      • ???
  21. LOP_INSYSXACT & LCX_INDEX_INTERIOR
    • LOP_INSYSXACT
    • Internal System Table Insert
      • Likely due to table create statement
    • LCX_INDEX_INTERIOR
      • Interior = Non-Leaf Page
  22. LOP_COMPRESSION_INFO  & LCX_INDEX_LEAF
    • LOP_COMPRESSION_INFO
      • Compression Info
    • LCX_INDEX_LEAF
      • Index Leaf
  23. LOP_FORMAT_PAGE & LCX_BULK_OPERATION_PAGE
    • LOP_FORMAT_PAGE
    • LCX_BULK_OPERATION_PAGE
      • Bulk Insert
  24. LOP_MODIFY_HEADER & LCX_BULK_OPERATION_PAGE
    • LOP_MODIFY_HEADER
      • Header Page
    • LCX_BULK_OPERATION_PAGE
      • Bulk Insert
  25. LOP_EXPUNGE_ROWS & LCX_INDEX_LEAF
    • LOP_EXPUNGE_ROWS
      • Ghost Cleanup
    • LCX_INDEX_LEAF
      • Leaf Node

 

Others

  1.  LOP_END_CKPT
    • LOP_END_CKPT
      • CheckPoint – End
  2. LOP_BEGIN_CKPT
    • LOP_BEGIN_CKPT
      • Checkpoint – Begin
  3. LOP_XACT_CKPT
    • LOP_XACT_CKPT
      • Checkpoint – Transaction

 

 

Terminologies

PFS

  1. Paul Randal – Under the covers: GAM, SGAM, and PFS pages
    • PFS stands for Page Free Space, but the PFS page tracks much more than that. As well as GAM intervals, every database file is also split (conceptually) into PFS intervals. A PFS interval is 8088 pages, or about 64MB. A PFS page doesn’t have a bitmap – it has a byte-map, with one byte for each page in the PFS interval (not including itself).
    • Free space is only tracked for pages storing LOB values (i.e. text/image in SQL Server 2000, plus varchar(max)/varbinary(max)/XML and row-overflow data in SQL Server 2005) and heap data pages. This is because these are the only pages that store unordered data and so insertions can occur anywhere there’s space. For indexes, there’s an explicit ordering so there’s no choice in the insertion point.

LCX_PFS

  1. SQL Server: Curious Case of Logging in Online and Offline Index Rebuild in Full Recovery Model
    Link

    • The context column shows LCX_PFS, which means that the page is marked as de-allocated in the PFS page.

 

IAM

  1. sqlity.com
    • Home » Blog » The Index Allocation Map (IAM)
      • The Index Allocation Map (IAM)
        Link

LOB

  1. sqlity.com
    • Home » Blog » LOB Data Pages
      • Page Type 3 – LOB Data Pages explained
        Link

        • sys.dm_db_database_page_allocations

Background Tasks

  1. Ghost Cleanup
    • Tracking
      • SELECT * FROM master.dbo.sysprocesses WHERE cmd LIKE '%ghost%'

Summary

Intentionally leaving out any advisory.

Just wanted to point to reviewing Transaction Log contents and size as one way to profile database applications.

Transaction Log backup sizes present one option for reviewing overall sizes.

Querying fn_dblog and aggregating on the Operation and Context columns allows one to follow the trail of SQL statements.

 

References

  1. Server & Tools Blogs > Data Platform Blogs > SQL Server Database Engine Blog
    • Paul Randal – Under the covers: GAM, SGAM, and PFS pages
      Link
  2. Wiki > TechNet Articles > SQL Server: Curious Case of Logging in Online and Offline Index Rebuild in Full Recovery Model
    • SQL Server: Curious Case of Logging in Online and Offline Index Rebuild in Full Recovery Model
      Link
  3. sqlFascination
    • Andrew Hong
      • A Strange Case of a Very Large LOP_FORMAT_PAGE
        Link
      • Decoding a Simple Update Statement Within the Transaction Log
        Link
      • How Do You Decode A Simple Entry in the Transaction Log? (Part 1)
        Link
  4. Simple Talk
    • Uwe Ricken
      • DELETE Operation in SQL Server HEAPs
        Link
  5. Akash Gautam
    • sqlservergeeks.com
      • SQL Server Recovery Model Deep Dives
        Link
  6.  WikiDBA
    •  Virendra Yaduvanshi
      • Finding user who ran insert on SQL Server
        Link
  7. SqlShack
    • Jefferson Elias
      • How to take advantage of SQL Server Transaction Log
        Link
  8. SQLSkills
    • Paul Randle
      • LOP_FORMAT_PAGE is a type of log record that does a full or partial page format
        Link
      • Tracking page splits using the transaction log
        Link
      • Inside the Storage Engine: Ghost cleanup in depth
        Link
  9. Martin Smith
    • stackoverflow
      • Why does DELETE leave a lingering effect on performance?
        Link
  10. Paul White
    • Paul White & DiamondBeezer
      • SQL Server High Availability and Sharepoint – log file grows when no database activity
        Link
  11. raresql.com
    • SQL Server – How to find Who Deleted What records at What Time
      Link
  12. EfficientSQL
    1. OPTIMIZING TRANSACTION LOG PERFORMANCE
      Link

 

SQL Server – Transaction Log Management – Recovery Mode

Background

It is finally time to revisit a blog posting I started on Dec 21st, 2016.

That post is titled “SQL Server – Error – Cannot shrink log file because the logical file located at the end of the file is in use“.

And, it is here.

Issue

Received an alarm that our database files are growing and so sought to find out why.

 

TroubleShooting

Transaction Log – Utilization Metrics from Performance Counter

transactionLog.IUDF_UtilizationMetricsFromPerformanceCounters

Code

use master
go

if schema_id('transactionLog') is null
begin

	exec('create schema [transactionLog] authorization [dbo]')

end
go

if object_id('[transactionLog].[IUDF_UtilizationMetricsFromPerformanceCounters]') is null
begin

	exec(
			'create function [transactionLog].[IUDF_UtilizationMetricsFromPerformanceCounters]
			()
			RETURNS table
			AS
			RETURN 
			(
				SELECT 
					[shell] = 1/0
		   );
	   	   '
		)

end
go


alter function [transactionLog].[IUDF_UtilizationMetricsFromPerformanceCounters]
()
returns TABLE
as
return
(

	with cte
	(
		  [dbid]
		, [database]
		, [counter]
		, [size]
	)
	as
	(
 
		SELECT
				  [dbid]
					= db_id(RTRIM(instance_name)) 
 
				, [database]
					= RTRIM(instance_name) 
 
				, tblDMOPC.counter_name
 
				, cntr_value log_size_kb
 
		FROM  sys.dm_os_performance_counters tblDMOPC
 
		WHERE tblDMOPC.object_name = 'SQLServer:Databases'
 
		AND   tblDMOPC.counter_name like 'Log File' + '%'
 
		AND   tblDMOPC.instance_name <> '_Total'
 
	)
	, ctePivot
	as
	(
		select *
 
		from   cte
		pivot 
				(
					avg ([size]) 
							for [counter] in
								(
									  [Log File(s) Used Size (KB)]                                                                                                           
									, [Log File(s) Size (KB)]                                                                                                           
								)
				) as avgSize
 
	)
 
	select 

			  [cteP].[dbid]

			, [cteP].[database]
 
			, [usedMB] = [Log File(s) Used Size (KB)]
							/ 1000
 
			, [totalMB] =  [Log File(s) Size (KB)]
							/ 1000
 
			, [%]
				= cast
					(
						( [Log File(s) Used Size (KB)] * 100.00)
						/ ( [Log File(s) Size (KB)])
						as decimal(10,2 )
					)

			, [recoveryModel]
				= tblSD.recovery_model_desc

			, [log reuse wait]
				= tblSD.log_reuse_wait_desc
			
	from   ctePivot cteP

	inner join sys.databases tblSD

		on cteP.[dbid] = tblSD.database_id
 

)
go


Output

qa_20170207_0958pm

Explanation

  1. Before capturing the screenshot above, our target database, QA_rbpivr1, was in FULL RECOVERY
    • Its transaction log size was 8 GB
    • And, 3.2 GB of the 8 GB is in use

 

Why so much Transaction Log is being held up?

Recovery Mode

Again, for QA_rbpivr1, 3.2 GB is being used.

Let us review how the database is configured.

qa_recoverymode_20170207_1009am

We see that the Recovery Mode for our database is Full.

Backup Dates

databaseproperty_options_general_backupdate_qa_20170207_1013am

 

Explanation
  1. The last full backup was taken last night, Feb 6th, 2017, and it completed at 8:35 PM
  2. And, the last Transaction Log backup was taken on Sunday, Feb 5th, 2017 at 6 PM

 

 

Transaction Backup History

Let us review our Transaction Backup History

 

sqlserveragent_transactionbackuphistory_qa_20170207_1019am

Explanation:

  1. Before today, transaction backups occurred on 2/5/2017, 1/29/2017, and 1/22/2017
  2. And, they were occurring each hour
  3. Looking back at the calendar, those days fell on Sundays

 

 

Review Scheduled Transaction Backup

jobproperties-databasebackup-userdatabases-log-schedules-sunday-eachhour

 

Explanation:

  1. Schedule is Weekly on Sundays
  2. And, the frequency is each hour

 

Correction

Our bad.

Revise Schedule

sqlserveragent_transactionbackuphistory_qa_20170207_1028am

Explanation:

  1. Changed Schedule’s frequency from Weekly to Daily

 

Metadata

Review Checkpoints in Error Log File

Enable Logging of Checkpoints in ErrorLog file

Enable Trace Flags

Code

dbcc traceon (3502, 3504, 3605, -1 )

Review

Code

dbcc tracestatus

Output

dbcctracestatus

Review ErrorLog file

Let us review our error logs and make sure that Checkpoints are indeed being logged.

logfileviewer_20170207_1220pm

 

transactionLog.sp_checkPointsInErrorLogFileAggregate

 

Code

 


use [master]
go

if object_id('transactionLog.sp_checkPointsInErrorLogFileAggregate') is null
begin

	exec('create procedure [transactionLog].[sp_checkPointsInErrorLogFileAggregate] as ')

end
go

alter procedure [transactionLog].[sp_checkPointsInErrorLogFileAggregate] 
as  

begin

	set nocount on;
  
	declare @p1  INT
	declare @p2  INT
	declare @p3  VARCHAR(255)
	declare @p4  VARCHAR(255)
  
	declare @tblErrorLog     TABLE
	(
		  [id] int not null
			identity(1,1)
  
		, [LogDate] datetime
  
		, [ProcessInfo] sysname
  
		, [Text]        varchar(600) null
  
		, primary key
			(
				  [LogDate]
				, [id]
			)
  
		, INDEX INDX_ID
			(
				[id]
			)
  
	)
  
	set @p1 = 0 -- current
	set @p2 = 1 -- error log
	set @p3 = 'Ckpt dbid'	-- search term
	set @p4  = ''
  
	insert into @tblErrorLog
	(
		  [LogDate]
		, [ProcessInfo]
		, [Text]        
	)
	exec [sys].[sp_readerrorlog]
			 @p1   = @p1  
		   , @p2   = @p2
		   , @p3   = @p3
		   , @p4   = @p4
  
  
	select
			  tblSD.[name]
  
			, tblSD.log_reuse_wait_desc
  
			, [count] 
				= count(tblEL.LogDate)
  
			, [LogDateMin]
				= min(tblEL.LogDate)
  
			, [LogDateMax]
				= max(tblEL.LogDate)
   
	from  sys.databases tblSD
  
	left outer join @tblErrorLog tblEL
  
			on   tblEL.[Text]  like '%' + cast(tblSD.database_id as varchar(10)) + '%'
          
			and  tblEL.[Text] like 'Ckpt dbid ' + '%' + 'started'
 
	group by
  
			   tblSD.[name]
			 , tblSD.log_reuse_wait_desc
  
	order by
  
			   tblSD.log_reuse_wait_desc asc
			 , count(tblEL.LogDate) desc
			 , tblSD.[name]

end
go



 

Output

errorlogfile_checkpoints_transactionlogcheckpoint_20170207_1036am

Explanation

  1. Database
    • QA_rbpivr1
      • LogDateMax
        • From digging in the error log, we can see that the last Transaction Log checkpoint was at 2017-02-06 20:34
        • In readable terms, that is yesterday, Monday, Feb 6th, 2017, at 8:34 PM
      • LogDateMin
        • LogDateMin is 2017-01-15 04:00
        • In readable terms, Jan 15th, 2017 at 4 AM

 

Source Control

GitHub

  1. DanielAdeniji/SQLServerTransactionLogMetrics
    Link

 

Summary

In summary, our transaction log is growing and not being pruned because, unfortunately, our scheduled Transaction Log backup job was incorrectly set to occur only on Sundays.

Please keep in mind this only affects databases set for FULL RECOVERY.
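A quick way to see which databases are exposed to this class of problem is to list the ones in FULL recovery along with their current log reuse wait; a minimal sketch:

```sql
-- Databases in FULL recovery, and what is holding up log reuse
select
	  [name]
	, recovery_model_desc
	, log_reuse_wait_desc
from  sys.databases
where recovery_model_desc = 'FULL'
order by [name]
```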

From the metrics above, we can see that our identified database Transaction Log Checkpoints are only occurring at the tail end of Full Backups.

And, as aforementioned on Sundays.

 

SQL Server – Database Management – Shrink Log files

Background

Received a ticket that we are running low on Storage on one of our Development Database Servers.

Traced the issue back to the fact that, though we have a Database Transaction Log backup job registered, it is not scheduled to run.

If I run it right away, I will be backing up old unneeded transaction log data.

And so, let us change the recovery mode to Simple (which truncates the transaction log), shrink the log files, and then change the recovery mode back to its original state.
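For a hypothetical database named [sampleDB] with a log file [sampleDB_log], the three generated statements would look something like:

```sql
-- 1) Drop to SIMPLE recovery, truncating the transaction log
ALTER DATABASE [sampleDB] set recovery SIMPLE;

-- 2) Shrink the log file(s)
use [sampleDB];
dbcc shrinkfile([sampleDB_log]) with no_infomsgs;

-- 3) Restore the original recovery model
ALTER DATABASE [sampleDB] set recovery FULL;
```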

 

Script

Code


; with cteLogFile
(
	  database_id
	, [shrinkStatement] 
)
as
(

	select

		  tblSMF.database_id

		, [shrinkStatement] 
			= STUFF
			  (
				(
					select 
							 ', ' 
							 + 'dbcc shrinkfile(' 
							 + quotename(tblSMF_Inner.[name])
							 + ') ' 
							 + ' with no_infomsgs'
							 + ';'
					from   sys.master_files tblSMF_Inner
					where  tblSMF.database_id = tblSMF_Inner.database_id  
					and  tblSMF_Inner.[type_desc] = 'LOG'
					FOR XML PATH('')
				)
				, 1
				, 1
				, ''
			 ) 

	from   sys.master_files tblSMF

	group by

			tblSMF.database_id

)
select	
		  [database] = tblSD.[name]

		, tblSD.recovery_model_desc

		, tblSD.log_reuse_wait_desc

		, [sqlSimple]
			= 'ALTER DATABASE ' 
				+ quoteName(tblSD.[name])
				+ ' set recovery SIMPLE;'

		, [sqlShrink]
			= 'use '
				+ quoteName(tblSD.[name])
				+ '; '
				+ cteLF.shrinkStatement

		, [sqlRestate]
			= 'ALTER DATABASE ' 
				+ quoteName(tblSD.[name])
				+ ' set recovery '
				+ tblSD.recovery_model_desc	collate DATABASE_DEFAULT
				+ '; '

from   sys.databases tblSD

inner join cteLogFile cteLF

		on tblSD.[database_id] = cteLF.database_id

where  tblSD.recovery_model_desc != 'SIMPLE'



 

Output

 

logfilemanagement

 

Explanation

  1. Transaction Log
    • File Sizes
      • Can only be shrunk down to the file's initial size
    • Cannot assume that the log file's file_id in sys.master_files is 2, so check for type_desc = ‘LOG’ instead

Keep in Mind

Please keep in mind that the SQL above only scripts the steps; it does not execute them.

Copy and paste the generated statements to perform the actual execution.