SQL Server – Transaction Log Usage – Metrics

Background

I have been reviewing Transaction Log utilization and wanted to briefly document the SQL code snippets available out on the Internet.

 

Outline

  1. DBCC SQLPERF(logspace)
  2. sys.dm_os_performance_counters
    • Object Name
      • SQLServer:Databases
    • Counter Name
      • Log File(s) Size (KB)
      • Log File(s) Used Size (KB)


DBCC

DBCC sqlperf

Code

dbcc sqlperf(logspace)
go

Output

DBCCSQLPerf_20180611_1139AM.png
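
If the results need to be sorted or filtered, the DBCC output can also be captured into a table variable. A minimal sketch, using the documented result-set columns of dbcc sqlperf(logspace):

-- Capture dbcc sqlperf(logspace) output so it can be sorted and filtered
declare @logSpace table
(
	  [Database Name]      sysname
	, [Log Size (MB)]      float
	, [Log Space Used (%)] float
	, [Status]             int
);

insert into @logSpace
exec ('dbcc sqlperf(logspace)');

select *
from   @logSpace
order by [Log Size (MB)] desc;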

Performance Counters

Performance Counters – Object Name “SQLServer:Databases” // Counter Names “Log File(s) Size (KB)” and “Log File(s) Used Size (KB)”

Code – Tabulated

SELECT
	  [object]
		= rtrim(pc1.[object_name])

	, [database]
		= rtrim(pc1.instance_name)

	, [logSizeMB]
		= cast
			(
				pc1.cntr_value/1024.0
					as decimal(30, 2)
			)

	, [logUsedMB]
		= cast
			(
				pc2.cntr_value/1024.0
					as decimal(30, 2)
			)

	, [logAvailMB]
		= cast
			(
				(pc1.cntr_value - pc2.cntr_value)
					/1024.0
					as decimal(30, 2)
			)

	, [percentUtilization]
		= cast
			(
				(

						( pc2.cntr_value*100.00 )
					/ NULLIF(pc1.cntr_value, 0)

				)
				as decimal(10, 2)
			)

FROM   sys.dm_os_performance_counters as pc1

JOIN   sys.dm_os_performance_counters as pc2

	ON   pc1.[instance_name] = pc2.[instance_name]
	AND  pc1.[object_name] = pc2.[object_name]

where ( 1 = 1 )

AND pc1.[object_name] = 'SQLServer:Databases'

AND pc2.[object_name] = 'SQLServer:Databases'

AND pc1.counter_name = 'Log File(s) Size (KB)'

AND pc2.counter_name = 'Log File(s) Used Size (KB)'

AND pc1.instance_name not in
		(
				'_Total'
			, 'mssqlsystemresource'
		)

AND pc1.cntr_value > 0

order by

	[logUsedMB] desc

 

Output

performanceCounter_20180611_1133AM.png

Code – XML

select 

	tblInner.[node]

from
	(

		SELECT
              [object]
			    = rtrim(pc1.[object_name])

			, [database]
				= rtrim(pc1.instance_name)

			, [logSizeMB]
				= cast
					(
						pc1.cntr_value/1024.0
							as decimal(30, 2)
					)

			, [logUsedMB]
				= cast
					(
						pc2.cntr_value/1024.0
							as decimal(30, 2)
					)

			, [logAvailMB]
				= cast
					(
						(pc1.cntr_value - pc2.cntr_value)
							/1024.0
							as decimal(30, 2)
					)

			, [percentUtilization]
				= cast
					(
						(

								( pc2.cntr_value*100.00 )
							/ NULLIF(pc1.cntr_value, 0)

						)
						as decimal(10, 2)
					)

		FROM   sys.dm_os_performance_counters as pc1

		JOIN   sys.dm_os_performance_counters as pc2

			ON   pc1.[instance_name] = pc2.[instance_name]
			AND  pc1.[object_name] = pc2.[object_name]

		where ( 1 = 1 )

		AND pc1.[object_name] = 'SQLServer:Databases'

		AND pc2.[object_name] = 'SQLServer:Databases'

		AND pc1.counter_name = 'Log File(s) Size (KB)'

		AND pc2.counter_name = 'Log File(s) Used Size (KB)'

		AND pc1.instance_name not in
				(
					  '_Total'
					, 'mssqlsystemresource'
				)

		AND pc1.cntr_value > 0

		order by
			[logUsedMB] desc

		for xml
			  PATH('database')
			, root('list')

	) tblInner(node)

Output – XML

performanceCounter_XML_20180611_1137AM

 

References

  1. Microsoft
    • Developer
      • Downloads > Programs > Community > Documentation
        • Is there something like DBCC SQLPERF(LOGSPACE) that will show me a list of the databases and how much free space is in each?
          Link

SQL Server – Unable to shrink Transaction Log file

Background

Received an alert stating that we had gone below our threshold of 10% on one of our DB Servers.

Thank goodness, no late-night phone calls.

I only saw it on my way to work this morning.

The good and the bad of smartphones.

TroubleShooting

Windows Desktop

Folder

Image

Explanation

  1. ppsivr_8.ldf is 28 GB

SQL Metadata

Outline

Here are the dynamic management views (DMVs) and commands that we will access:

  1. dbcc sqlperf(logspace)
  2. sys.database_files
  3. sys.databases
  4. sys.dm_exec_requests & sysprocesses

 

Review Transaction Log File Sizes

SQL


dbcc sqlperf(logspace)

Output

 

Explanation

  1. ppsivr
    • Log Size MB
      • 27,699 MB
      • 27 GB

Review Database File Sizes

SQL


-- replace with targeted database
use [database]

SELECT 
		  [name]

		, [type]
			= tblSDF.[type_desc]

		, [filename]
			= tblSDF.[physical_name]

		, [allocatedMB]
			= tblSDF.size / 128

		, [usedMB]
			= CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)
				/ 128

		, [availableSpaceMB]
				 = ( size - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) )
					/ 128

FROM sys.database_files tblSDF

order by
		tblSDF.size desc
;

Output

Review Database Transaction Log Re-usability State

SQL


declare @dbid   int
declare @dbname sysname

set @dbname = 'ppsivr'
set @dbid = db_id(@dbname)

select 
		    tblSD.[database_id]
		  , tblSD.[name]
		  , tblSD.[log_reuse_wait_desc]

from   sys.databases tblSD

where  (
			   (@dbid is null)

			or ( tblSD.[database_id] = @dbid)
		
	   )

Output

Explanation

  1. log_reuse_wait_desc
    • The database is not recycling its transaction log due to an ‘Active Transaction’ (a complementary DBCC OPENTRAN check follows below)
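
A complementary check (not part of the original outline) is DBCC OPENTRAN, which reports the oldest active transaction in the database and so hints at what is holding up log truncation:

use [ppsivr]
go

-- report the oldest active transaction, if any, in the current database
dbcc opentran
go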

Review Current DB Requests

Code



declare @dbid   int
declare @dbname sysname

set @dbname = 'ppsivr'
set @dbid = db_id(@dbname)

select 
		  tblSDER.[session_id]

		, [database] 
			= db_name(tblSDER.[database_id])

		, [openTransaction]
			= tblSP.[open_tran]

		, tblSDER.[status]

		, tblSDER.[command]

		, tblSDER.[percent_complete]

		, tblSDER.wait_type

		, tblSDER.last_wait_type

		, tblSDER.[reads]

		, tblSDER.[writes]

from   sys.dm_exec_requests tblSDER

inner join master..sysprocesses tblSP

	on tblSDER.[session_id] = tblSP.[spid]

where  (

	         ( tblSP.[open_tran] != 0 )

	      or ( tblSDER.[database_id] = @dbid)

      )



Output

Explanation

  1. User Transactions
    • It does not appear that we have user transactions currently using our targeted database
  2. Background
    • WAIT_XTP_OFFLINE_CKPT_NEW_LOG
      • It does not appear that WAIT_XTP_OFFLINE_CKPT_NEW_LOG updates the percent_complete column, and so it is a bit difficult to track its current progress

 

Remediation

Here are the things we tried:

  1. dbcc shrinkfile
  2. Take database offline and bring back online

Attempt to Shrink Transaction Log Files

SQL


use [ppsivr]
go

dbcc shrinkfile('ppsivr_log')
go

Output

Explanation

  1. We want to take a good look at Current Size and Used Pages
    • In our case they are the same
    • What is in our Transaction Log File? (see the sketch below)
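
One way to peek at what is actually sitting in the transaction log is the undocumented fn_dblog function. A hedged sketch; the function is unsupported and its column list varies by version, so treat it as exploratory only:

use [ppsivr]
go

-- sample the active portion of the log
select top (100)
	  [Current LSN]
	, [Operation]
	, [Transaction ID]
from   fn_dblog(null, null);

-- aggregate by operation to see what dominates the active log
select   [Operation]
	   , [entries] = count(*)
from     fn_dblog(null, null)
group by [Operation]
order by count(*) desc;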

 

Take Database Offline and bring it back online

Force termination of ongoing sessions

SQL


use master
go

ALTER DATABASE [ppsivr]
	SET SINGLE_USER WITH ROLLBACK IMMEDIATE
	;

alter database [ppsivr] set offline;

alter database [ppsivr] set online;

ALTER DATABASE [ppsivr]
	SET MULTI_USER WITH ROLLBACK IMMEDIATE
	;

 

Summary

But, nothing worked!

We will come back and talk about what worked.

References

  1. Microsoft
    • Support.Microsoft.com
      • FIX: Offline checkpoint thread shuts down without providing detailed exception information in SQL Server 2014
        Link

SQL Server – Transaction Log Management – Recovery Mode

Background

It is finally time to revisit a blog posting I started on Dec 21st, 2016.

That post is titled “SQL Server – Error – Cannot shrink log file because the logical file located at the end of the file is in use“.

And, it is here.

Issue

Received an alarm that our database files are growing, and so I sought to find out why.

 

TroubleShooting

Transaction Log – Utilization Metrics from Performance Counter

transactionLog.IUDF_UtilizationMetricsFromPerformanceCounters

Code

use master
go

if schema_id('transactionLog') is null
begin

	exec('create schema [transactionLog] authorization [dbo]')

end
go

if object_id('[transactionLog].[IUDF_UtilizationMetricsFromPerformanceCounters]') is null
begin

	exec(
			'create function [transactionLog].[IUDF_UtilizationMetricsFromPerformanceCounters]
			()
			RETURNS table
			AS
			RETURN 
			(
				SELECT 
					[shell] = 1/0
		   );
	   	   '
		)

end
go


alter function [transactionLog].[IUDF_UtilizationMetricsFromPerformanceCounters]
()
returns TABLE
as
return
(

	with cte
	(
		  [dbid]
		, [database]
		, [counter]
		, [size]
	)
	as
	(
 
		SELECT
				  [dbid]
					= db_id(RTRIM(instance_name)) 
 
				, [database]
					= RTRIM(instance_name) 
 
				, tblDMOPC.counter_name
 
				, cntr_value log_size_kb
 
		FROM  sys.dm_os_performance_counters tblDMOPC
 
		WHERE tblDMOPC.object_name = 'SQLServer:Databases'
 
		AND   tblDMOPC.counter_name like 'Log File' + '%'
 
		AND   tblDMOPC.instance_name <> '_Total'
 
	)
	, ctePivot
	as
	(
		select *
 
		from   cte
		pivot 
				(
					avg ([size]) 
							for [counter] in
								(
									  [Log File(s) Used Size (KB)]                                                                                                           
									, [Log File(s) Size (KB)]                                                                                                           
								)
				) as avgSize
 
	)
 
	select 

			  [cteP].[dbid]

			, [cteP].[database]
 
			, [usedMB] = [Log File(s) Used Size (KB)]
							/ 1024.0
 
			, [totalMB] =  [Log File(s) Size (KB)]
							/ 1024.0
 
			, [%]
				= cast
					(
						( [Log File(s) Used Size (KB)] * 100.00)
						/ NULLIF( [Log File(s) Size (KB)], 0 )
						as decimal(10, 2)
					)

			, [recoveryModel]
				= tblSD.recovery_model_desc

			, [log reuse wait]
				= tblSD.log_reuse_wait_desc
			
	from   ctePivot cteP

	inner join sys.databases tblSD

		on cteP.[dbid] = tblSD.database_id
 

)
go
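
Usage

Once created, the inline table-valued function can be invoked directly; a quick usage sketch:

use master
go

select   *
from     [transactionLog].[IUDF_UtilizationMetricsFromPerformanceCounters]()
order by [%] desc
go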


Output

qa_20170207_0958pm

Explanation

  1. Before capturing the screenshot above, our target database, QA_rbpivr1, was in FULL RECOVERY
    • Its transaction log size was 8 GB
    • And, 3.2 GB of the 8 GB is in use

 

Why is so much of the Transaction Log being held up?

Recovery Mode

Again, for QA_rbpivr1, 3.2 GB is being used.

Let us review how the database is configured.

qa_recoverymode_20170207_1009am

We see that the Recovery Mode for our database is Full.

Backup Dates

databaseproperty_options_general_backupdate_qa_20170207_1013am

 

Explanation
  1. The last full backup was taken last night, Feb 6th, 2017, and it completed at 8:35 PM
  2. And, the last Transaction backup was taken on Sunday, Feb 5th, 2017 at 6 PM (a cross-check query follows below)
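
To cross-check these dates without the SSMS dialog, the msdb backup history can be queried. A minimal sketch (the grouping and aliases are ours, not part of the original post):

select    bs.database_name
		, bs.[type]	-- D = Full, I = Differential, L = Log
		, [lastBackupFinish] = max(bs.backup_finish_date)
from      msdb.dbo.backupset bs
where     bs.database_name = 'QA_rbpivr1'
group by  bs.database_name
		, bs.[type]
order by  bs.database_name
		, bs.[type]
;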

 

 

Transaction Backup History

Let us review our Transaction Backup History

 

sqlserveragent_transactionbackuphistory_qa_20170207_1019am

Explanation:

  1. Before today, transaction backups occurred on 2/5/2017, 1/29/2017, and 1/22/2017
  2. And, they were occurring each hour
  3. Looking back at the calendar, those days fell on Sundays

 

 

Review Scheduled Transaction Backup

jobproperties-databasebackup-userdatabases-log-schedules-sunday-eachhour

 

Explanation:

  1. Schedule is Weekly on Sundays
  2. And, the frequency is each hour

 

Correction

Our bad.

Revise Schedule

sqlserveragent_transactionbackuphistory_qa_20170207_1028am

Explanation:

  1. Changed Schedule’s frequency from Weekly to Daily

 

Metadata

Review Checkpoints in Error Log File

Enable Logging of Checkpoints in ErrorLog file

Enable Trace Flags

Code

-- 3502 : write a message to the error log at the start and end of each checkpoint
-- 3504 : write checkpoint detail to the error log
-- 3605 : direct trace output to the SQL Server error log
-- -1   : enable the trace flags globally
dbcc traceon (3502, 3504, 3605, -1 )

Review

Code

dbcc tracestatus

Output

dbcctracestatus

Review ErrorLog file

Let us review our error logs and make sure that checkpoints are indeed being logged.

logfileviewer_20170207_1220pm

 

transactionLog.sp_checkPointsInErrorLogFileAggregate

 

Code

 


use [master]
go

if object_id('transactionLog.sp_checkPointsInErrorLogFileAggregate') is null
begin

	exec('create procedure [transactionLog].[sp_checkPointsInErrorLogFileAggregate] as ')

end
go

alter procedure [transactionLog].[sp_checkPointsInErrorLogFileAggregate] 
as  

begin

	set nocount on;
  
	declare @p1  INT
	declare @p2  INT
	declare @p3  VARCHAR(255)
	declare @p4  VARCHAR(255)
  
	declare @tblErrorLog     TABLE
	(
		  [id] int not null
			identity(1,1)
  
		, [LogDate] datetime
  
		, [ProcessInfo] sysname
  
		, [Text]        varchar(600) null
  
		, primary key
			(
				  [LogDate]
				, [id]
			)
  
		, INDEX INDX_ID
			(
				[id]
			)
  
	)
  
	set @p1 = 0 -- current
	set @p2 = 1 -- error log
	set @p3 = 'Ckpt'
	set @p3 = 'Ckpt dbid'
	set @p4  = ''
  
	insert into @tblErrorLog
	(
		  [LogDate]
		, [ProcessInfo]
		, [Text]        
	)
	exec [sys].[sp_readerrorlog]
			 @p1   = @p1  
		   , @p2   = @p2
		   , @p3   = @p3
		   , @p4   = @p4
  
  
	select
			  tblSD.[name]
  
			, tblSD.log_reuse_wait_desc
  
			, [count] 
				= count(tblEL.LogDate)
  
			, [LogDateMin]
				= min(tblEL.LogDate)
  
			, [LogDateMax]
				= max(tblEL.LogDate)
   
	from  sys.databases tblSD
  
	left outer join @tblErrorLog tblEL
  
			on   tblEL.[Text]  like '%' + cast(tblSD.database_id as varchar(10)) + '%'
          
			and  tblEL.[Text] like 'Ckpt dbid ' + '%' + 'started'
 
	group by
  
			   tblSD.[name]
			 , tblSD.log_reuse_wait_desc
  
	order by
  
			   tblSD.log_reuse_wait_desc asc
			 , count(tblEL.LogDate) desc
			 , tblSD.[name]

end
go
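
Usage

A quick invocation of the stored procedure, which was created in the master database above:

use master
go

exec [transactionLog].[sp_checkPointsInErrorLogFileAggregate];
go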



 

Output

errorlogfile_checkpoints_transactionlogcheckpoint_20170207_1036am

Explanation

  1. Database
    • QA_rbpivr1
      • LogDateMax
        • From digging in the error log, we can see that the last Transaction Log checkpoint occurred on 2017-02-06 20:34
        • In readable terms, that is yesterday, Monday Feb 6th, 2017 at 8:34 PM
      • LogDateMin
        • LogDateMin is 2017-01-15 04:00
        • In readable terms, Jan 15th, 2017 @ 4 AM

 

Source Control

GitHub

  1. DanielAdeniji/SQLServerTransactionLogMetrics
    Link

 

Summary

In summary, our transaction log is growing and not being pruned because, unfortunately, our scheduled Transaction backup job was incorrectly set to only run on Sundays.

Please keep in mind this only affects databases set to the FULL recovery model.

From the metrics above, we can see that our identified database’s Transaction Log checkpoints are only occurring at the tail end of Full Backups.

And, as aforementioned, on Sundays.

 

Microsoft – SQL Server – Log File Utilization and Running Total

Here I am taking a good look at Database Log File sizes, and needing a good way to determine the overall running total.

I think I can use “dbcc sqlperf(logspace)”.

But, as I had working code already, I chose to use that one.

The working code basically reads through sys.database_files and uses the metadata function FILEPROPERTY.

The only thing missing is a good path to arriving at a running total.

So I Googled for one and quickly found a good one.

Giving credit to Mike Forman
http://stackoverflow.com/questions/860966/calculate-a-running-total-in-sqlserver

The APPLY operator in SQL 2005 and higher works for this:

select
    t.id ,
    t.somedate ,
    t.somevalue ,
    rt.runningTotal
from TestTable t
 cross apply (
               select sum(somevalue) as runningTotal
    	       from TestTable
    	       where somedate <= t.somedate
    	     ) as rt
order by t.somedate

Giving credit to Mikael Erikson
http://stackoverflow.com/questions/860966/calculate-a-running-total-in-sqlserver

In SQL Server 2012 you can use SUM() with the OVER() clause.

select
    id,
    somedate,
    somevalue,
    sum(somevalue) 
       over(order by somedate rows
             unbounded preceding
           ) as runningtotal
from TestTable

It reminds me of the song that talks about:

  • You’re not supposed to say the word Cancer in a song
  • And, telling folks that Jesus is the answer can rub them wrong
  • Well, you like to drink a cold one on the weekend
  • Do you wanna say I am sorry, but don’t know how
  • (King) George Strait – Amarillo by Morning
  • Tammy Wynette’s – Stand by Your Man (re-birth by Hillary Clinton)

During the 44th Country Music Awards (heard in Nov 2010), the artist who sang it had this to say:

I wanna talk to the fans.  It sounds like a cliche when you say thanks to the fans.

The great thing about country fans is, when you say fans, I don’t even mean mine. You guys are loyal to everyone in this room. It’s the most amazing loyal fan base in the world.

Here is sample code to get SQL Server Transaction Log File sizes:

select 
      file_id as fileID  
   --, name 
   --, physical_name
    , (size / 128) as sizeInMB
    , ((FILEPROPERTY(name, 'spaceUsed')) / 128) as spaceUsedInMB
    , tblDBFileInUsedRT.spaceUsed as spaceUsedRunningTotalInMB
    , ((size - (FILEPROPERTY(name, 'spaceUsed'))) / 128) as availInMB
    , tblDBFileAvailRT.avail as availRunningTotalInMB

from    sys.database_files tblDBFile	

cross apply 
  (

    select sum(((FILEPROPERTY(tblDBFileTotal.name, 'spaceUsed'))) / 128) as spaceUsed	
    from   sys.database_files tblDBFileTotal
    where  (tblDBFileTotal.type = 1)
    and    tblDBFileTotal.file_id <= tblDBFile.file_id

  ) tblDBFileInUsedRT

cross apply 
  (

    select sum(( size - (FILEPROPERTY(name, 'spaceUsed'))) / 128) as avail	
    from   sys.database_files tblDBFileTotal
    where  (tblDBFileTotal.type = 1)
    and    tblDBFileTotal.file_id <= tblDBFile.file_id

  ) tblDBFileAvailRT

where
	(
 	   (tblDBFile.type = 1)
	)

order by tblDBFile.file_id asc

 

The Artist I was referring to is “Brad Paisley”.  And, the video is available @

Brad Paisley – This is Country Music

http://www.youtube.com/watch?v=n_KxM4rU38Q

 

As my Church’s founding pastor likes to say, “Give it a little time”.

Hopefully, sooner or later you will have enough respect for what you do and realize that you only have a small window to do it. And, once that dawns on you, you will truly enjoy LIFE more, and appreciate and stand behind others’ work.

The silliness of “Java vs .Net” and “Oracle vs MySQL vs SQL Server” starts to give way to what you’re trying to do for your paying customers.


Microsoft – SQL Server – Virtual Log File (VLF)

What are Virtual Log Files?

Querying for Virtual Log Files offers insight into the journal of log file fragment creation and usage.

As log file fragments are created, the system keeps a record of their creation. Also, as the system uses a fragment, its status column is updated with a non-zero value.

Get Virtual Log File

   -- optionally pass a database name: dbcc loginfo('<database-name>')
   dbcc loginfo
   go

Any Issues?

If a SQL Server Instance is not well managed, numerous Virtual Log Files can be created. Over-creation results in fragmentation.

How to determine Fragmentation?

There are no hard rules for determining fragmentation. But, a database with 300 or more VLF entries is probably fragmented.

What are the effects of Fragmentation?

  • When a SQL Server Instance is starting, database recovery takes longer as the system reviews each VLF and its contents. Each VLF’s contents are examined and a determination is made about rollback, etc.

  • Data Manipulation Language (DML) statements such as insert, delete, and update can also take a bit longer as the system traverses the log searching for clean buffers.

How To Fix:

1] Backup Transaction Log
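
A minimal, illustrative log backup; <database-name> and <backup-path> are placeholders to replace with actual values:

        -- back up the transaction log so that inactive VLFs can be truncated
        BACKUP LOG [<database-name>]
            TO DISK = N'<backup-path>\<database-name>_log.trn';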

2] Identify Transaction Log

        select
              db_name(dbid) as databaseName
            , fileid
            , name as fileSymbolicName
            , groupid
            , cast((size * 8 / 1024) as sysname) + ' MB' as sizeInMB
        from   master.sys.sysaltfiles
        where  groupid = 0
        -- replace <database-name> with the actual db name
        and    dbid = db_id('<database-name>')
        ;

3] Shrink Transaction Log files

        -- replace <database-name> with the actual db name
        use [<database-name>];

        -- shrink the transaction log file, passing in the file-id from
        -- sys.sysaltfiles
        dbcc shrinkfile(<file-id>);

4] If the Transaction Log does not shrink

    Query sys.databases and review the log_reuse_wait_desc column for the reason why the transaction log might currently be in use.

Changes that will help forestall future Fragmentation

  • Set appropriate levels for database log file growth
  • Set a reasonable growth size for the model database transaction log; this will help with new databases created on that instance (see the sketch after this list)
  • Have a database job or monitoring script that identifies sub-optimally configured databases
  • Schedule regular Transaction Backups
  • Review application code and ETL jobs to ensure granularity in the projected number of records affected
  • Review SQL Server Profiler traces and identify queries that generate high IO
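
For the model database item above, a minimal sketch of setting an explicit autogrowth increment. The logical file name 'modellog' is the usual default, but verify it before running:

-- Assumption: the model log file's logical name is the default 'modellog';
-- confirm with: select name from sys.master_files where database_id = db_id('model');
ALTER DATABASE [model]
	MODIFY FILE ( NAME = N'modellog', FILEGROWTH = 64MB );
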
      /*
         Find The Number of VLFs For All Databases
         By Ken Simmons
      http://sqlserverpedia.com/blog/sql-server-bloggers/find-the-number-of-vlfs-for-all-databases/
      */
      if object_id('tempdb..#LogInfo') is not null
      begin
      	drop table #LogInfo
      end	
      
      if object_id('tempdb..#LogInfo2') is not null
      begin
      	drop table #LogInfo2
      end
      
      CREATE TABLE #LogInfo(
                      FileID BIGINT,
                      FileSize BIGINT,
                      StartOffset BIGINT,
                      FSeqNo BIGINT,
                      Status BIGINT,
                      Parity BIGINT,
                      CreateLSN VARCHAR(50))
      CREATE TABLE #LogInfo2(
                      DatabaseName SYSNAME,
                      FileID BIGINT,
                      FileSize BIGINT,
                      StartOffset BIGINT,
                      FSeqNo BIGINT,
                      Status BIGINT,
                      Parity BIGINT,
                      CreateLSN VARCHAR(50))
      EXEC master.dbo.sp_MSFOREACHDB
          'USE [?] INSERT INTO #LogInfo EXECUTE (''DBCC LOGINFO'');
                 INSERT INTO #LogInfo2 SELECT ''?'', * FROM #LogInfo;
                 DELETE FROM #LogInfo'
      
      SELECT DatabaseName,
             COUNT(*) AS VLFs
      from #LogInfo2
      group by DatabaseName
      order by count(*) desc  
      
      drop table #LogInfo
      drop table #LogInfo2
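
      On newer builds (SQL Server 2016 SP2 / 2017 and later), the same per-database VLF count can be read without sp_MSFOREACHDB by using the sys.dm_db_log_info DMF; a minimal sketch, assuming the function is available on your version:

      select    d.[name]     as databaseName
              , li.vlfCount
      from      sys.databases d
      cross apply
          (
              select count(*) as vlfCount
              from   sys.dm_db_log_info(d.database_id)
          ) li
      order by  li.vlfCount desc;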

Get Database File Sizes and Growth Pattern\Size

/*
Datafile Growth in SQL Server - Getting the Statistics Part I
http://sqlserver.livejournal.com/73512.html
*/
select
	  db_name(dbid) as databaseName
	, fileid
	, name as fileSymbolicName
	, groupid
	, cast((size * 8 / 1024) as sysname) + ' MB' as sizeInMB
	, growth = (
			case (status & 0x100000)
				when 0x100000
					then cast(growth as sysname) + '%'
				else cast((growth * 8 / 1024) as sysname) + ' MB'
			end
		)
from   master.sys.sysaltfiles
where  dbid = db_id()
;
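
Note that sys.sysaltfiles is a compatibility view; on current versions the same information is available from sys.master_files. A hedged sketch of an equivalent query:

select
	  db_name(mf.database_id) as databaseName
	, mf.file_id
	, mf.name as fileSymbolicName
	, mf.size / 128 as sizeInMB
	, growth =
		case mf.is_percent_growth
			when 1 then cast(mf.growth as sysname) + '%'
			else cast((mf.growth / 128) as sysname) + ' MB'
		end
from   sys.master_files mf
where  mf.database_id = db_id()
;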

References:

  1. SQL 2000 – YES! LOTS OF VLF’S ARE BAD! IMPROVE THE PERFORMANCE OF YOUR TRIGGERS AND LOG BACKUPS ON 2000
    http://sqlblogcasts.com/blogs/tonyrogerson/archive/2007/07/25/sql-2000-yes-lots-of-vlf-s-are-bad-improve-the-performance-of-your-triggers-and-log-backups-on-2000.aspx
  2. Linchi Shea – Performance impact: a large number of virtual log files – Part I
    http://sqlblog.com/blogs/linchi_shea/archive/2009/02/09/performance-impact-a-large-number-of-virtual-log-files-part-i.aspx
  3. Kimberly L. Tripp – Transaction Log VLFs – too many or too few?
    http://www.sqlskills.com/BLOGS/KIMBERLY/post/Transaction-Log-VLFs-too-many-or-too-few.aspx
  4. Real World DBA – Virtual Log Files – VLF – And SQL Server Performance
    http://www.realworlddba.com/post/2011/06/07/Virtual-Log-Files-VLF-And-SQL-Server-Performance.aspx