We are temporarily using Bulk Copy (bcp) to copy data from a production Operational DB to a development Reporting DB.
Here are some of the areas we are looking at to speed up the insertion of data into the destination DB.
To chase the fastest refresh, let us assume that the Reporting DB will not be serving actual requests during the refresh cycle.
With that assumption in hand, we can be quite aggressive about the choices we make.
That is, we can assume that users will not overly complain about empty reports while the data is being emptied and reloaded.
- Use minimal logging
- Review DB Engine processes when inserting data into destination DB
- Review Database Engine locks requested
- Review Database Constraints checked
- Review Triggers processed
- Import Batch sizes
- Sorted data
Minimal Database Logging
As our destination SQL instance only serves reporting requests, we do not risk meaningful data loss.
If we are worried about losing data, we can take regular full database backups and more frequent transaction log backups.
There are three recovery models: FULL, SIMPLE, and BULK_LOGGED.
- Our Operational database is configured for FULL recovery
- For our Reporting database, we have a choice of SIMPLE or BULK_LOGGED
- SIMPLE :- Minimal logging; the log is truncated at each checkpoint, so transaction log backups are not possible
- BULK_LOGGED :- Bulk operations are minimally logged while everything else is fully logged; a balance between Performance and Recoverability
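As a sketch, switching the Reporting database into BULK_LOGGED before the refresh and back afterwards looks like the following; the database name [ReportingDB] is a placeholder:

```sql
-- [ReportingDB] is a placeholder for the actual Reporting database name.
-- Before the refresh: bulk imports are minimally logged under BULK_LOGGED.
ALTER DATABASE [ReportingDB] SET RECOVERY BULK_LOGGED;

-- ... run the bcp imports here ...

-- After the refresh: SIMPLE keeps ongoing log maintenance minimal.
ALTER DATABASE [ReportingDB] SET RECOVERY SIMPLE;
```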
Database Engine Locks
The DB Engine expends quite a lot of energy and resources requesting and managing database locks. To temporarily alleviate this overhead we will request a table lock when pruning old data and when importing new data.
Pruning old data
If other tables do not reference our targeted table, we are able to truncate the targeted table.
truncate table [schema].[table]
On the other hand, if other tables reference our targeted table, we have to use the delete statement.
Here is a sample code where we do the following:
- Prune in batches; doing so allows us to stabilize our resource (Memory, I/O, Log) requirements
- Use TABLOCK to ensure that we request and take out a single table lock rather than many individual row or page locks;
the DB Engine escalates row and page locks to a table lock once certain thresholds are reached anyway
set nocount on;

declare @iNumberofRecordsInCycle int
declare @iNumberofRecordsPruned int

set @iNumberofRecordsInCycle = 50000
set @iNumberofRecordsPruned = -1

while (@iNumberofRecordsPruned != 0)
begin

    delete top (@iNumberofRecordsInCycle) tblD
    from [schema-name].[table-name] tblD with (tablock)

    set @iNumberofRecordsPruned = @@rowcount

end
When bringing in data, we use the -h (hint) option and pass in TABLOCK as part of our argument list.
bcp %DBTargetDB%.dbo.tblSales in %TEMP%\dbo.tblSales.data -E -T -n -h"TABLOCK, ORDER (SALE_ID ASC)" -b%BATCH_SIZE% -S%DBTargetServer%
Database Constraints
As we trust our originating system, there is little need to revalidate the data we are bringing in.
We will temporarily suspend constraint checking by issuing “ALTER TABLE [table-name] NOCHECK CONSTRAINT [constraint-name]”.
Disable Constraint Checking:
sqlcmd -e -b -Q"EXEC sp_MSforeachtable @command1='ALTER TABLE ? NOCHECK CONSTRAINT ALL' " -S%DBTargetServer% -d%DBTargetDB%
Resume Constraint Checking:
sqlcmd -e -b -Q"EXEC sp_MSforeachtable @command1='PRINT ''?''; ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL' " -S%DBTargetServer% -d%DBTargetDB%
The resume “Constraint checking” code is quite interesting, for the following reasons:
- There are two CHECKs, “CHECK CHECK”
- The first CHECK (the WITH CHECK clause) validates that existing data meets the constraint criteria
- The second CHECK (of CHECK CONSTRAINT) re-enables the constraint
- As all relevant existing data is checked for conformity, it will likely take a while
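To see the distinction on a single table, here is a sketch; the table and constraint names are purely illustrative:

```sql
-- Illustrative table and constraint names.
-- Disable checking; incoming rows are no longer validated against the FK.
ALTER TABLE [dbo].[tblTranslate] NOCHECK CONSTRAINT [FK_tblTranslate_tblToken];

-- Re-enable WITHOUT validating existing rows; the constraint stays "not trusted".
ALTER TABLE [dbo].[tblTranslate] CHECK CONSTRAINT [FK_tblTranslate_tblToken];

-- Re-enable AND validate existing rows; the constraint becomes trusted again.
ALTER TABLE [dbo].[tblTranslate] WITH CHECK CHECK CONSTRAINT [FK_tblTranslate_tblToken];
```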
If there is nonconforming data, we will see errors such as:
Msg 547, Level 16, State 0, Line 1
The ALTER TABLE statement conflicted with the FOREIGN KEY constraint "FK_tblTranslate_tblToken".
The conflict occurred in database "HRDB", table "dbo.tblToken", column 'pkTokenID'.
Database Triggers
Once implemented, triggers automatically fire during DML operations.
One really needs a good grasp of the code logic embedded within each trigger to determine which ones are needed on our destination system.
To disable all triggers, here is what we did:
sqlcmd -e -b -Q"exec sp_msforeachtable 'ALTER TABLE ? DISABLE TRIGGER all'" -S%DBTargetServer% -d%DBTargetDB%
Once we have copied that data over to our destination, we re-enabled them.
sqlcmd -e -b -Q"exec sp_msforeachtable 'ALTER TABLE ? ENABLE TRIGGER all'" -S%DBTargetServer% -d%DBTargetDB%
Bulk Insert Batch Size
Microsoft’s documentation almost emphatically states that when importing data, one should use a batch size that covers the entire incoming dataset.
I have enough bones to pick with Microsoft without lawyering this one.
And so, unless measurements of your Database Server, client, and concurrent Reporting DB processes demonstrably show otherwise, I will say use a big batch size.
rem trying out 1 million records
bcp %DBTargetDB%.dbo.[tblDevice] in %TEMP%\dbo.tblDevice.data -T -E -n -h"TABLOCK, ORDER (pkDeviceID ASC)" -b%BATCH_SIZE% -S%DBTargetServer%
Sorted Data
Before this review, I would not have thought that sorting would matter.
But, according to Microsoft, it helps to sort the data beforehand and tell the BCP application that the incoming data is pre-sorted.
The Database Engine can then skip the sort step when inserting into the clustered index.
In the example below, using the -h option we combine two hints: the aforementioned TABLOCK explicit locking request and the ORDER hint.
rem trying out 1 million records
bcp %DBTargetDB%.dbo.[tblDevice] in %TEMP%\dbo.tblDevice.data -T -E -n -h "TABLOCK, ORDER (pkDeviceID ASC)" -b%BATCH_SIZE% -S%DBTargetServer%
- In our sample, we referenced our clustering key (pkDeviceID)
- If your table is a heap, that is, it has no clustered index, I will suggest that you:
- Earnestly review your entity physical model and consider adding a clustered index
- I am doubtful that it will be helpful, but consider changing your extract to use “queryout” and adding an “order by [col1], [col2]” to your query
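If you do opt to add a clustered index, a minimal sketch, reusing the tblDevice table from our sample, is:

```sql
-- Assumes [dbo].[tblDevice] is currently a heap and pkDeviceID is its surrogate key.
CREATE CLUSTERED INDEX [IX_tblDevice_pkDeviceID]
    ON [dbo].[tblDevice] ([pkDeviceID] ASC);
```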
Hardware
As always, your immutable problem will likely be hardware.
You need plentiful I/O, Memory, and Network bandwidth.
Network
- Often database servers are hidden deep in the castle
- To gain access to them one has to traverse various network firewalls and proxies
- In multi-homed host set-ups, ensure that traffic has been configured to use the pre-arranged network card
- In essence, use traceroute and network monitoring tools to trace and audit traffic routes
I/O
- I/O is all about network storage
- Are you using Fibre Channel or Ethernet cards & switches?
- Just as in network analysis, is your I/O path sufficiently segregated?
Memory
- One can never have too much
- Having lots of memory lets the system, and oneself, get away with things one would otherwise not get away with
Other areas to consider are:
- Database table partitioning
- Storage layout
- Are your flat files local or on the network?
- If local, do they share the same physical drive as your database files?
- Database file growth
- Are your database files pre-allocated?
- Have you properly sized your auto-growth increments?
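As a sketch of pre-allocation, the data and log files can be pre-sized with fixed growth increments; the database name, logical file names, and sizes below are placeholders:

```sql
-- Placeholder database name, logical file names, and sizes.
ALTER DATABASE [ReportingDB]
    MODIFY FILE (NAME = N'ReportingDB_Data', SIZE = 50GB, FILEGROWTH = 1GB);

ALTER DATABASE [ReportingDB]
    MODIFY FILE (NAME = N'ReportingDB_Log', SIZE = 10GB, FILEGROWTH = 1GB);
```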
Review Database Resource Locks
SELECT
      t1.request_session_id
    -- , t1.resource_database_id
    , databaseName = db_name(t1.resource_database_id)
    -- , t1.resource_associated_entity_id
    , objectName =
        CASE
            WHEN resource_type = 'OBJECT'
                THEN object_name(t1.resource_associated_entity_id)
            WHEN resource_type IN ('DATABASE', 'FILE', 'METADATA')
                THEN 'N/A'
            WHEN resource_type IN ('KEY', 'PAGE', 'RID') THEN
                (
                    SELECT object_name(tblSP_Inner.[object_id])
                    FROM sys.partitions tblSP_Inner
                    WHERE tblSP_Inner.hobt_id = t1.resource_associated_entity_id
                )
            ELSE 'N/A'
        END
    , t1.resource_type
, CASE t1.REQUEST_MODE
WHEN 'S' THEN 'Shared'
WHEN 'U' THEN 'Update'
WHEN 'X' THEN 'Exclusive'
WHEN 'IS' THEN 'Intent Shared'
WHEN 'IU' THEN 'Intent Update'
WHEN 'IX' THEN 'Intent Exclusive'
WHEN 'SIU' THEN 'Shared Intent Update'
WHEN 'SIX' THEN 'Shared Intent Exclusive'
WHEN 'UIX' THEN 'Update Intent Exclusive'
WHEN 'BU' THEN 'Bulk Update'
WHEN 'RangeS_S' THEN 'Shared Range S'
WHEN 'RangeS_U' THEN 'Shared Range U'
WHEN 'RangeI_N' THEN 'Insert Range'
WHEN 'RangeI_S' THEN 'Insert Range S'
WHEN 'RangeI_U' THEN 'Insert Range U'
WHEN 'RangeI_X' THEN 'Insert Range X'
WHEN 'RangeX_S' THEN 'Exclusive range S'
WHEN 'RangeX_U' THEN 'Exclusive range U'
WHEN 'RangeX_X' THEN 'Exclusive range X'
WHEN 'SCH-M' THEN 'Schema-Modification'
WHEN 'SCH-S' THEN 'Schema-Stability'
END AS REQUEST_LOCK_MODE
FROM sys.dm_tran_locks as t1
LEFT OUTER JOIN sys.dm_os_waiting_tasks as t2
ON t1.lock_owner_address = t2.resource_address
where t1.resource_database_id = db_id()
Here is what happens when the table is locked.
Constraints – Foreign Key – Review
Here we look for untrusted foreign key constraints.
SELECT
      objectName = object_name(tblFK.parent_object_id)
, objectReferenced = object_name(tblFK.referenced_object_id)
, [name] = tblFK.name
, isDisabled = tblFK.is_disabled
, isNotTrusted = tblFK.is_not_trusted
, isNotForReplication = tblFK.is_not_for_replication
FROM sys.foreign_keys tblFK
Review Network Connections
SELECT
      c.session_id, c.net_transport, c.encrypt_option
, c.auth_scheme, s.host_name, s.program_name
FROM sys.dm_exec_connections AS c
JOIN sys.dm_exec_sessions AS s
ON c.session_id = s.session_id
LEFT OUTER JOIN sys.dm_exec_requests r
ON c.session_id = r.session_id
Here is what our connection looks like when we use SQLCMD.exe to remove existing rows:
Here is what our connection looks like when we use BCP.exe to import new rows:
- Our transport layer is Shared Memory; as we are running bcp from the same host as our DB Server, we are able to use the fastest available communication protocol
- Unfortunately, no encryption
- Authentication scheme is NTLM, not as good as Kerberos
- SQLCMD is using OLE DB, whereas BCP is using ODBC
Follow Up Tasks
- Index Maintenance – Defragmentation
- Clustered Indexes – It is unlikely that your Clustered Index will be fragmented, especially if your data file is sorted based on your Clustered Index; the default mode
- Nonclustered Indexes – Not so fast with your Nonclustered Indexes; personally I use Ola Hallengren’s wonderful gift, SQL Server Index and Statistics Maintenance, which is available @ https://ola.hallengren.com/sql-server-index-and-statistics-maintenance.html
- Index Statistics
- Please Update your Statistics, as well
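If you prefer native commands over Ola’s scripts, here is a minimal sketch, reusing our sample tblDevice table with a hypothetical nonclustered index name:

```sql
-- Hypothetical nonclustered index name on our sample table.
-- Rebuild when fragmentation is heavy; reorganize when it is modest.
ALTER INDEX [IX_tblDevice_DeviceName] ON [dbo].[tblDevice] REBUILD;
-- ALTER INDEX [IX_tblDevice_DeviceName] ON [dbo].[tblDevice] REORGANIZE;

-- Refresh statistics after the bulk load.
UPDATE STATISTICS [dbo].[tblDevice] WITH FULLSCAN;
```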
Monitoring your SQL Instance, System, Network, and Storage will yield new insights as to where your bottlenecks are.
Like Ron, whom I met yesterday on my power lunch walk, said to me: it is the little things we miss…
That’s Why I’m Here
Composers: Mark Alan Springer, Shaye Smith
Producers: Buddy Cannon, Norro Wilson
Song by: Kenny Chesney