|

SQL Server and Other Topics


I got my start with SQL Server in 1997 with SQL Server 6.5 at BellSouth Cellular. From that point on I've had the opportunity to work on Versions 6.0 to 2014. I specialize in Performance Tuning, High Availability and SQL Development. Contact me through this site or through my Twitter handle @SQLDiver


Ed Watson and I have been presenting a tag team performance tuning session for over a year now at local SQLSaturdays. We've had the pleasure of presenting the subject in multiple formats from 1 hour (rapid fire taste) to 8 hours as a precon session.

You probably already know who I am if you're reading my blog; in short, I am a Linchpin People teammate and an independent consultant with almost 20 years of SQL Server experience. My forte is performance tuning for large enterprise corporations with serious performance issues.

Ed Watson is a Microsoft MVP, and is a consultant in his day job. Ed has more than 15 years in IT, and has been a senior Production DBA for many years... you probably know him best as @SQLGator and can find him blogging at the SQLSwampland... yes he's a Florida Gator... don't hold it against him (I try not to).

On September 25th, 2015, Ed Watson (b|t) and I will be presenting the session "Getting the New DBA Up to Speed with Performance Tuning" as a full 8-hour class.

This session is for those guys or gals who want to become a DBA, who are "accidental DBAs", or who are DBAs with a little experience wanting to add some knowledge to their performance tuning playbook.

In this session we'll start at the beginning of the process, building the foundation and following best practices for a highly tuned or tune-able environment. We will cover how to configure the OS and database server to best take advantage of features built into the OS and SQL Server for peak performance. We'll dig into some of the database server configurations that affect performance, such as "Optimize for Ad Hoc Workloads", MAXDOP, and "Cost Threshold for Parallelism".
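For reference, all three of those instance-level settings are exposed through sp_configure. Here's a rough sketch; the values shown are illustrative only, not recommendations for your hardware or workload:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Example values only; pick MAXDOP and cost threshold based on your own CPUs and workload
EXEC sp_configure 'optimize for ad hoc workloads', 1;
EXEC sp_configure 'max degree of parallelism', 8;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;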

We'll talk about TempDB optimizations, why they make a difference, and how to find the correct number of data files.
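As a taste of that discussion, one quick sanity check (a sketch; one data file per logical CPU, up to eight, is only a commonly cited starting point) is to compare the tempdb data file count to the logical CPU count:

-- How many tempdb data files do we have?
SELECT COUNT(*) AS tempdb_data_files
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';

-- How many logical CPUs does the server have?
SELECT cpu_count AS logical_cpus
FROM sys.dm_os_sys_info;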

We'll cover benchmarking basics and how to tell when your database performance is suffering compared to historical expectations.

Once you have the foundation needed to build a stable environment, we'll look at some of the top performance issues and how to identify them when you've had a complaint about performance. Using some of the free tools that come with SQL Server, such as Extended Events, we'll teach you how to capture what is causing slowness in the database for many of the issues a DBA encounters.
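As a small taste, here's a minimal Extended Events sketch (the session name and the five-second threshold are just examples) that captures statements running longer than five seconds:

-- Illustrative only: capture statements that run longer than 5 seconds
CREATE EVENT SESSION [LongRunningQueries] ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.database_name)
    WHERE duration > 5000000   -- duration is reported in microseconds
)
ADD TARGET package0.event_file (SET filename = N'LongRunningQueries.xel');
GO
ALTER EVENT SESSION [LongRunningQueries] ON SERVER STATE = START;
GO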

We'll spend more than an hour going over execution plans, from beginning concepts to some advanced topics (don't worry, we'll make sure we don't go too far). We'll make sure you know where to get more information from some of our performance tuning heroes!

But wait, there's more... we'll share with you some of our favorite scripts that we've collected over time that will give you a little bit of an edge next time someone calls and says, "everything is slow", help!!

All of this for an extremely low price of $150... that is the sum total of $18.75 per hour!!

 We can't wait to see you at "Getting the New DBA Up to Speed with Performance Tuning"

SQL Server Innovators Guild (ssig)
Friday, September 25, 2015 from 8:30 AM to 5:00 PM (EDT)
Spartanburg, SC

Register Today

Friday I was notified that my session "Execution Plans for Mere Mortals" was chosen for PASS Summit 2015. I've been presenting this session for over a year at many SQLSaturdays, and expect to present it and fine-tune it at 3-4 more SQLSaturdays prior to Summit.

This will be my first time presenting at a PASS Summit, so I'm very excited and humbled to be on the speaker list.

Session Details

 


Once again, I served on the organizing team. I decided to spread my time across three different teams as backup lead: precons, speaker selection, and swag. I also assisted the printing team, since I had been its lead two years in a row and two new members of the organizing team were now running it.

This year's theme was Back to the Future, as this was the 30th anniversary of the first Back to the Future movie and 2015 was the year featured in the sequel.

The Atlanta team always tries to have a theme that will be fun for the attendees, volunteers, and speakers. By adding fun to the formula, we feel it is easier to unify the team working on the "project", and it's also a good marketing ploy to bring in new attendees.

This year, like last year, I designed the event pin. We have gotten into the habit of creating pins for every event for all of the speakers and attendees as a memento of the event. They turned out awesome.

 

This year we "stole" a great idea from SQLSaturday #363 - Nashville. They created a special name badge for the speakers that was very cool. I designed a badge that was very theme related:

The event was one of our smoothest yet, although, like each of the SQLSaturdays we've hosted, we strained a bit under the number of attendees. We might have to move to a new venue next year due to the capacity limits of the one we've been using.

 The after party was a huge success as always. Rob Volk, the lead, decided to reenact the prom scene from the first Back to the Future movie, from hiring a band dressed in tuxes to the "Enchantment Under the Sea" banners.

We had many speakers and attendees show up to socialize. I met quite a few new people and finally met some people I've been friends with online but had never met in person.

 This year we had a visit from Marty McFly!

 


I'm at a client that uses SQL Server Database Mirroring for their high availability solution. The principal and mirror for the databases happen to have their data files on two different drive locations (i.e. they don't match). Why is this important when you create a file? Easy: the CREATE FILE command is a transaction that is sent to the mirror.

 Run the CREATE FILE T-SQL or add a file from SSMS and you'll have an ugly situation on your hands.
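For illustration, the statement in question looked something like this (the database name here is made up; the path and file name match the error log below). It runs fine on the principal, where E:\SQLData exists, but that path doesn't exist on the mirror:

-- Run on the principal; E:\SQLData does not exist on the mirror server
ALTER DATABASE [MirroredDb]
ADD FILE (
    NAME = N'StupidFileBrokeMe',
    FILENAME = N'E:\SQLData\StupidFileBrokeMe.ndf',
    SIZE = 200MB
);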

Suspended? What is this suspended status about? Oh no, you've just made a mess.
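A quick way to see the damage (a sketch; run it on either partner) is to check the mirroring state:

-- mirroring_state_desc will show SUSPENDED for the affected database
SELECT DB_NAME(database_id) AS database_name,
       mirroring_role_desc,
       mirroring_state_desc
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;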

 Error log from the mirror server:

2015-04-08 12:29:18.94 spid30s     CREATE FILE encountered operating system error 3(The system cannot find the path specified.) while attempting to open or create the physical file 'E:\SQLData\StupidFileBrokeMe.ndf'.
2015-04-08 12:29:18.94 spid30s     Error: 5123, Severity: 16, State: 1.
2015-04-08 12:29:18.94 spid30s     CREATE FILE encountered operating system error 3(The system cannot find the path specified.) while attempting to open or create the physical file 'E:\SQLData\StupidFileBrokeMe.ndf'.
2015-04-08 12:29:18.94 spid30s     Error: 1454, Severity: 16, State: 1.

So lesson one when adding a data file: know where your data files are on both the principal and the mirror. If you've gotten to this point, you're going to have to remove the mirror and re-add it from the beginning. Not only that, but it's going to require a SQL Server service restart. You can't drop the mirror, you can't bring it online, you can't put it into emergency mode... you're going to have to trick the system. I actually had to shut down the service and delete the data files; when the service started back up I was able to drop the database. It turned out to be a bigger pain than I thought it would be just to demo it for you. Thank goodness I didn't have this happen to me in production.

So, the client was smart enough to already know the data files were on different drive letters and paths. Bonus! Now we can really work some magic.

So, let's look at what really needs to happen and save ourselves the effort of dropping the mirror and re-adding it from scratch. I certainly don't want the production environment unprotected during the massive copy of the backup across the WAN to the DR site.

The process is as follows:

  1. Remove Mirror
  2. Add File on "Principal"
  3. Backup Transaction Log
  4. Restore Transaction Log to "Mirror" copy WITH MOVE to the new location of the data files.
  5. Add the mirror partner and witness (if it exists)
  6. Set safety off if asynchronous.

So easy right? Well... the hard part is making sure every step is done correctly... time to look for that old tool... you know the one you may never have used because you didn't quite understand it. Ever used SQLCMD mode in SSMS? Yes, this is the magic.

Like many of you DBAs out there, it took me years to see SQLCMD for the tool it really is. Getting others to find the magic has taken almost as long.

I recommended it as a tool to implement the data file addition, and the DBA got a look in his eyes, almost a fear that it would take too long to learn. 

I whipped out my cape and pulled out the old wooden toolbox, and started digging through the tools... and buried deep in the bottom was this bright, shiny tool that was smooth, balanced, and comfortable to use. It was also very simple to use.

 

For SQLCMD, start with the easy basic commands:

:CONNECT INSTANCENAME

:SETVAR var "variable value"

There is enough to munch on for years in just those two SQLCMD commands. Look how simple the script below is, yet how much easier it is than writing multiple scripts connected to several servers. We can change our connection back and forth between the principal and mirror servers (don't forget your GOs).

 

 

  
 :SETVAR db "TST_SWITCH"	 
 
:SETVAR prisrv "SERVER01" 
:SETVAR mirsrv "SERVER02" 
 
:SETVAR filename "TST_SWITCH_NEW_FILE" 
:SETVAR filegroup "JOB" 
:SETVAR phyfilename "TST_SWITCH_NEW_FILE.ndf" 
:SETVAR priloc "E:\SQLData" 
:SETVAR mirloc "C:\SQLServer\Data" 
:SETVAR backuploc "\\server02\Backups\TST_SWITCH.trn" 
:SETVAR filesize "204800KB" 
:SETVAR filegrowth "204800KB" 
-- Endpoints: $(mir) is the principal's endpoint (set as PARTNER on the mirror),
-- $(pri) is the mirror's endpoint (set as PARTNER on the principal), $(wit) is the witness.
:SETVAR mir "TCP://SERVER01.domain.com:5022" 
:SETVAR pri "TCP://SERVER02.domain.com:5022" 
:SETVAR wit "TCP://SERVER03.domain.com:5022" 
-- ALTER DATABASE ADD FILEGROUP [Job] 
 
 
-- Remove Mirror, Add Data File, backup logfile 
:CONNECT $(prisrv) 
USE [$(db)] 
ALTER DATABASE [$(db)] ADD FILEGROUP [$(filegroup)] 
GO 
 
 
ALTER DATABASE [$(db)] SET PARTNER OFF 
GO 
ALTER DATABASE [$(db)]  
ADD FILE (  
NAME = N'$(filename)',  
FILENAME = N'$(priloc)\$(phyfilename)' ,  
SIZE = $(filesize) ,  
FILEGROWTH = $(filegrowth) )  
TO FILEGROUP [$(filegroup)] 
GO 
BACKUP LOG [$(db)] TO DISK = '$(backuploc)' WITH INIT 
GO 
 
:CONNECT $(mirsrv) 
 
RESTORE LOG [$(db)]  
FROM DISK = '$(backuploc)' 
WITH NORECOVERY, 
MOVE '$(filename)' to '$(mirloc)\$(phyfilename)' 
GO 
 
 
:CONNECT $(mirsrv) 
ALTER DATABASE [$(db)] SET PARTNER = '$(mir)' 
GO 
 
:CONNECT $(prisrv) 
ALTER DATABASE [$(db)] SET PARTNER = '$(pri)' 
ALTER DATABASE [$(db)] SET WITNESS = '$(wit)' 
GO 

 

The key to this script is the ability to test it in another environment to make sure it works. Be prepared to make mistakes: set up your test environment by creating a database and configuring mirroring. It doesn't have to have data in it. Just make sure the data files are on different file paths.

If you tried to do this with the SSMS GUI, it would be easy to forget a step, run a step out of order, or get confused about where you are. With SQLCMD, it's all about the variables; the T-SQL code doesn't change.

The best part: after helping the DBA write a script with SQLCMD to get comfortable with it, I gave him my script with a SETVAR for each of the variables that would change. He was able to test it multiple times in a lower environment. When he was comfortable, he modified the parameters to match production... and at 5:30 AM (while I was sleeping), it ran flawlessly. It ran flawlessly for several reasons: 1) it was tested many times, 2) he was comfortable with the process, and 3) he was very careful to use the correct variables.

 

 Summary

I covered the simplicity of SQLCMD and why it is a great tool for processes that need precise coordination and timing across multiple servers. It is important to push your comfort level to make your life easier. Most DBAs have more work than they can handle (it's why I'm a consultant), so make your life easier and learn SQLCMD today.

I used the example of adding a new data file to a mirrored database where the principal and mirror used different drive paths for the data files. This script simplifies the implementation of adding a new data file to principal and mirror.

 

 


SQL Server Database Mirroring

Yes, mirroring is a deprecated feature now that AlwaysOn Availability Groups exist. I needed to test my process of moving from mirroring to AGs over and over, so I needed a quick way to implement Database Mirroring, tear it down, and start over.

The process actually involves setting up the Mirror, then creating log shipping from the mirror across a slow WAN. That article will come out some time in the near future.

This took some research because it was my first real foray into PowerShell. I found a blog post by John P. Wood, "Using Powershell to Set Up SQL Server Mirroring", and bastardized it: I rewrote most of the script and kept some important steps.

 

* Warning - I'm not a PowerShell guru; this code can be written in different ways, and some of them may be better. Let me know where improvements can be made.

Script

The script pulls the needed variables from the csv file named in $csvFilename, which is passed into the script.

 

 
 Param( 
    [Parameter(Mandatory=$true)] 
    [string]$csvFilename 
) 
 

The csv file has the following columns, shown here with a sample row:

database,SourceServer,SourcePath,DestServer,DestPath,Domain,Mode,DestDataFilePath,DestLogFilePath,Witness,Timeout,BreakMirror,FileName
QA_PSW_RC_NY_SWITCH,Q-MSSQL01-LB,\\Q-MSSQL01-LB\Backup\,Q-MSSQL11-LB,\\Q-MSSQL01-LB\Backup\,domain.com,complete,C:\SQLServer\Data,C:\SQLServer\Logs,QA-MSSQL03-NY,30,true,

Functions

Function Get-FileName { 
    Param([string]$path) 
    $names = $path.Split('\\') 
    $names[$names.Count - 1] 
}

 I'm sure there is a better way to do this, I got this from someone else.

 

Function New-SMOconnection { 
    Param ( 
        [string]$server 
    ) 
    $conn = New-Object Microsoft.SqlServer.Management.Common.ServerConnection($server) 
    $conn.applicationName = "PowerShell SMO" 
    $conn.StatementTimeout = 0 
    $conn.Connect() 
    if ($conn.IsOpen -eq $false) { 
        Throw "Could not connect to server $($server) for database backup of $($dbname)." 
    } 
    $smo = New-Object Microsoft.SqlServer.Management.Smo.Server($conn) 
    $smo 
}

Function Invoke-SqlBackup { 
    Write-Host "Performing Backup of"$database 
    $dbbk = new-object ('Microsoft.SqlServer.Management.Smo.Backup') 
    $dbbk.Action = [Microsoft.SqlServer.Management.Smo.BackupActionType]::Database 
    $dbbk.BackupSetDescription = "Full backup of $database"  
    $dbbk.BackupSetName = $database + " Backup" 
    $dbbk.Database = $database 
    $dbbk.MediaDescription = "Disk" 
    Write-Host "$SourcePath\$bkpfile" 
    $device = "$SourcePath\$bkpfile" 
    $dbbk.Devices.AddDevice($device, 'File') 
    $smo = New-SMOconnection -server $SourceServer 
    Try { 
        $dbbk.SqlBackup($smo) 
        $dbbk.Action = [Microsoft.SqlServer.Management.Smo.BackupActionType]::Log 
        $dbbk.SqlBackup($smo) 
        $smo.ConnectionContext.Disconnect() 
    } 
    Catch { 
        $ex = $_.Exception 
        Write-Output $ex.message 
        $ex = $ex.InnerException 
        while ($ex.InnerException) 
        { 
            Write-Output $ex.InnerException.message 
            $ex = $ex.InnerException 
        }; 
        continue 
    } 
    Finally { 
        if ($smo.ConnectionContext.IsOpen -eq $true) { 
            $smo.ConnectionContext.Disconnect() 
        } 
    } 
}

 

Function Invoke-SqlRestore { 
    Param( 
        [string]$filename, 
        [string]$DestDataFilePath, 
        [string]$DestLogFilePath 
    ) 
    # Get a new connection to the server 
    $smo = New-SMOconnection -server $DestServer 
    $backupDevice = New-Object("Microsoft.SqlServer.Management.Smo.BackupDeviceItem") ($filename, "File") 
    # Get local paths to the Database and Log file locations 
 
    #Set Data File Path 
    If ($DestDataFilePath.Length -gt 1){ 
        $DBPath=$DestDataFilePath         
    } 
    ElseIf ($smo.Settings.DefaultFile.Length -eq 0) {$DBPath = $smo.Information.MasterDBPath } 
    Else { $DBPath = $smo.Settings.DefaultFile} 
 
    #Set Log File Path 
    If ($DestLogFilePath.Length -gt 1){ 
        $DBLogPath=$DestLogFilePath 
    } 
    ElseIf ($smo.Settings.DefaultLog.Length -eq 0 ) {$DBLogPath = $smo.Information.MasterDBLogPath } 
    Else { $DBLogPath = $smo.Settings.DefaultLog} 
  
    $backupFile=$DestPath+"\"+$filename 
 
    # Load up the Restore object settings 
    Write-Host "Preparing Restore" 
    $Restore = new-object Microsoft.SqlServer.Management.Smo.Restore 
    $Restore.Action = 'Database' 
    $Restore.Database = $database 
    $Restore.ReplaceDatabase = $true 
    $Restore.NoRecovery = $true 
    $Restore.Devices.AddDevice($backupFile,[Microsoft.SqlServer.Management.Smo.DeviceType]::File) 
  
    # Get information from the backup file 
    $RestoreDetails = $Restore.ReadBackupHeader($smo) 
    $DataFiles = $Restore.ReadFileList($smo) 
 
    # Restore all backup files 
    #Write-Host "Restoring all data files" 
    ForEach ($DataRow in $DataFiles) { 
 
        $LogicalName = $DataRow.LogicalName 
        $PhysicalName = Get-FileName -path $DataRow.PhysicalName 
         
        $RestoreData = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile") 
        $RestoreData.LogicalFileName = $LogicalName 
        if ($DataRow.Type -eq "D") { 
            # Restore Data file 
            Write-Host 'Restoring Data File' $DBPath"\"$PhysicalName 
            $RestoreData.PhysicalFileName = $DBPath + "\" + $PhysicalName 
        } 
        Else { 
            # Restore Log file 
            Write-Host 'Restoring Log File' $DBPath"\"$PhysicalName 
            $RestoreData.PhysicalFileName = $DBLogPath + "\" + $PhysicalName 
        } 
        [Void]$Restore.RelocateFiles.Add($RestoreData) 
  
    } 
    Try { 
    $Restore.FileNumber = 1 
    $Restore.SqlRestore($smo) 
    # If there are two files, assume the next is a Log 
    if ($RestoreDetails.Rows.Count -gt 1) { 
        $Restore.Action = [Microsoft.SqlServer.Management.Smo.RestoreActionType]::Log 
        $Restore.FileNumber = 2 
        $Restore.SqlRestore($smo) 
    } 
        $smo.ConnectionContext.Disconnect() 
    } 
    Catch { 
        $ex = $_.Exception 
        Write-Output $ex.message 
        $ex = $ex.InnerException 
        while ($ex.InnerException) 
        { 
            Write-Output $ex.InnerException.message 
            $ex = $ex.InnerException 
        }; 
        continue 
    } 
    Finally { 
        if ($smo.ConnectionContext.IsOpen -eq $true) { 
            $smo.ConnectionContext.Disconnect() 
        } 
    } 
}

 

Function Set-Mirror { 
    Param([string]$server,[string]$database,[string]$partner) 
    $conn = "Server=$server; Integrated Security=SSPI; Database=Master" 
    $cn = New-Object "System.Data.SqlClient.SqlConnection" $conn 
    $cn.Open() 
    $cmd = New-Object "System.Data.SqlClient.SqlCommand" 
    $cmd.CommandType = [System.Data.CommandType]::Text 
  
    $cmd.CommandText = "ALTER DATABASE $database SET PARTNER = 'TCP://" + $partner + "." + $Domain + ":$Port'" 
    $cmd.Connection = $cn 
     
 
 
 
    try { 
      $cmd.ExecuteNonQuery() 
      } 
    catch [System.Net.WebException] { 
      # $_ is set to the ErrorRecord of the exception 
      if ($_.Exception.InnerException) { 
        Out-Log $_.Exception.InnerException.Message 
      } else { 
        Out-Log $_.Exception.Message 
      } 
    } 
     
    $cn.Close() 
 
}

  
Function Set-Witness { 
    Param([string]$server,[string]$database,[string]$witness) 
    $conn = "Server=$server; Integrated Security=SSPI; Database=Master" 
    $cn = New-Object "System.Data.SqlClient.SqlConnection" $conn 
    $cn.Open() 
    $cmd = New-Object "System.Data.SqlClient.SqlCommand" 
    $cmd.CommandType = [System.Data.CommandType]::Text 
  
    $cmd.CommandText = "ALTER DATABASE $database SET WITNESS = '$Witness'" 
    Write-Host $cmd.CommandText 
    $cmd.Connection = $cn 
 
    try { 
      $cmd.ExecuteNonQuery() 
      } 
    catch [System.Net.WebException] { 
      # $_ is set to the ErrorRecord of the exception 
      if ($_.Exception.InnerException) { 
        Out-Log $_.Exception.InnerException.Message 
      } else { 
        Out-Log $_.Exception.Message 
      } 
    } 
     
    $cn.Close() 
}

 

Function Set-Timeout { 
    Param([string]$server,[string]$database,[string]$timeout) 
    $conn = "Server=$server; Integrated Security=SSPI; Database=Master" 
    $cn = New-Object "System.Data.SqlClient.SqlConnection" $conn 
    $cn.Open() 
    $cmd = New-Object "System.Data.SqlClient.SqlCommand" 
    $cmd.CommandType = [System.Data.CommandType]::Text 
 
    $cmd.CommandText = "ALTER DATABASE $database SET PARTNER TIMEOUT "+$Timeout 
    $cmd.Connection = $cn 
    try { 
      $cmd.ExecuteNonQuery() 
      } 
    catch [System.Net.WebException] { 
      # $_ is set to the ErrorRecord of the exception 
      if ($_.Exception.InnerException) { 
        Out-Log $_.Exception.InnerException.Message 
      } else { 
        Out-Log $_.Exception.Message 
      } 
    } 
     
    $cn.Close() 
}

Function Break-Mirror { 
    Param([string]$server,[string]$database) 
    Write-Host "Breaking Existing Mirror" 
    $conn = "Server=$server; Integrated Security=SSPI; Database=Master" 
    $cn = New-Object "System.Data.SqlClient.SqlConnection" $conn 
    $cn.Open() 
    $cmd = New-Object "System.Data.SqlClient.SqlCommand" 
    $cmd.CommandType = [System.Data.CommandType]::Text 
 
    $cmd.CommandText = "ALTER DATABASE $database SET PARTNER OFF " 
    $cmd.Connection = $cn 
    try { 
      $cmd.ExecuteNonQuery() 
      } 
    catch [System.Net.WebException] { 
      # $_ is set to the ErrorRecord of the exception 
      if ($_.Exception.InnerException) { 
        Out-Log $_.Exception.InnerException.Message 
      } else { 
        Out-Log $_.Exception.Message 
      } 
    } 
     
    $cn.Close() 
}

 

The Process - The loop

This is where the magic occurs. The variables are populated from the csv file that will be used to pass to the functions for each step that needs to be completed to instantiate the mirror.
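One thing not shown in the listing is where $ServerList comes from; presumably the csv is loaded right after the Param block, with something like the line below.

# Assumed setup (not shown in the original excerpt): load the csv rows into $ServerList
$ServerList = Import-Csv -Path $csvFilename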

 FOREACH ($Server in $ServerList) { 
    [string]$database=$Server.database 
    [string]$SourceServer=$Server.SourceServer 
    [string]$SourcePath=$Server.SourcePath 
    [string]$DestServer=$Server.DestServer 
    [string]$DestPath=$Server.DestPath 
    [string]$Port=$Server.Port 
    [string]$Domain=$Server.Domain 
    [string]$Mode=$Server.Mode 
    [string]$FileName=$Server.FileName 
    [string]$Witness=$Server.Witness 
    [string]$Timeout=$Server.Timeout 
    [string]$BreakMirror=$Server.BreakMirror 
    [string]$DestDataFilePath = $Server.DestDataFilePath 
    [string]$DestLogFilePath = $Server.DestLogFilePath

 

 
    if ($mode -eq 'Restore'){ 
    $bkpfile =$FileName 
    } 
    else { 
    $bkpfile = $($SourceServer.Replace("\", "$")) + "_" + $database + "_FULL_" + $(get-date -format yyyyMMdd-HHmmss) + ".bak" 
    } 
 
 
    # If a mirror exists on the source database, remove the mirror if specified in the csv file. 
    # This is mostly for testing, so you can rerun the process over and over. 
    if ($BreakMirror -eq 'True'){ 
    Break-Mirror -server $SourceServer -database $database 
    } 
    # This step creates the backup and, if it is the Backup-only step, tells you where to put the 
    # backup based upon the csv data. This is for a large file to be copied across the slow WAN. 
    if ($mode -eq 'Backup') { 
        Invoke-SqlBackup 
        Write-Host "Copy your backups from: $SourcePath" 
        Write-Host "Put your backups here: $DestPath on Server: $DestServer" 
        Write-Host "Put your log backups here: "$DestPath\$database"_LOG on Server: $DestServer" 
        } 
    Elseif ($mode -eq 'Complete') {Invoke-SqlBackup} 
 
    if ($mode -ne 'Backup'){ 
 
        if ($mode -eq 'Complete') { 
            if($SourcePath -ne $DestPath){ 
                Copy-Item $(Join-Path $srcUNC $bkpfile) -Destination $destUNC -Verbose 
            } 
     
        } 
        # This step restores the backup to the new location. 
        Invoke-SqlRestore -filename $bkpfile -DestDataFilePath $DestDataFilePath -DestLogFilePath $DestLogFilePath 
   
        # This step sets the mirror on both mirror and principal. 
        if ($mode -eq 'Complete') { 
            # Establish Mirroring from the mirrored database 
            Set-Mirror -server $DestServer -database $database -partner $($SourceServer.Split('\\')[0]) 
            # Start the mirror 
            Set-Mirror -server $SourceServer -database $database -partner $($DestServer.Split('\\')[0]) 
         
            # Add the witness if specified in the csv file. 
            if ([string]::IsNullOrEmpty($Witness)){ 
                Write-Host "Witness parameter not provided. Configure manually" 
                Write-Host "ALTER DATABASE $database SET WITNESS = [put witness name here]" 
                } 
                else 
                { 
                Write-Host 'Adding Witness' 
                $WitnessFile="TCP://"+$Witness+"."+$Domain+":"+$Port 
                Set-Witness -server $SourceServer -database $database -witness $WitnessFile 
                } 
                # For a mirroring configuration across a WAN, you'll want to modify the Timeout value 
                # (default of 10 seconds) to something greater. I start at 30 seconds based upon the network latency. 
                if ([string]::IsNullOrEmpty($Timeout)){ 
                Write-Host "Timeout parameter not provided. Default (10) set." 
                } 
                else 
                { 
                Set-Timeout -server $SourceServer -database $database -timeout $Timeout 
                } 
             
            } 
            else 
            { 
            Write-Host $SourceServer 
            $DestServTrim=$($DestServer.Split('\\')[0]).Trim() 
            Write-Host 'Backup Tail of Log and copy all log backups to destination server and restore.' 
            Write-Host "-- After logs have been restored, run:" 
            Write-Host "-- ALTER DATABASE [$database] SET PARTNER = 'TCP://$DestServTrim.${Domain}:$Port'" 
            Write-Host "TCP://"$Witness"."$Domain":"$Port 
         
 
                 
            } 
        } 
 
}

 

 

 Sample csv File

 PowerShell Mirroring Script

 


Part I | Part II

Part II of "Move What? No downtime? What?"

 Log Shipping Configuration

In this part of the blog article, we'll configure log shipping to the two instances that will become the principal and mirror. Since this is for testing, the step can be completed through the wizard, but I recommend doing it exactly as it will be done in production so you get the hang of the backup, copy, and restore process, which across a slow WAN could take many hours or possibly even days.

 

Configure Folders for Log Shipping Backups

In order to avoid as much confusion as possible, create a folder for each database specific to the log shipping process. If you don't have folders set up, you're going to run into issues.

On each server create an empty folder for the log shipping and a folder for each database inside the log shipping folder.

 

These folders will be the home for the database log backups during the migration process. The current log backup jobs will need to be disabled during this process to ensure log backups are not missed. The log shipping restore jobs will restore the log backups from this location across the WAN.

For the log shipping full backup, create a folder for each database on both servers where the secondaries will reside. The backups will be copied to each location and restored. This is to simulate a large backup file backup, copy and restore across a slow WAN.

Q-MSSQL01-LB

 

Q-MSSQL11-LB

 

Naming Conventions of Logical File Name

So that future scripts work as expected (they assume a standard naming convention), make sure your logical file names follow the same convention. The following script will help with this step: replace <<DatabaseName>> with the name of your database, and it will generate a script you can run against the database. You'll have to be creative if you have more than one data file. Remember, this is just for testing.

 

SET NOCOUNT ON

SELECT
'ALTER DATABASE ' + DB_NAME(database_id) + '
MODIFY FILE (NAME = ' + name + ', NEWNAME = ' + DB_NAME(database_id) + ') '
FROM sys.master_files
WHERE database_id = DB_ID('<<DatabaseName>>')
AND type = 0

SELECT
'ALTER DATABASE ' + DB_NAME(database_id) + '
MODIFY FILE (NAME = ' + name + ', NEWNAME = ' + DB_NAME(database_id) + '_log) '
FROM sys.master_files
WHERE database_id = DB_ID('<<DatabaseName>>')
AND type = 1

 

This is a good spot to take a break. The next step will involve SQLCMD mode.


Overview

I'm at a client, a credit card payment provider with a five-nines (99.999%) availability requirement, and I've been asked to move a set of databases from one physical server to another with the least amount of downtime possible. In addition to this requirement, they have a Recovery Time Objective (RTO) of 15 seconds. Unfortunately, that isn't possible with the environment they have; the good news is they know this and accept the downtime.

This is Phase One of moving to Microsoft SQL Server AlwaysOn Availability Groups.

 

Current State

  1. Two nodes of a Windows Server Failover Clustering (WSFC) cluster on Windows 2008 R2 Enterprise and SQL Server 2008 R2 Enterprise FCI in New York Data Center.
  2. Two nodes of a Windows Server Failover Clustering (WSFC) cluster on Windows 2008 R2 Enterprise and SQL Server 2008 R2 Enterprise FCI in Delaware Data Center.
  3. The databases on the WSFC cluster in NY are configured as a SQL Server Database Mirroring principal mirroring to the DE data center for high availability.

This is already a complex environment in that mirroring is configured as synchronous over the WAN between NY and DE. The move is to a second set of servers already being used by a different set of application databases.

Future State

 One multi-subnet Windows 2012 R2 WSFC cluster across two data centers. The nodes in NY will serve as WSFC FCI active/passive failover partners and the nodes in DE will serve as WSFC FCI active/passive failover partners.

Mirroring will be configured with the principal homed on the active node in NY and the mirror on the active node in DE.

 

Requirements

 Migrate the production databases to an upgraded set of servers with a downtime goal of zero.

 Strategy for Migration

In order to minimize downtime as well as ensure a minimal RTO for the duration of the migration, the following strategy will be used.
Log shipping will be implemented to the two instances that will act as the new principal and mirror. This will allow the database to maintain the RTO while the backups are copied to the DR site in DE.

During the cutover, a tail-log backup will be taken WITH NORECOVERY, leaving the source database in a restoring state so no additional transactions can occur. That backup will be applied to the two log shipping secondary databases, and the secondary that will act as the new principal will then be recovered (brought online).
A log backup will then be taken from the new principal and restored to the secondary that will act as the mirror, and mirroring will immediately be implemented between the new principal and mirror.
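To make the cutover concrete, here is a rough T-SQL sketch of those steps for one of the test databases described below (the tail-log backup file name is made up; the database name and backup share come from the test environment):

-- On the current principal: tail-log backup, leaving the source database in a restoring state
BACKUP LOG [TST_SWITCH]
    TO DISK = N'\\Q-MSSQL01-LB\Backup\TST_SWITCH_tail.trn'
    WITH NORECOVERY;

-- On both log shipping secondaries: apply the tail-log backup
RESTORE LOG [TST_SWITCH]
    FROM DISK = N'\\Q-MSSQL01-LB\Backup\TST_SWITCH_tail.trn'
    WITH NORECOVERY;

-- On the secondary that will act as the new principal: bring it online
RESTORE DATABASE [TST_SWITCH] WITH RECOVERY;

-- Then take a log backup from the new principal, restore it WITH NORECOVERY on the new
-- mirror, and run ALTER DATABASE ... SET PARTNER on both sides to establish mirroring.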

 

The process will be scripted to prevent missed steps and human error, and to keep downtime to a bare minimum.

 Testing Configuration

Because of the necessity to avoid downtime, this will have to be tested thoroughly. The risk for this process is very high, as minutes of downtime would mean a large loss of revenue. The servers and databases we will use for our testing are as follows:

  1. QA-MSSQL03-NY - SQL Server 2008 R2
  2. Q-MSSQL01-LB - SQL Server 2014
  3. Q-MSSQL11-LB - SQL Server 2014
  4. Q-MSSQL12-LB - SQL Server 2014
  5. Q-MSSQL13-LB - SQL Server 2014

Databases to migrate: TST_SWITCH and TST_TRANSACTIONS

Mirror Configuration

We want to set up mirroring on our test databases to simulate the production starting point. In production the databases are in pairs and will be moved two at a time. We'll just use the wizard to configure mirroring; this assumes you have configured mirroring previously and understand the steps. Prior to this step, ensure you have mirroring endpoints configured on each server being used.
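If you still need to create the endpoints, a minimal sketch looks like this (the endpoint name is an example, and 5022 is just the port commonly used for mirroring):

-- Run on each instance participating in the mirroring session
CREATE ENDPOINT [Mirroring]
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = ALL);
GO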

 

 

 


SQLSaturday Session

On January 17, 2015, I presented my "Execution Plans for Mere Mortals" session. The session had 49 attendees, more than the number of desks. Although I had PowerPoint issues (it crashed every time I tried to return from SSMS), it was a good session. Many people came up to the front afterward so I could answer questions. All of the Pluralsight free 30-day trials were gone, and many of my business cards were gone too.

The participation was very good.

Thank you for all who attended and participated!

Here are the slides.

 

Precon Half Day Session

Ed Watson's and my precon session was a success. We had a lot of fun with the people in the class. We had 17 attendees for the "Getting the New DBA Up to Speed with Performance Tuning" half-day session.

Slides for: Getting the New DBA Up to Speed with Performance Tuning


I started off 2014 struggling, spending half of January on the bench looking for my next contract. Being an independent consultant has many huge benefits that I love, but it also comes with the part many people struggle with: finding work.

I'm blessed with many friends in the SQL Server community, which gave me a solid footing for consulting. I jumped into consulting after drowning in the political BS frequently found in enterprise organizations. I had been developing my "side work" contracts, planning on jumping into consulting when it was appropriate. One day in 2013, the Director of HR was reminding me why they refuse to pay out vacation banked by employees when they leave. I reminded him that employees at such companies tend to either use up their vacation and then give notice, or give notice of less than a week because they have nothing to stay for (the vacation pay), so managers lose the transition period they would get at a company that does pay out vacation. I digress. That day, I decided it was time to work for myself; the day before, a good friend of mine, Geoff Hiten (@SQLCraftsman), had called to offer me a 3-month contract through MSC in New York City. It was a tough decision to leave the security of my position, not knowing what the next contract would be... but I felt it was a sign from above.

 

 

 

 


Robert Pearl @pearlknows did a blog post on SQLSaturday and sharing in November. I've been planning on doing the same for quite some time...

Since this is the end of my first year as a SQLSaturday speaker, I want to give my own fresh perspective.

My first event was Jacksonville, FL, where so many existing speakers (Chris Skorlinski, Chad Churchwell, Bradley Balls, Rob Volk, Ed Watson) were so helpful and gave such wonderful advice and feedback. This is what the SQL community, and #SQLFamily, is all about. In January I will be doing my first preconference session with my close buddy Ed Watson.

I've been a volunteer for years at Atlanta SQLSaturdays, serving on the organizing committee three years in a row. Through this I've met so many speakers who nudged me every chance they got to become a speaker myself, since I've been a database professional for so many years that I have a lot of knowledge to share. I've had the pleasure of meeting many of the people whose blogs I follow and have learned the most from, like Denny Cherry, Wes Brown, Adam Machanic, Kalen Delaney, Stacia Misner, Kevin Kline, and many more.

You don't have to have 17+ years of experience in SQL Server to speak at a SQLSaturday; technically, experience is not necessary, just the possession of information people want to hear about. I know many speakers who started out with less than a few years of SQL Server experience but have gained so much knowledge about their topics that they've become experts. As a matter of fact, I've dug into my own topics and found new morsels I was unaware of while authoring my sessions. I like to present to beginners: they're open-minded, ready to learn, and stand to benefit the most from SQLSaturdays.

If you have been thinking about presenting and want some unbiased help, please don't hesitate to contact me; I'm always happy to help!

SQLSaturday Atlanta will be coming up fast next year; I hope we'll see you there!