SQL Server security token error

We had an issue that had been bothering us. A user who got his access through an Active Directory group could not log into SQL Server. The error message was:

“Token-based server access validation failed with an infrastructure error.”

We had his password reset and had him re-added to the groups; we even suspected he might be in too many groups. Nothing worked.

When we dropped the login from SQL Server and added it back, things started working. The root cause turned out to be that the Active Directory administrators had been dropping and recreating the groups, which generates a new SID. The SID stored in SQL Server no longer matched the SID in AD.
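
Here is a minimal T-SQL sketch of the check you can run for this, using a made-up group name. It compares the SID SQL Server has stored for the login against the SID Windows reports for the group; the commented-out drop and recreate at the end is the fix that worked for us.

-- 'DOMAIN\AppUsers' is a placeholder group name.
-- The SID SQL Server stored when the login was created:
SELECT name, sid AS sid_stored_in_sql
FROM sys.server_principals
WHERE name = N'DOMAIN\AppUsers';

-- The SID Windows/AD currently reports for the group:
SELECT SUSER_SID(N'DOMAIN\AppUsers') AS sid_reported_by_ad;

-- If the two SIDs differ, drop the login and add it back to pick up the new SID:
-- DROP LOGIN [DOMAIN\AppUsers];
-- CREATE LOGIN [DOMAIN\AppUsers] FROM WINDOWS;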

So now we’ve told them “Don’t do that!”


Temporal tables in SQL Server 2016

Have you noticed the new temporal tables feature in SQL Server 2016?
https://msdn.microsoft.com/en-us/library/dn935015.aspx

Here is my take on it, for the use case of auditing.

Upsides:

  • You must turn the feature off to add a column, then turn it back on. I consider this an upside, because the problem with using triggers is that the trigger definition goes stale and silently breaks; temporal tables force you to deal with schema changes explicitly.
  • Better than CDC (Change Data Capture), which reads the transaction log, relies on Agent jobs, and duplicates both the new and old values. Because CDC is LSN-based, it is really intended for dumping the changes you care about into a separate table that is easier to query.
  • Better than CT (Change Tracking), which does not track the actual values that changed.
  • The integration with the SELECT syntax is awesome.
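
For example, the point-in-time and full-history queries look like this (a quick sketch against a made-up Address table, the same example I use at the end of this post):

-- Row versions that were current at a given instant:
SELECT AddressID, Street, City
FROM dbo.Address
FOR SYSTEM_TIME AS OF '2016-05-01T00:00:00'
WHERE AddressID = 42;

-- Current and historical versions of one row, together:
SELECT AddressID, Street, City, ValidFrom, ValidTo
FROM dbo.Address
FOR SYSTEM_TIME ALL
WHERE AddressID = 42
ORDER BY ValidFrom;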

 

Downsides:

  • System versioning must be turned off before you can delete records from the history table, so deletes would probably have to happen during a downtime, unless you want to use partitioning. I can see runaway updates exploding the size of the history table, and then you need a downtime to clean everything up.
  • Be careful with LOBs, because they will be duplicated; every column is copied to the history table. CDC might be better if you only want to track some columns.

In summary, it is best used for narrow tables with few schema changes and few updates. I can picture it being perfect for an Address table, for instance: people don't move very often, and the set of columns is predictable.
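
Here is a minimal sketch of what that Address table might look like, along with the turn-it-off-and-back-on workflow described above (all names are illustrative):

CREATE TABLE dbo.Address
(
    AddressID  int          NOT NULL PRIMARY KEY CLUSTERED,
    Street     varchar(100) NOT NULL,
    City       varchar(50)  NOT NULL,
    ValidFrom  datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.AddressHistory));

-- Schema change or history cleanup: turn versioning off, make the change, turn it back on.
ALTER TABLE dbo.Address SET (SYSTEM_VERSIONING = OFF);

ALTER TABLE dbo.Address ADD PostalCode varchar(10) NULL;
ALTER TABLE dbo.AddressHistory ADD PostalCode varchar(10) NULL;
-- DELETE FROM dbo.AddressHistory WHERE ValidTo < '2015-01-01';  -- history purge, if needed

ALTER TABLE dbo.Address SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.AddressHistory));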

 


vWorkspace single use plans

I found that vWorkspace generates a lot of single-use plans. SQL Server is good at flushing them out when the memory is needed, but vWorkspace shares its instances with our client databases, so I'd rather it not take up resources unnecessarily.

On one of our instances, vWorkspace was using 2 GB of single-use plans, and 99.6% of the cached plans had been used only once. That instance has no memory pressure, so the plans just stayed in the cache.
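
If you want to check your own instances, a query along these lines against the plan cache DMV shows the count and size of single-use ad hoc plans (a rough sketch, not the exact script I used for the numbers above):

SELECT COUNT(*) AS single_use_plans,
       SUM(CAST(size_in_bytes AS bigint)) / 1048576 AS single_use_mb
FROM sys.dm_exec_cached_plans
WHERE usecounts = 1
  AND objtype = 'Adhoc';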

The query responsible for all of the single-use plans is the following:

UPDATE DmSettings
SET DateTimeValue = GETUTCDATE()
WHERE SectionKey = 'BrokerTiming'
AND ValueKey = 'VdiLastTimeMonitored'
AND (CONVERT(VARCHAR(36), DateTimeValue, 109) = 'May 26 2016  6:06:57:750PM' OR DateTimeValue IS NULL)

(So vWorkspace, if you’re listening, please parameterize the date in future versions)

On a more active instance, vWorkspace has few single-use plans in cache, because the memory is needed and the single-use plans get flushed.

We're planning to enable 'optimize for ad hoc workloads' on our servers here, which should reduce the amount of space that vWorkspace needs in the plan cache.
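
For reference, enabling the setting is just a pair of sp_configure calls (it is an advanced option, so it has to be exposed first):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cache only a small plan stub the first time an ad hoc query runs;
-- the full plan is cached if the same query runs again.
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;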


Find message type owners

When dropping a user from a database, sometimes they own a table or a schema, and you have to transfer ownership before you can drop them. That had become routine, until I came across a new message today.

“The database principal owns a message type in the database and cannot be dropped”

To find the owners of message types, run the following T-SQL:

select dp.name, smt.*
from sys.service_message_types smt
inner join sys.database_principals dp on smt.principal_id = dp.principal_id

To change the owner to dbo, run the following:
ALTER AUTHORIZATION ON MESSAGE TYPE::Name TO dbo;

Then another error came up:

“The database principal owns a contract in the database, and cannot be dropped.”

To find the owners of contracts, run the following:

select dp.name, sc.*
from sys.service_contracts sc
inner join sys.database_principals dp on sc.principal_id = dp.principal_id

To change the owner to dbo, run the following:
ALTER AUTHORIZATION ON CONTRACT::Name TO dbo;

Then finally, the error about the service showed up:

“The database principal owns a service in the database, and cannot be dropped.”

To find the owners of services, run the following:

select dp.name, s.*
from sys.services s
inner join sys.database_principals dp on s.principal_id = dp.principal_id

To change the owner to dbo, run the following:
ALTER AUTHORIZATION ON SERVICE::Name TO dbo;

After changing the owners, I was able to drop the user from the database. I recommend making these changes in non-production first, to see how the application behaves, before applying them to production.
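
If you would rather check everything up front instead of hitting the errors one at a time, a combined query like this (just a sketch, with a made-up user name) lists all three Service Broker object types that a principal still owns:

DECLARE @principal sysname = N'SomeUser';  -- the user you want to drop

SELECT 'MESSAGE TYPE' AS object_type, smt.name
FROM sys.service_message_types smt
INNER JOIN sys.database_principals dp ON smt.principal_id = dp.principal_id
WHERE dp.name = @principal
UNION ALL
SELECT 'CONTRACT', sc.name
FROM sys.service_contracts sc
INNER JOIN sys.database_principals dp ON sc.principal_id = dp.principal_id
WHERE dp.name = @principal
UNION ALL
SELECT 'SERVICE', s.name
FROM sys.services s
INNER JOIN sys.database_principals dp ON s.principal_id = dp.principal_id
WHERE dp.name = @principal;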


Dynamic Filtered Where-Object in Powershell

Hey all,

I had a new challenge: dynamically filter a result set, where the column to filter on is passed in through a parameter. Here is what I came up with. It also shows how we've been executing queries lately; we ditched Invoke-SqlCmd and rely on SMO instead.

FUNCTION GetInventoryServers ([string]$FilterColumn, [string]$FilterValue)
{
    # Load SMO assemblies
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.ConnectionInfo") | Out-Null

    $ManagementNodeServer = "ManagementServer"
    $ManagementDatabaseName = "MgmtDB"

    $MgmtServerSMO = New-Object ('Microsoft.SqlServer.Management.SMO.Server') $ManagementNodeServer
    $MgmtDatabaseSMO = $MgmtServerSMO.Databases[$ManagementDatabaseName]
    $GetActiveServersQuery = "SELECT Server, IPAddress, Client FROM Inventory"

    # ExecuteWithResults returns a DataSet; the first table holds the rows
    $AllInventoryServers = ($MgmtDatabaseSMO.ExecuteWithResults($GetActiveServersQuery)).Tables[0]

    if ($FilterColumn -ne "" -and $FilterValue -ne "")
    {
        # Build the condition as text, e.g. $_.Client -like '*Contoso*',
        # then compile it into a script block for Where-Object
        $conditions = "`$_." + $FilterColumn + " -like '*" + $FilterValue + "*'"
        $filter = [ScriptBlock]::Create($conditions)
        $FilteredServers = $AllInventoryServers | Where-Object $filter
        return $FilteredServers
    }
    else
    {
        return $AllInventoryServers
    }
}

# Example call (made-up filter values):
# GetInventoryServers -FilterColumn 'Client' -FilterValue 'Contoso'


Commvault automated restores

We recently switched to Commvault for our SQL Server backup solution, and it slowed down a lot of our restores because of how long it takes to click through the GUI. It was also error-prone and dangerous, since it defaults to overwriting production, and we have to change the file paths, change some names, and click different options every time. So I built a new GUI with drop-downs to choose the source and target locations and the database names; you just hit Submit and it runs. Then I built another utility that shows the results of the restore.

The most challenging part was the Commvault command line: building a PowerShell process to call it and to receive output from it. To learn how to do this, I used the Save As Script functionality of the restore, which produces an XML file and a .bat file. What you want to do is have your PowerShell generate the XML file and then start the .bat file.

Not all of the XML is needed; I took out the unnecessary values. I won't post an XML sample here, but I can answer any questions in the comments section. One tip, though: you may need UTF-8 encoding when you create the file.

$XMLOutput | Out-File "RestoreParameters.xml" -Encoding "UTF8";

Here is the code to start the batch process.

$pinfo = New-Object System.Diagnostics.ProcessStartInfo
$pinfo.FileName = "\\fileserver\RestoreDatabaseFromXMLv3.bat"
$pinfo.UseShellExecute = $false
$pinfo.RedirectStandardOutput = $true

$process = New-Object System.Diagnostics.Process
$process.StartInfo = $pinfo
$process.Start() | Out-Null  # ignore the return value with Out-Null

while (!$process.HasExited)
{
    $process.Refresh()
    # custom function to update our database with progress
    UpdateProgress $RestoreFileNumber $RestoreProgressResults.PercentComplete
    Start-Sleep -s 10
}

$stdout = $process.StandardOutput.ReadToEnd()
Write-Host $stdout

Notes – these lines help redirect the output, so that you can capture it in your automation.

  1. $pinfo.UseShellExecute = $false – setting this property to false lets you redirect the input, output, and error streams. When UseShellExecute is false, you can only start executables with the Process object.
  2. $pinfo.RedirectStandardOutput = $true – when a process writes text to its standard stream, that text is normally displayed on the console. Setting RedirectStandardOutput to true redirects the StandardOutput stream so you can capture, manipulate, or suppress the output of the process.
  3. $stdout = $process.StandardOutput.ReadToEnd() – you can then update your database with the value from $stdout.

Let me know if there is any interest in this topic, and I can flesh out the details more.


Purge orphan MDW query plans

The solution that Microsoft proposes for removing orphaned query plans from the Data Collector (MDW) database was running too slowly for us.

After looking at the query plan, I created an index to help the delete statements.

CREATE NONCLUSTERED INDEX [ix_sql_handle]
ON [snapshots].[query_stats] ([sql_handle],[statement_start_offset],[statement_end_offset],[plan_generation_num],[plan_handle],[creation_time])

With this index in place, the purge runs very quickly.

 
