Why would you want to log ship the same database to the same instance? To reduce the amount of time that the disaster recovery server has old data after a test.
Need for log shipping and DR tests
For one of my customers, databases are log shipped for high availability purposes. If a disaster were to hit the primary site, the application would be pointed to the alternate location.
Several times per year, there are tests to see if this will work. You wouldn’t want a disaster to strike with an untested solution and confusion over what needs to happen. Besides, these tests may be required by law.
In order to test the databases, they must be recovered so that they are writable for the application. To get log shipping going again afterward, a recent full backup of the production databases must be restored to the DR server.
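Recovering a log-shipped secondary for a test is a one-line operation. A minimal sketch, assuming a database named DBName (a placeholder) currently in the restoring state:

```sql
-- Bring the log-shipped secondary online so the application can write to it.
-- Once recovered, no further log backups can be applied to this copy;
-- it must be reinitialized from a full backup to resume log shipping.
RESTORE DATABASE DBName WITH RECOVERY;
```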
Takes too long to get things back to normal after a DR test
The full restore is where the problem lies. The database backups are large, the DR server is 2,000 miles from the production server, and a lot of data is already going through the pipe. Copying anything takes a long time; retrieving a full backup could take several days.
After restoring the full backup, any logs that were backed up since that full backup need to be applied. The longer it takes to retrieve the full backup, the more logs there are. The more logs there are, the longer it would take to get the DR databases up to date. For this amount of time, the DR site is not truly available.
Duplicate databases rather than new instance
To keep data fresh all the time, I came up with the idea of duplicating the databases. One set could continue to apply new logs, while the other set is being tested and reinitialized.
At first I thought about creating a new instance, but then realized this came with a lot of overhead. The downside to a second instance is that I’d have to coordinate memory usage between the two instances. Any time I did any administration, I’d need to do it twice. Plus, the second instance would use a different port number. And there would be unknowns as to whether DR would work seamlessly on the other instance.
As an alternative, we just continually log shipped to databases with a “2” appended to the end of their names. In the rare case that they were needed, a script could be run to remove the “2”. I say it is a rare case, because usually the normally named databases would already have the latest data.
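A sketch of such a rename script, using DBName/DBName2 as placeholder names and assuming the normally named database has already been dropped or renamed aside:

```sql
-- Bring the duplicate copy out of the restoring state so it can be renamed.
RESTORE DATABASE DBName2 WITH RECOVERY;

-- Promote the copy by removing the "2" from its name.
ALTER DATABASE DBName2 MODIFY NAME = DBName;
```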
For this company, it is worth it to spend on disk space rather than spend a lot of time without current data at the DR site.
At first I was concerned that the logs would be shipped twice. But SQL Server uses the same jobs for both databases. The logs are only copied once from production to the DR server. The same restore jobs are used. This is a downside, because the job fails when one of the sets has been recovered. However, by looking at msdb.dbo.restorehistory, I can see that, in my test, the second set continues to apply logs.
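To see which restores have been applied to each copy, you can query the restore history on the DR server. A sketch, with DBName/DBName2 as placeholder names:

```sql
-- Most recent restores per destination database on this server.
-- restore_type: 'D' = full database, 'I' = differential, 'L' = log.
SELECT destination_database_name,
       restore_date,
       restore_type
FROM msdb.dbo.restorehistory
WHERE destination_database_name IN (N'DBName', N'DBName2')
ORDER BY restore_date DESC;
```

If the second copy is still healthy, you should see recent 'L' rows for it even while the first copy sits recovered for testing.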
Assuming that the first log shipping destination is already set up:
- Copy a full backup from production and restore it to the DR server with NORECOVERY (or STANDBY). Apply differentials if needed.
- Open Management Studio. Right-click the database to be shipped and select Properties. On the left, choose Transaction Log Shipping.
- Under Secondary server instances and databases, select “Add…”
- Click Connect and select the instance for the destination
- Rename the secondary database: add a 2 to the end of it (e.g., DBName2) or use your own naming convention.
- Select “No, the secondary database is initialized”
- On the Copy Files tab, enter the destination folder for the files. Check “Disable this job.” SQL Server will ignore the disabled job and keep using the copy job that was already created for the first log shipping destination.
- On the Restore Transaction Log tab, leave the default options. Check “Disable this job.” Again, the restore job that already exists for the first destination will be used.
- An interesting warning pops up: “The new values for the copy destination folder and/or copied file delete time will also affect other secondary instances.” From my experience, all this means is that the two destination databases will share the same jobs, and the settings from the first destination you set up will be kept. Click OK.
- Click OK and, assuming it was successful, click Close.
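The initial restore in the first step can be sketched in T-SQL. The file names and paths below are examples only; the MOVE clauses keep the second copy’s files from colliding with the first copy’s:

```sql
-- Restore the full backup as the "2" copy, leaving it ready for shipped logs.
-- WITH NORECOVERY keeps the database in the restoring state;
-- MOVE relocates the data and log files to paths unique to this copy.
RESTORE DATABASE DBName2
FROM DISK = N'D:\Backups\DBName_full.bak'
WITH NORECOVERY,
     MOVE N'DBName_Data' TO N'D:\Data\DBName2.mdf',
     MOVE N'DBName_Log'  TO N'D:\Logs\DBName2.ldf';
```

You can find the logical file names to use in the MOVE clauses with `RESTORE FILELISTONLY FROM DISK = N'D:\Backups\DBName_full.bak'`.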