SQL Server 2008 - Shrinking the Transaction Log - Any way to automate?

I went in and checked my Transaction log the other day and it was something crazy like 15GB. I ran the following code:

USE mydb
GO
BACKUP LOG mydb WITH TRUNCATE_ONLY
GO
DBCC SHRINKFILE(mydb_log,8)
GO

Which worked fine and shrank it down to 8 MB...but the DB in question is a log shipping publisher, and the log is already back up to some 500 MB and growing quickly.

Is there any way to automate this log shrinking, outside of creating a custom "Execute T-SQL Statement Task" Maintenance Plan Task, and hooking it on to my log backup task? If that's the best way then fine...but I was just thinking that SQL Server would have a better way of dealing with this. I thought it was supposed to shrink automatically whenever you took a log backup, but that's not happening (perhaps because of my log shipping, I don't know).

Here's my current backup plan:

  • Full backups every night
  • Transaction log backups once a day, late morning (maybe hook the log shrinking onto this...it doesn't need to be shrunk every day though)

Or maybe I just run it once a week, after I run a full backup task? What do you all think?

Answers


If your log file grows by 500 MB every night, there is only one correct action: pre-grow the file to 500 MB and leave it there. Shrinking the log file is damaging. Letting the log file auto-grow is also damaging:

  • you hit zero-fill initialization during normal operations whenever the file grows, reducing performance
  • your log grows in small increments, creating many virtual log files and degrading operational performance
  • your log gets fragmented during shrinking; while not as bad as data file fragmentation, log file fragmentation still impacts performance
  • one day the daily 500 MB growth will run out of disk space, and you'll wish the file had been pre-grown
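The pre-grow advice above amounts to a one-time `ALTER DATABASE`. A minimal sketch, reusing the `mydb`/`mydb_log` names from the question; the sizes are illustrative and should come from your own observed daily log usage:

```sql
-- Pre-grow the log once to a size that covers normal daily activity,
-- with a fixed (not percentage-based) growth increment as a safety net.
USE master;
GO
ALTER DATABASE mydb
MODIFY FILE
(
    NAME = mydb_log,      -- logical file name; check sys.database_files
    SIZE = 1024MB,        -- pre-grown size chosen from observed usage
    FILEGROWTH = 256MB    -- fixed increment avoids many tiny virtual log files
);
GO
```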

You don't have to take my word for it; you can read what some of the MVP blogs have to say about the practice of regularly shrinking log and data files:

There are more, I just got tired of linking them.

Every time you shrink a log file, a fairy loses her wings.


I'd think more frequent transaction log backups.
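More frequent log backups keep the log from growing in the first place, because each backup frees the inactive portion of the log for reuse. Note that with log shipping configured, as in the question, you would increase the frequency of the log shipping backup job itself rather than add a separate `BACKUP LOG` (a separate backup would break the log shipping chain). For a database without log shipping, a sketch of such a backup (path and file name are illustrative):

```sql
-- Back up the transaction log; schedule this e.g. every 15-30 minutes
-- via a SQL Server Agent job so log space is reused instead of growing.
BACKUP LOG mydb
TO DISK = N'D:\Backups\mydb_log.trn'  -- illustrative path
WITH INIT;
GO
```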


I think what you suggest in your question is the right approach: hook the log shrinking onto your nightly backup/maintenance task. The main thing is that you are taking transaction log backups regularly, which is what allows the shrink to reclaim space when you run it. Keep in mind that this is a two-step process: 1) back up your transaction log, which "truncates" it by marking the inactive portion for reuse; 2) run a shrink against the log file. Truncation doesn't mean the file gets smaller on disk; shrinking it is a separate step you must perform.
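The two steps above can be sketched as follows, reusing the names from the question. (Note that the `WITH TRUNCATE_ONLY` option from the original snippet breaks the log backup chain and was discontinued in SQL Server 2008; a real log backup to disk is the safe form, and the path here is illustrative.)

```sql
-- Step 1: back up the log, which truncates the inactive log records.
BACKUP LOG mydb
TO DISK = N'D:\Backups\mydb_log.trn';  -- illustrative path
GO
-- Step 2: shrink the physical file down to a target size in MB.
USE mydb;
GO
DBCC SHRINKFILE (mydb_log, 8);
GO
```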


For SQL Server 2005:

DBCC SHRINKFILE ( Database_log_file_name , NOTRUNCATE)

This statement doesn't break log shipping, but you may need to run it more than once: after each run, let the log shipping backup, copy, and restore jobs complete, then run the statement again.

Shrink and truncate are different.

My experiences:

AA db, 6.8 GB transaction log:

  • first run: 6.8 GB
  • after log shipping backup, copy, restore; second run: 1.9 GB
  • after log shipping backup, copy, restore; third run: 1.7 GB
  • after log shipping backup, copy, restore; fourth run: 1 GB

BB db, 50 GB transaction log:

  • first run: 39 GB
  • after log shipping backup, copy, restore; second run: 1 GB


Creating a transaction log backup doesn't mean the online transaction log file will be reduced in size; the file size remains the same. When the log is backed up, the backed-up portion of the online transaction log is merely marked for overwriting. It's not removed and no space is freed, so the file size stays the same.

Once you set the LDF file size, maintain its size by setting the right transaction log backup frequency.
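To pick a sensible fixed size and backup frequency, you can watch how full the log actually gets over time; `DBCC SQLPERF(LOGSPACE)` is a built-in way to do that:

```sql
-- Report the size and percentage used of every database's transaction log;
-- sample this over a few days to choose a fixed LDF size and backup interval.
DBCC SQLPERF(LOGSPACE);
GO
```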

Paul Randal provides details here:

Understanding Logging and Recovery in SQL Server

Understanding SQL Server Backups


Per Microsoft's recommendations, before you shrink the log file you should first try the following alternatives:

  • Freeing disk space so that the log can automatically grow.
  • Moving the log file to a disk drive with sufficient space.
  • Increasing the size of a log file.
  • Adding a log file on a different disk.
  • Turning on autogrowth by using the ALTER DATABASE statement to set a non-zero growth increment for the FILEGROWTH option, for example:

    ALTER DATABASE SharePoint_Config MODIFY FILE ( NAME = SharePoint_Config_log, SIZE = 2MB, MAXSIZE = 200MB, FILEGROWTH = 10MB );

Also, be aware that a shrink operation run via a maintenance plan affects both the *.mdf and *.ldf files. To shrink only the *.ldf file to your target size, create a maintenance plan with a SQL job task that runs the following command:

use sharepoint_config
go
alter database sharepoint_config set recovery simple
go
dbcc shrinkfile('SharePoint_Config_log',100)
go
alter database sharepoint_config set recovery full
go

Note: 100 is the target_size for the file, in megabytes, expressed as an integer. If not specified, DBCC SHRINKFILE reduces the file to its default size, which is the size specified when the file was created.

In my humble opinion, it's not recommended to perform the shrink operation periodically; only do it in the specific circumstances where you need to reduce the physical size. Also keep in mind that switching to the SIMPLE recovery model breaks the log backup chain, so take a full or log backup after switching back to FULL.

You can also check this useful guide on shrinking a transaction log file with a maintenance plan in SQL Server.

