Shrinking a large database with limited outage windows


April 22, 2019 by Kenneth Fisher

Yes, I realize you shouldn’t shrink your database (data files or log files), but I recently had a case where it was completely appropriate to do a shrink. We had a table that was almost a terabyte on its own. It was decided to put in an archive process that got rid of 3/4 of the table. This left us with ~750GB of empty space. The total DB size was ~1.25TB so this was going to more than cut it in half.

We needed that space for other things so it was time to shrink down the database. Now, you can work in the database while it’s being shrunk, but it’s not something I would do on a busy production system. Shrink is also pretty slow. In this case I expected it could take quite a bit of time: maybe not days, but 10-20 hours. Unfortunately, I didn’t have that much time in one stretch. This particular system has an outage window/slow time of only a few hours a day. So what could I do?

Well, there are a few thoughts/tips.

  • First of all, I like to target my shrinks by using DBCC SHRINKFILE instead of DBCC SHRINKDATABASE. In this particular case, I only had one file (of 10) that needed to be shrunk. There’s no point in hitting the entire database when a single file will do (see the T-SQL sketch after this list).
  • Both SHRINKDATABASE and SHRINKFILE have a TRUNCATEONLY option. It’s pretty quick (usually a few seconds to a minute) and doesn’t actually move any pages around. All it does is go out to the very last page in use and release all of the empty space after that. So, for example, in my case using TRUNCATEONLY almost immediately got rid of ~200GB. It didn’t fix things, but it provided some much-needed space on the drive.
  • You can move tables from one filegroup to another. You could potentially move each table to a different filegroup and then shrink the file(s) in the original filegroup with TRUNCATEONLY, or even empty them out entirely and get rid of the filegroup (the sketch after this list shows one way).
  • This is a very important part of the BOL entry for DBCC SHRINKFILE:

    You can stop DBCC SHRINKFILE operations at any point and any completed work is preserved.

    You can run the shrink for an hour or two a day and eventually it will finish! This is what I ended up doing. It took a bit over two weeks of running the shrink for 1-3 hours a day, but it got there.
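
To make the above concrete, here’s a rough T-SQL sketch of the approach. The file, database, table, index, and filegroup names are made-up placeholders (not the actual system’s), and the target size is just for illustration:

-- Step 1: reclaim the free space after the last used page. Fast, no page movement.
DBCC SHRINKFILE (N'MyBigDataFile', TRUNCATEONLY);

-- Step 2: shrink toward a target size (the number is in MB) during the
-- outage window. You can stop this at any point; completed work is preserved.
DBCC SHRINKFILE (N'MyBigDataFile', 512000); -- aim for ~500GB

-- Filegroup variant: rebuild a table's clustered index onto another
-- filegroup (assuming a clustered index named PK_BigTable on column Id),
-- then empty and drop the original file.
CREATE UNIQUE CLUSTERED INDEX PK_BigTable
    ON dbo.BigTable (Id)
    WITH (DROP_EXISTING = ON)
    ON [FG_Archive];

DBCC SHRINKFILE (N'MyBigDataFile', EMPTYFILE);
ALTER DATABASE MyDb REMOVE FILE MyBigDataFile;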

13 thoughts on “Shrinking a large database with limited outage windows”

  1. Zhunya says:

    Or you can just run DBCC SHRINKFILE with a [ , target_size ] that is 500MB or 1GB less than the actual size, and repeat the step…

  2. Steve Hall says:

    I wasn’t aware of being able to run DBCC SHRINKFILE in ‘lumps’ like that – thank you. I’ve been pondering how to recover an obscene amount of free space from our 7TB main DB without downtime, and this is helpful.

    • Yea, that’s why I wrote it. I was looking for some advice and couldn’t find anything among all of the “why you shouldn’t shrink” posts (including my own). Not that I’m an expert, but I put down what I know 🙂

      • Ray Herring says:

        I have a script that I think of as a “nibbler”. I can set the Target Size in MB, the shrink increment in MB, a delay in seconds, and a stop time.
        I usually run the script overnight with a shrink increment between 100MB and 500MB and a delay of 2 or 3 seconds.
        The users never notice any impact as long as the shrink increment is around 100MB.

        We won’t go into the reasons I need the script 😦 or why I am still on SQL 2008R2 ;(

        • michaelrichardedwardsgmailcom says:

          Hi there Ray. Any chance you could share your ‘Nibbler’ script with me? Very many thanks. Mike (michael.richard.edwards@gmail.com)

        • michaelrichardedwardsgmailcom says:

          If anyone can send me an example of a shrink ‘nibbler’ script I’d be eternally grateful. Is it as simple as something like this:

          DBCC SHRINKFILE (N'DataFileLogicalName', 1303552)
          WAITFOR DELAY '00:00:10'
          DBCC SHRINKFILE (N'DataFileLogicalName', 1302552)
          WAITFOR DELAY '00:00:10'
          DBCC SHRINKFILE (N'DataFileLogicalName', 1301552)
          WAITFOR DELAY '00:00:10'
          DBCC SHRINKFILE (N'DataFileLogicalName', 1300552)
          WAITFOR DELAY '00:00:10'
          DBCC SHRINKFILE (N'DataFileLogicalName', 1299552)

          (note target size decreasing by 1GB each iteration)

          Thank you!

        • A Khan says:

          Hi Ray,

          Can you share the script with me?
          amir.khattak133@gmail.com

  3. SAinCA55 says:

    Watch out, if using BULK_LOGGED or FULL recovery mode and running TRAN backups – the two argue, and DBCC usually loses. We have a 5GB nibbler too, with GB limits per day and run-time limits, in a SQL Agent job, until we can get a similar multi-TB recovery completed. The script disables the TRAN backup job at the start, then re-enables it at the end of the run. You’ll possibly see about 100GB or so for a 500GB space recovery, but your mileage may vary depending on whether empty or full pages are being moved. And, as many point out, you will fragment physically, so at least run a REORG for some semblance of organization. That temptation to REBUILD WITH (ONLINE=ON) for large tables should be a tad resisted. At least, that’s my mileage…
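
Pulling the thread together, here is a minimal sketch of a “nibbler” along the lines Ray and SAinCA55 describe. This is not anyone’s actual script: the logical file name, Agent job name, table name, sizes, and timings are all placeholders for illustration.

-- Pause the transaction-log backup job so it doesn't fight the shrink
-- (relevant under FULL or BULK_LOGGED recovery, per SAinCA55's warning).
EXEC msdb.dbo.sp_update_job @job_name = N'LogBackup - MyDb', @enabled = 0;

DECLARE @target_mb    int = 1200000,  -- stop once the file is down to this size (MB)
        @increment_mb int = 500,      -- how much to shrink per step
        @stop_at      datetime = DATEADD(HOUR, 2, GETDATE()),  -- quit when the window closes
        @current_mb   int,
        @sql          nvarchar(400);

-- Current file size; sys.database_files reports size in 8KB pages.
SELECT @current_mb = size / 128
FROM sys.database_files
WHERE name = N'DataFileLogicalName';

WHILE @current_mb > @target_mb AND GETDATE() < @stop_at
BEGIN
    SET @current_mb = @current_mb - @increment_mb;
    IF @current_mb < @target_mb SET @current_mb = @target_mb;

    -- DBCC SHRINKFILE won't take variables, so build the command dynamically.
    SET @sql = N'DBCC SHRINKFILE (N''DataFileLogicalName'', '
             + CAST(@current_mb AS nvarchar(10)) + N');';
    EXEC sys.sp_executesql @sql;

    WAITFOR DELAY '00:00:03';  -- pause between steps so users don't feel it
END;

-- Turn log backups back on.
EXEC msdb.dbo.sp_update_job @job_name = N'LogBackup - MyDb', @enabled = 1;

-- Shrinking fragments indexes, so REORGANIZE afterwards; a full REBUILD
-- needs workspace and can grow the file right back.
ALTER INDEX ALL ON dbo.BigTable REORGANIZE;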
