About This Blog

Including my content from SQLBlog.com and some from SQLPerformance.com

Friday 17 November 2023

Setting a Fixed Size for Transaction Log Virtual Log Files (VLFs)

The documentation has this to say about virtual log file (VLF) sizes:

The SQL Server Database Engine divides each physical log file internally into several virtual log files (VLFs). Virtual log files have no fixed size, and there’s no fixed number of virtual log files for a physical log file. The Database Engine chooses the size of the virtual log files dynamically while it’s creating or extending log files. The Database Engine tries to maintain a few virtual files. The size of the virtual files after a log file has been extended is the sum of the size of the existing log and the size of the new file increment. The size or number of virtual log files can’t be configured or set by administrators.

It then goes on to describe the problems having too many VLFs can cause, and how the database owner can arrange things so a reasonable number of VLFs are created. There’s even a (mostly accurate) formula for the number and size of VLFs SQL Server will create when asked to extend a transaction log file.
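
For reference, here’s a minimal sketch of that growth formula in Python, using the commonly published thresholds rather than anything taken from the engine itself, so treat its output as an approximation:

```python
def vlfs_for_growth(current_log_bytes: int, growth_bytes: int) -> tuple[int, int]:
    """Estimate how many VLFs one log growth creates and how big each is.

    Thresholds are the commonly published ones for SQL Server 2014 and
    later; the engine rounds sizes internally, so treat this as an estimate.
    """
    MB = 1024 * 1024
    GB = 1024 * MB

    # Since SQL Server 2014: a growth smaller than 1/8 of the current
    # physical log size produces a single VLF covering the increment.
    if growth_bytes < current_log_bytes / 8:
        return 1, growth_bytes

    # Otherwise the older thresholds apply.
    if growth_bytes <= 64 * MB:
        count = 4
    elif growth_bytes <= GB:
        count = 8
    else:
        count = 16
    return count, growth_bytes // count


# Example: growing a 10 GB log by 8 GB adds 16 VLFs of 512 MB each.
print(vlfs_for_growth(10 * 1024**3, 8 * 1024**3))
```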

This is all very familiar, of course, but it is also dumb. Why on earth should we have to worry about internal formulas? It seems ridiculous to have to provision or grow a transaction log in pieces just to get a reasonable VLF outcome.

Wouldn’t it be better to be able to specify a fixed size for VLFs instead?

Starting with SQL Server 2022, there is now a way, though it is undocumented and unsupported, at least for the time being.

You shouldn’t use it in a production database, and there’s a real risk of it damaging your database beyond repair. Aside from those warnings, there’s no reason not to play around with it in a development environment. Or, if you’re simply curious to know more, read on.

Monday 13 November 2023

Why Batch Mode Sort Spills Are So Slow

Batch mode sorting was added to SQL Server in the 2016 release under compatibility level 130. Most of the time, a batch mode sort will be much faster than the row mode equivalent.

This post is about an important exception to this rule, as recently reported by Erik Darling (video).

No doubt you’ll visit both links before reading on, but to summarize, the issue is that batch mode sorts are very slow when they spill—much slower than an equivalent row mode sort.

This also seems like a good opportunity to write down some sorting details I haven’t really covered before. If you’re not interested in those details and the background to the current issue, you can skip down to the section titled “Erik’s Demo”.

Friday 20 October 2023

Fast Key Optimization for Row Mode Sorts

SQL Server row-mode sorts generally use a custom implementation of the well-known merge sort algorithm to order data.

As a comparison-based algorithm, this performs a large number of value comparisons during sorting—usually many more than the number of items to sort.

Although each comparison is typically not expensive, even a moderately sized sort can involve a very large number of comparisons.
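
As a back-of-the-envelope illustration (generic n × log2(n) estimates, not engine-specific figures):

```python
import math

# Rough lower bound on comparisons for a comparison-based sort: n * log2(n).
# Even a modest one-million-row sort implies on the order of twenty million
# value comparisons, which is why per-comparison cost matters.
for n in (1_000, 100_000, 1_000_000):
    print(f"{n:>9,} rows -> ~{n * math.log2(n):,.0f} comparisons")
```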

SQL Server can be called upon to sort a variety of data types. To facilitate this, the sorting code normally calls out to a specific comparator to determine how two compared values should sort: lower, higher, or equal.

Although a single comparator call has low overhead, making enough of them can cause noticeable performance differences.

To address this, SQL Server has always (since at least version 7) supported a fast key optimization for simple data types. This optimization performs the comparison using highly optimized inline code rather than calling out to a separate routine.
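
The effect is easy to reproduce in miniature outside SQL Server. The sketch below is only an analogy: Python’s built-in comparison stands in for the inlined fast-key path, and a user-supplied comparator function stands in for the generic callout. It is not how the engine’s sort is implemented.

```python
import functools
import random
import timeit

data = [random.randint(0, 1_000_000) for _ in range(100_000)]

def comparator(a: int, b: int) -> int:
    # A general-purpose "call out" comparator: correct, but every
    # comparison pays the cost of a separate function call.
    return (a > b) - (a < b)

callout = timeit.timeit(
    lambda: sorted(data, key=functools.cmp_to_key(comparator)), number=3)
inline = timeit.timeit(lambda: sorted(data), number=3)

print(f"comparator callout: {callout:.2f}s")
print(f"native comparison:  {inline:.2f}s")
```

On most machines the comparator version is several times slower, purely because of the per-comparison call overhead.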

Wednesday 2 August 2023

Importing a File in Batches

There are a million ways to import data into SQL Server. Most of the time, we want to ingest the new data as quickly and efficiently as possible, but that’s not always the case.

Sometimes, we need to accept data at a rate that will not dominate resource usage on the target system or cause excessive transaction log growth. In other cases, each row from the data source needs specific server-side processing to validate and persist the data across multiple relational tables, perhaps involving foreign keys and identity columns.

All this can be achieved with client-side tools and programming. It can also be done server-side by importing the raw data into a staging table before processing using T-SQL procedures.
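
As a rough sketch of the client-side batching variant mentioned above (not the approach this article develops), the following reads a file in fixed-size batches; process_batch is a hypothetical stand-in for whatever sends each batch to the server and commits it:

```python
from itertools import islice
from typing import Iterable, Iterator

def batches(rows: Iterable[str], batch_size: int) -> Iterator[list[str]]:
    """Yield successive fixed-size batches from an iterable of rows."""
    it = iter(rows)
    while batch := list(islice(it, batch_size)):
        yield batch

def load_file_in_batches(path: str, batch_size: int = 5_000) -> None:
    # Each batch would be sent to the server (for example into a staging
    # table) and committed separately, keeping per-transaction log growth
    # and resource usage bounded.
    with open(path, encoding="utf-8") as f:
        for batch in batches(f, batch_size):
            process_batch(batch)  # hypothetical call that ships one batch

def process_batch(batch: list[str]) -> None:
    # Placeholder: insert the rows and run any per-row validation here.
    print(f"processed {len(batch)} rows")
```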

Other times, the need arises to ingest data without using client-side tools and without making a complete copy of the raw data on the server. This article describes one possible approach in that situation.