About This Blog

Including my content from SQLBlog.com and some from SQLPerformance.com
Showing posts with label Uniqueness.

Sunday 31 May 2020

Pulling Group By Above a Join

One of the transformations available to the SQL Server query optimizer is pulling a logical Group By (and any associated aggregates) above a Join.

Visually, this means transforming a tree of logical operations from:

[Diagram: Group By below Join]

…to this:

[Diagram: Group By above Join]

The above diagrams are logical representations. They need to be implemented as physical operators to appear in an execution plan. The options are:

  • Group By
    • Hash Match Aggregate
    • Stream Aggregate
    • Distinct Sort
  • Join
    • Nested Loops Join
    • Nested Loops Apply
    • Hash Match Join
    • Merge Join

When the optimizer moves a Group By above a Join, it has to preserve the semantics. The new sequence of operations must be guaranteed to return the same results as the original in all possible circumstances.

One cannot just pick up a Group By and arbitrarily move it around the query tree without risking incorrect results.
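
A rough illustration (the schema below is invented for this sketch, not taken from the original post): in the query as written, the Group By sits below the join, inside the derived table. The optimizer may consider pulling it above the join, aggregating the joined rows instead, when the rewrite provably cannot change the results, for example because CustomerID is a key of dbo.Customers.

    -- Hypothetical schema: dbo.Customers keyed on CustomerID,
    -- dbo.Orders with many rows per customer.
    -- As written, the aggregate is computed below the join,
    -- inside the derived table T.
    SELECT
        C.CustomerName,
        T.TotalDue
    FROM dbo.Customers AS C
    JOIN
    (
        SELECT
            O.CustomerID,
            TotalDue = SUM(O.OrderTotal)
        FROM dbo.Orders AS O
        GROUP BY
            O.CustomerID
    ) AS T
        ON T.CustomerID = C.CustomerID;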

Thursday 4 April 2013

Optimizer Limitations with Filtered Indexes

One of the filtered index use cases mentioned in the product documentation concerns a column that contains mostly NULL values. The idea is to create a filtered index that excludes the NULLs, resulting in a smaller nonclustered index that requires less maintenance than the equivalent unfiltered index.
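
For instance (table and column names here are invented for illustration), an index filtered to exclude NULLs stores entries only for the minority of rows that have a value:

    -- Hypothetical example: most rows have a NULL MiddleName,
    -- so the filtered index contains only the non-NULL entries.
    CREATE NONCLUSTERED INDEX IX_People_MiddleName
    ON dbo.People (MiddleName)
    WHERE MiddleName IS NOT NULL;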

Another popular use of filtered indexes is to filter NULLs from a UNIQUE index, giving the behaviour users of other database engines might expect from a default UNIQUE index or constraint: uniqueness enforced only for non-NULL values.
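
As a sketch (again with invented names), a filtered unique index rejects duplicate non-NULL values while allowing any number of NULLs:

    -- Hypothetical example: uniqueness applies only where
    -- TaxReference IS NOT NULL; multiple NULL rows are permitted.
    CREATE UNIQUE NONCLUSTERED INDEX UQ_Suppliers_TaxReference
    ON dbo.Suppliers (TaxReference)
    WHERE TaxReference IS NOT NULL;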

Unfortunately, the query optimizer has limitations where filtered indexes are concerned. This post looks at a couple of less well-known examples.

Friday 1 February 2013

A creative use of IGNORE_DUP_KEY

Let’s say you have a big table with a clustered primary key, and an application that inserts batches of rows into it. The nature of the business is that the batch will inevitably sometimes contain rows that already exist in the table.

The default SQL Server INSERT behaviour for such a batch is to throw error 2627 (primary key violation), terminate the statement, roll back all the inserts (not just the rows that conflicted), and keep any active transaction open.
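
A minimal sketch of that behaviour (the table and values are invented for illustration, not taken from the original post):

    -- Illustrative stand-in for the big table with a clustered primary key.
    CREATE TABLE dbo.BigTable
    (
        pk integer NOT NULL,
        CONSTRAINT PK_BigTable PRIMARY KEY CLUSTERED (pk)
    );

    INSERT dbo.BigTable (pk) VALUES (1), (2), (3);

    -- The batch below contains an existing key (2).
    -- By default, this raises error 2627, the whole statement rolls back,
    -- and none of the new rows are inserted (4 is lost along with 2).
    INSERT dbo.BigTable (pk) VALUES (2), (4);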

Thursday 4 August 2011

Avoiding Uniqueness for Performance

In my last post, Enforcing Uniqueness for Performance, I showed how using a unique index could speed up equality seeks by around 40%.

Friday 29 July 2011

Enforcing Uniqueness for Performance

A little while back, I posted a short series on seeks and scans.

One of the things I highlighted in the middle post was the difference between a singleton seek and a range scan:

  • A singleton equality seek retrieves at most one row, and is guaranteed to do so because a unique index exists to enforce it.

  • A range scan seeks down the B-tree to a starting (or ending) point, and scans forward (or backward) from that point using the next or previous page pointers.

Today’s short post shows how much faster a singleton seek is, compared with a range scan, even when both return exactly the same number of records.
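
As a sketch of the comparison (the schema is invented here, not the test tables from the original post), the same equality query behaves differently depending on whether the index is declared unique:

    -- Illustrative only: with a unique index on TransactionID, the
    -- equality predicate below is a singleton seek, so the storage
    -- engine can stop after locating at most one row.
    CREATE UNIQUE NONCLUSTERED INDEX UQ_Trans_TransactionID
    ON dbo.Transactions (TransactionID);

    SELECT T.TransactionID
    FROM dbo.Transactions AS T
    WHERE T.TransactionID = 123456;

    -- Without the UNIQUE keyword, the same query is a range scan:
    -- seek to the first match, then keep scanning until the key value
    -- changes, even though exactly one row is ultimately returned.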