About This Blog

Including my content from SQLBlog.com and some from SQLPerformance.com
Showing posts with label Internals.

Monday 12 March 2012

Fun with Scalar and Vector Aggregates

There are interesting things to be learned from even the simplest queries.

For example, imagine you are asked to write a query that lists AdventureWorks product names, where the product has at least one entry in the transaction history table, but fewer than ten.
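
One natural way to express that requirement, against the AdventureWorks Production.Product and Production.TransactionHistory tables, is a grouped inner join with a HAVING clause. A sketch only; the post goes on to examine how the optimizer and execution engine actually implement the aggregation:

    SELECT p.Name
    FROM Production.Product AS p
    JOIN Production.TransactionHistory AS th
        ON th.ProductID = p.ProductID
    GROUP BY p.Name
    HAVING COUNT_BIG(*) < 10;   -- the inner join already guarantees at least one match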

Friday 23 December 2011

Forcing a Parallel Query Execution Plan

This article is for SQL Server developers who have experienced the special kind of frustration that only comes from spending hours trying to convince the query optimizer to generate a parallel execution plan.

This situation often occurs when making an apparently innocuous change to the text of a moderately complex query — a change which somehow manages to turn a parallel plan that executes in ten seconds into a five-minute serially-executing monster.

Tuesday 9 August 2011

SQL Server, Seeks, and Binary Search

The following table summarizes the results from my last two articles, Enforcing Uniqueness for Performance and Avoiding Uniqueness for Performance. It shows the CPU time used when performing 5 million clustered index seeks into a unique or non-unique index:

Test summary

In test 1, making the clustered index unique improved performance by around 40%.

In test 2, making the same change reduced performance by around 70% (on 64-bit systems – more on that later).
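
For reference, the only structural difference between the two index definitions being compared is the UNIQUE keyword. A minimal sketch using an illustrative table name, not the test harness from the original articles:

    -- Non-unique clustered index: duplicate keys are disambiguated by a hidden uniquifier
    CREATE CLUSTERED INDEX cx_test ON dbo.TestTable (key_col);

    -- Unique clustered index: the key alone identifies at most one row
    CREATE UNIQUE CLUSTERED INDEX cx_test ON dbo.TestTable (key_col)
        WITH (DROP_EXISTING = ON);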

Tuesday 19 July 2011

Join Performance, Implicit Conversions, and Residuals

Introduction

You probably already know that it’s important to be aware of data types when writing queries, and that implicit conversions between types can lead to poor query performance.
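
A common example is a predicate comparing a varchar column with an nvarchar value. Data type precedence means the column side is converted, which can hurt index usage. A sketch with illustrative table and column names:

    DECLARE @acct nvarchar(20) = N'AW00000001';

    -- AccountNumber is varchar(20) in this sketch: the column is implicitly converted
    -- to nvarchar, which can prevent (or complicate) an efficient index seek
    SELECT c.CustomerID
    FROM dbo.Customers AS c
    WHERE c.AccountNumber = @acct;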

Some people have gone so far as to write scripts to search the plan cache for CONVERT_IMPLICIT elements, and others routinely inspect plans for that type of thing when tuning.

Now, that’s all good, as far as it goes. It may surprise you to learn that not all implicit conversions are visible in query plans, and there are other important factors to consider too.

Saturday 5 March 2011

How Parallelism Works in SQL Server

You might have noticed that January was a quiet blogging month for me.

Part of the reason was that I was working on an article for Simple Talk, looking at how parallel query execution really works. The first part is published today at:

Understanding and Using Parallelism in SQL Server.

This introductory piece is not quite as technical as normal, but I hope there is enough interesting material there to make it worth a read.

Wednesday 23 February 2011

Advanced T-SQL Tuning: Why Internals Knowledge Matters

There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes. Query tuning is not complete as soon as the query returns results quickly in the development or test environments.

In production, your query will compete for memory, CPU, locks, I/O, and other resources on the server. Today’s post looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better T-SQL.

Saturday 19 February 2011

I see no LOBs!

Is it possible to see LOB (large object) logical reads from STATISTICS IO output on a table with no LOB columns?

I was asked this question today by someone who had spent a good fraction of their afternoon trying to work out why this was occurring — even going so far as to re-run DBCC CHECKDB to see if corruption was the cause.

The table in question wasn’t particularly pretty. It had grown somewhat organically over time, with new columns being added every so often as the need arose.

Nevertheless, it remained a simple structure with no LOB columns — no text or image, no xml, no max types — nothing aside from ordinary integer, money, varchar, and datetime types.

To add to the air of mystery, not every query that ran against the table would report LOB logical reads — just sometimes — but when it did, the query often took much longer to execute.
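
As a pointer to where such reads can come from: STATISTICS IO counts row-overflow page reads under 'lob logical reads', and row-overflow can occur in a table built entirely from ordinary types once the combined variable-length data in a row exceeds the in-row limit. A contrived sketch (the names are mine, not from the table in question):

    CREATE TABLE dbo.NoLobsHere
    (
        id integer IDENTITY PRIMARY KEY,
        a  varchar(5000) NOT NULL,
        b  varchar(5000) NOT NULL
    );

    -- The two values cannot both fit in-row, so one is pushed to ROW_OVERFLOW_DATA pages
    INSERT dbo.NoLobsHere (a, b)
    VALUES (REPLICATE('a', 5000), REPLICATE('b', 5000));

    SET STATISTICS IO ON;

    -- Touching the off-row value reports lob logical reads, despite there being no LOB columns
    SELECT nlh.b FROM dbo.NoLobsHere AS nlh;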

Thursday 17 February 2011

Seeking Without Indexes

A seek can contain one or more seek predicates, each of which can either identify (at most) one row in a unique index (a singleton lookup) or a range of values (a range scan).

When looking at an execution plan, we often need to look at the details of the seek operator in the Properties window to see how many operations it is performing, and what type of operation each one is.

As seen in the first post of this mini-series (When is a Seek not a Seek?), the number of hidden seeking operations can have an appreciable impact on performance.
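
For instance, a short IN list against a unique clustered key can compile into a single Clustered Index Seek whose properties show one singleton seek predicate per listed value (AdventureWorks; the key values are arbitrary):

    SELECT th.TransactionID, th.ProductID, th.Quantity
    FROM Production.TransactionHistory AS th
    WHERE th.TransactionID IN (100001, 100002, 100003);  -- typically one singleton lookup each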

Tuesday 14 December 2010

Beware Sneaky Reads with Unique Indexes

I saw a question asked recently on the #sqlhelp hash tag:

Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn’t referenced in the query?

Leaving aside trivial cases like selecting a computed column that does reference the LOB data, one might be tempted to say that no, SQL Server does not read data you haven’t asked for.

In general, that is correct; however, there are cases where SQL Server might sneakily read a LOB column.

Thursday 4 November 2010

Myth: SQL Server Caches a Serial Plan with every Parallel Plan

Many people believe that whenever SQL Server creates an execution plan that uses parallelism, an alternative serial plan is also cached.

The idea seems to be that the execution engine then decides between the parallel and serial alternatives at runtime. I’ve seen this on forums, in blogs, and even in books.

In fairness, a lot of the official documentation is not as clear as it might be on the subject. In this post I will show that only a single (parallel) plan is cached. I will also show that SQL Server can execute a parallel plan on a single thread.
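
One way to check this for yourself is to run a test query that produces a parallel plan, and then look for its entries in the plan cache. A rough sketch:

    SELECT
        cp.usecounts,
        cp.cacheobjtype,
        cp.objtype,
        st.text,
        qp.query_plan
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
    WHERE st.text LIKE N'%distinctive text from the test query%'
        AND st.text NOT LIKE N'%dm_exec_cached_plans%';  -- exclude this search itself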

Monday 1 November 2010

The Case of the Missing Shared Locks

This post covers a little-known locking optimization that provides a surprising answer to the question:

If I hold an exclusive lock on a row, can another transaction running at the default read committed isolation level read it?

Most people would answer ‘no’, on the basis that the read would block when it tried to acquire a shared lock. Others might respond that it depends on whether the READ_COMMITTED_SNAPSHOT database option was in effect, but let’s assume that is not the case, and we are dealing simply with the default (locking) read committed isolation level.
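
If you want to experiment along with the post, the basic setup is just two sessions. A minimal sketch with an illustrative table:

    -- Session 1: acquire and hold an exclusive row lock
    BEGIN TRANSACTION;
        UPDATE dbo.LockDemo SET col1 = col1 WHERE pk = 1;
        -- leave this transaction open

    -- Session 2: default (locking) read committed
    SELECT col1 FROM dbo.LockDemo WHERE pk = 1;
    -- Whether (and when) this second query blocks is the subject of the post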

Thursday 23 September 2010

A Tale of Two Index Hints

If you look up Table Hints in the official documentation, you’ll find the following statements:

If a clustered index exists, INDEX(0) forces a clustered index scan and INDEX(1) forces a clustered index scan or seek.

If no clustered index exists, INDEX(0) forces a table scan and INDEX(1) is interpreted as an error.

The interesting thing there is that both hints can result in a scan. If that is the case, you might wonder if there is any effective difference between the two.

This blog entry explores that question, and highlights an optimizer quirk that can result in a much less efficient query plan when using INDEX(0). I’ll also cover some stuff about ordering guarantees.
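
For concreteness, the two hints are written like this (AdventureWorks; TransactionID is the clustering key, so a seek is at least possible in the second case):

    -- INDEX(0): forces a scan of the clustered index
    SELECT th.*
    FROM Production.TransactionHistory AS th WITH (INDEX(0))
    WHERE th.TransactionID BETWEEN 100000 AND 100100;

    -- INDEX(1): clustered index scan or seek, at the optimizer's discretion
    SELECT th.*
    FROM Production.TransactionHistory AS th WITH (INDEX(1))
    WHERE th.TransactionID BETWEEN 100000 AND 100100;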

Friday 27 August 2010

Sorting, Row Goals, and the TOP 100 Problem

When you write a query to return the first few rows from a potential result set, you’ll often use the TOP clause.

To give a precise meaning to the TOP operation, it will normally be accompanied by an ORDER BY clause. Together, the TOP…ORDER BY construction can be used to precisely identify which top ‘n’ rows should be returned.
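
For example (AdventureWorks), the following asks for the 100 most recent transactions, and the ORDER BY is what makes 'most recent' well defined:

    SELECT TOP (100)
        th.TransactionID,
        th.TransactionDate,
        th.ProductID
    FROM Production.TransactionHistory AS th
    ORDER BY th.TransactionDate DESC;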

Sunday 22 August 2010

Row Goals and Grouping

You might recall from Inside the Optimizer: Row Goals In Depth that query plans containing a row goal tend to favour nested loops or sort-free merge join over hashing.

This is because a hash join has to fully process its build input (to populate its hash table) before it can start probing for matches on its other input. Hash join therefore has a high start-up cost, balanced by a lower per-row cost once probing begins.

In this post, we will take a look at how row goals affect grouping operations.
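
The general query shape involved is a row goal (here from TOP) sitting above an aggregate. A sketch, not necessarily the example used in the post:

    -- The TOP sets a row goal above the aggregate; the interesting question is how that
    -- goal affects the choice between hash-based and stream-based grouping strategies
    SELECT TOP (10)
        th.ProductID,
        COUNT_BIG(*) AS transaction_count
    FROM Production.TransactionHistory AS th
    GROUP BY th.ProductID;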

Wednesday 18 August 2010

Inside the Optimizer: Row Goals In Depth

Background

One of the core assumptions made by the SQL Server query optimizer cost model is that clients will eventually consume all the rows produced by a query.

This results in plans that favour the lowest overall execution cost, even though such a plan may take longer to begin producing rows.
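
A row goal overrides that assumption: TOP clauses, EXISTS tests, and the FAST query hint all ask the optimizer to cost the plan for returning the first few rows quickly instead. For example (AdventureWorks):

    -- OPTION (FAST 10) asks for a plan that returns the first 10 rows quickly,
    -- rather than one costed for returning the complete result set
    SELECT th.TransactionID, th.ProductID, th.TransactionDate
    FROM Production.TransactionHistory AS th
    WHERE th.ActualCost > 1000
    ORDER BY th.TransactionDate
    OPTION (FAST 10);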

Wednesday 11 August 2010

The Impact of Non-Updating Updates

From time to time, I encounter a system design that always issues an UPDATE against the database after a user has finished working with a record — without checking to see if any of the data was in fact altered.

The prevailing wisdom seems to be “the database will sort it out”. This raises an interesting question: How smart is SQL Server in these circumstances?

In this post, I’ll look at a generalisation of this problem: What is the impact of updating a column to the value it already contains?

The specific questions I want to answer are listed below, with a small test sketch after the list:

  • Does this kind of UPDATE generate any log activity?
  • Do data pages get marked as dirty (and so eventually get written out to disk)?
  • Does SQL Server bother doing the update at all?
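
A rough way to start investigating (illustrative table and column names; run inside an explicit transaction so the effects are easy to inspect and roll back):

    BEGIN TRANSACTION;

        -- 'Update' a column to the value it already contains
        UPDATE dbo.SomeTable
        SET some_col = some_col
        WHERE pk = 1;

        -- How much transaction log has the current session's transaction generated?
        SELECT dt.database_transaction_log_bytes_used
        FROM sys.dm_tran_session_transactions AS st
        JOIN sys.dm_tran_database_transactions AS dt
            ON dt.transaction_id = st.transaction_id
        WHERE st.session_id = @@SPID
            AND dt.database_id = DB_ID();

    ROLLBACK TRANSACTION;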

Thursday 5 August 2010

Iterators, Query Plans, and Why They Run Backwards

Iterators

SQL Server uses an extensible architecture for query optimization and execution, using iterators as the basic building blocks.

Iterators are probably most familiar in their graphical showplan representation, where each icon represents a single iterator. They also show up in XML query plan output as RelOp nodes.
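
As a rough illustration of what those nodes contain, the RelOp elements of a cached plan can be listed with an XQuery sketch like this (the attribute names come from the standard showplan schema):

    WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
    SELECT
        relop.value('@NodeId', 'int')               AS node_id,
        relop.value('@PhysicalOp', 'nvarchar(128)') AS physical_op,
        relop.value('@LogicalOp', 'nvarchar(128)')  AS logical_op,
        relop.value('@EstimateRows', 'float')       AS estimated_rows
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
    CROSS APPLY qp.query_plan.nodes('//RelOp') AS n (relop)
    WHERE cp.objtype = N'Adhoc';  -- narrow this to the plan of interest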

Each iterator performs a single simple function, such as applying a filtering condition, or performing an aggregation. It can represent a logical operation, a physical operation, or (most often) both.

Saturday 31 July 2010

Inside the Optimizer: Constructing a Plan – Part 4

More undocumented ways to explore how the query optimizer works.

Inside the Optimizer: Constructing a Plan – Part 3

Presenting an undocumented Dynamic Management View we can use to identify the optimization rules involved in producing an executable plan.
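
Without spoiling the details, the general technique with a view like that is to snapshot its per-rule counters, compile the query of interest, and then diff the counters. A sketch using the undocumented sys.dm_exec_query_transformation_stats (its columns and behaviour are undocumented and may vary by version):

    -- Snapshot the cumulative optimizer rule statistics
    SELECT qts.name, qts.succeeded
    INTO #before
    FROM sys.dm_exec_query_transformation_stats AS qts;

    -- ...compile the query of interest here (for example with OPTION (RECOMPILE))...

    -- Rules whose 'succeeded' count increased took part in building the plan
    SELECT a.name,
           a.succeeded - b.succeeded AS times_succeeded
    FROM sys.dm_exec_query_transformation_stats AS a
    JOIN #before AS b
        ON b.name = a.name
    WHERE a.succeeded > b.succeeded;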

Thursday 29 July 2010

Inside the Optimizer: Constructing a Plan - Part 2

Continuing the series of posts looking at how the optimizer matches and applies internal rules to refine a query plan.

The last post ended with this query plan:

The optimizer has pushed the predicate ProductNumber LIKE 'T%' down from a Filter to the Index Scan on the Product table, but it remains as a residual predicate.
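
The kind of query that produces such a shape is straightforward. A sketch, not necessarily the exact query used in the series:

    -- The LIKE can appear either as a separate Filter operator above the scan,
    -- or pushed into the scan itself as its (residual) Predicate property
    SELECT p.ProductID, p.Name
    FROM Production.Product AS p
    WHERE p.ProductNumber LIKE 'T%';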