About This Blog

Including my content from SQLBlog.com and some from SQLPerformance.com
Showing posts with label Internals.

Friday 17 August 2012

Temporary Table Caching Explained

SQL Server (since 2005) caches temporary tables and table variables referenced in stored procedures for reuse, reducing contention on tempdb allocation structures and catalogue tables.

A number of things can prevent this caching (none of which are allowed when working with table variables):

  • Named constraints (bad idea anyway, since concurrent executions can cause a name collision)
  • DDL after creation (though what is considered DDL is interesting)
  • Creation using dynamic SQL
  • Table created in a different scope
  • Procedure executed using WITH RECOMPILE

Temporary objects are often created and destroyed at a high rate in production systems, so caching can be an important optimization.
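
As a minimal sketch (hypothetical procedure and constraint names), the first bullet point above looks like this in practice: the named primary key prevents caching of the temporary table, while the inline, system-named version keeps it cacheable.

    -- Hypothetical sketch: a named constraint prevents temporary table
    -- caching (and risks name collisions under concurrent execution).
    CREATE PROCEDURE dbo.NotCached
    AS
    BEGIN
        CREATE TABLE #Work
        (
            id integer NOT NULL,
            CONSTRAINT PK_Work PRIMARY KEY (id)   -- named constraint: no caching
        );
    END;
    GO

    -- The system-named, inline constraint keeps the object cacheable.
    CREATE PROCEDURE dbo.Cached
    AS
    BEGIN
        CREATE TABLE #Work
        (
            id integer NOT NULL PRIMARY KEY       -- system-named: caching allowed
        );
    END;
    GO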

Thursday 3 May 2012

Parallel Execution Plans Suck

Summary: A deep dive into SQL Server parallelism, and a potential performance problem with parallel plans that use TOP.

Tuesday 1 May 2012

Query Optimizer Deep Dive - Part 4

This is the final part in a series of posts based on the content of the Query Optimizer Deep Dive presentations I have given over the last month or so at the Auckland SQL Users’ Group, and SQL Saturday events in Wellington, New Zealand and Adelaide, Australia.

Links to other parts of this series: Part 1, Part 2, Part 3

Beating the Optimizer

Our AdventureWorks test query produces an optimized physical execution plan that is quite different from the logical form of the query.

The estimated cost of the execution plan shown below is 0.0295 units.

Optimizer plan

Since we know the database schema very well, we might wonder why the optimizer did not choose to use the unique nonclustered index on Name in the Product table to filter rows based on the LIKE predicate.
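
Purely as an illustration of the idea, and not the query or technique used in the post, an index hint can force that access path. This assumes the AdventureWorks unique nonclustered index on Production.Product(Name) is named AK_Product_Name.

    -- Illustration only: force the nonclustered index on Name for a
    -- LIKE predicate (index name AK_Product_Name assumed).
    SELECT P.ProductID, P.Name
    FROM Production.Product AS P WITH (INDEX(AK_Product_Name))
    WHERE P.Name LIKE N'K%';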

Sunday 29 April 2012

Query Optimizer Deep Dive – Part 3

This is the third in a series of posts based on the content of the Query Optimizer Deep Dive presentations I have given over the last month or so at the Auckland SQL Users’ Group, and SQL Saturday events in Wellington, New Zealand and Adelaide, Australia.

Links to other parts of this series: Part 1, Part 2, Part 4

Storage of Alternative Plans

We saw in part 2 how optimizer rules are used to explore logical alternatives for parts of the query tree, and how implementation rules are used to find physical operations to perform each logical step.

To keep track of all these options, the cost-based part of the SQL Server query optimizer uses a structure called the Memo. This structure is part of the Cascades general optimization framework developed by Goetz Graefe.
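
The full post goes on to examine the Memo contents directly using undocumented trace flags. As a rough sketch only, and strictly for a test system, this assumes trace flag 8615 displays the final memo and 3604 redirects trace output to the client, as described in the post.

    -- Undocumented and unsupported; exploration on a test system only.
    -- Assumes TF 3604 (trace output to client) and TF 8615 (final memo).
    SELECT P.Name
    FROM Production.Product AS P
    WHERE P.Name LIKE N'K%'
    OPTION (QUERYTRACEON 3604, QUERYTRACEON 8615);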

Saturday 28 April 2012

Query Optimizer Deep Dive – Part 2

This is the second in a series of posts based on the content of the Query Optimizer Deep Dive presentations I have given over the last month or so at the Auckland SQL Users’ Group, and SQL Saturday events in Wellington, New Zealand and Adelaide, Australia.

Links to other parts of this series: Part 1, Part 3, Part 4

Cost-Based Optimization Overview

The input to cost-based optimization is a tree of logical operations produced by the previous optimization stages discussed in part one.

Cost-based optimization takes this logical tree, explores logical alternatives (different logical tree shapes that will always produce the same results), generates physical implementations, assigns an estimated cost to each, and finally chooses the cheapest physical option overall.

The goal of cost-based optimization is not to find the best possible physical execution plan by exploring every possible alternative. Rather, the goal is to find a good plan quickly.
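
As an aside, not taken from the original post, one way to observe this activity in aggregate is the sys.dm_exec_query_optimizer_info DMV, which exposes cumulative counters for the optimization phases.

    -- Cumulative optimizer activity, including the cost-based search
    -- phases, since the instance started.
    SELECT counter, occurrence, value
    FROM sys.dm_exec_query_optimizer_info
    WHERE counter IN ('trivial plan', 'search 0', 'search 1', 'search 2');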

Query Optimizer Deep Dive - Part 1

This is the first in a series of posts based on the content of the Query Optimizer Deep Dive presentations I have given over the last month or so at the Auckland SQL Users’ Group, and SQL Saturday events in Wellington, New Zealand and Adelaide, Australia.

Links to other parts of this series: Part 2, Part 3, Part 4

Introduction

The motivation for writing these sessions was the realization that relatively few people have a good intuition for the way the optimizer works. This is partly because the official documentation is rather sparse, and partly because what information is available is dispersed across many books and blog posts.

The content presented here is very much geared to my preferred way of learning. It shows the concepts in what seems to me to be a reasonably logical sequence, and then provides tools to enable the interested reader to explore further, if desired.

Monday 12 March 2012

Fun with Scalar and Vector Aggregates

There are interesting things to be learned from even the simplest queries.

For example, imagine you are asked to write a query that lists AdventureWorks product names, where the product has at least one entry in the transaction history table, but fewer than ten.
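
A natural first attempt might look like the sketch below (not necessarily the exact query discussed in the post). The inner join guarantees at least one matching history row per product, so only the upper bound needs a HAVING clause.

    -- Group the matching history rows per product and filter on the count.
    SELECT P.Name
    FROM Production.Product AS P
    JOIN Production.TransactionHistory AS TH
        ON TH.ProductID = P.ProductID
    GROUP BY P.Name
    HAVING COUNT_BIG(*) < 10;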

Friday 23 December 2011

Forcing a Parallel Query Execution Plan

This article is for SQL Server developers who have experienced the special kind of frustration that only comes from spending hours trying to convince the query optimizer to generate a parallel execution plan.

This situation often occurs when making an apparently innocuous change to the text of a moderately complex query — a change which somehow manages to turn a parallel plan that executes in ten seconds, into a five-minute serially-executing monster.
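
For testing purposes only, there are undocumented ways to encourage a parallel plan; a hedged sketch follows, assuming trace flag 8649 behaves as described in the full post, and that the USE HINT form is available on the (much newer) build in question.

    -- Undocumented/unsupported; for experimentation only.
    SELECT COUNT_BIG(*)
    FROM Production.TransactionHistory AS TH
    OPTION (QUERYTRACEON 8649);

    -- On newer builds, a query-level hint is an alternative:
    -- OPTION (USE HINT ('ENABLE_PARALLEL_PLAN_PREFERENCE'));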

Tuesday 9 August 2011

SQL Server, Seeks, and Binary Search

The following table summarizes the results from my last two articles, Enforcing Uniqueness for Performance and Avoiding Uniqueness for Performance. It shows the CPU time used when performing 5 million clustered index seeks into a unique or non-unique index:

Test summary

In test 1, making the clustered index unique improved performance by around 40%.

In test 2, making the same change reduced performance by around 70% (on 64-bit systems – more on that later).

Tuesday 19 July 2011

Join Performance, Implicit Conversions, and Residuals

Introduction

You probably already know that it’s important to be aware of data types when writing queries, and that implicit conversions between types can lead to poor query performance.

Some people have gone so far as to write scripts to search the plan cache for CONVERT_IMPLICIT elements, and others routinely inspect plans for that type of thing when tuning.

Now, that’s all good, as far as it goes. It may surprise you to learn that not all implicit conversions are visible in query plans, and there are other important factors to consider too.
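
As a minimal illustration (hypothetical table and column names), the classic visible case is comparing a varchar column with an nvarchar value: data type precedence converts the column side, CONVERT_IMPLICIT appears in the plan, and, depending on collation, an efficient index seek may be lost.

    -- Hypothetical schema: AccountNumber is varchar, the parameter is nvarchar.
    DECLARE @acct nvarchar(20) = N'AW00000123';

    SELECT A.AccountID
    FROM dbo.Accounts AS A          -- hypothetical table
    WHERE A.AccountNumber = @acct;  -- varchar column compared with nvarchar value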

Saturday 5 March 2011

How Parallelism Works in SQL Server

You might have noticed that January was a quiet blogging month for me.

Part of the reason was that I was working on an article for Simple Talk, looking at how parallel query execution really works. The first part is published today at:

Understanding and Using Parallelism in SQL Server.

This introductory piece is not quite as technical as usual, but I hope there is enough interesting material there to make it worth a read.


Wednesday 23 February 2011

Advanced T-SQL Tuning: Why Internals Knowledge Matters

There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes. Query tuning is not complete as soon as the query returns results quickly in the development or test environments.

In production, your query will compete for memory, CPU, locks, I/O, and other resources on the server. Today’s post looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better T-SQL.

Saturday 19 February 2011

I see no LOBs!

Is it possible to see LOB (large object) logical reads from STATISTICS IO output on a table with no LOB columns?

I was asked this question today by someone who had spent a good fraction of their afternoon trying to work out why this was occurring — even going so far as to re-run DBCC CHECKDB to see if corruption was the cause.

The table in question wasn’t particularly pretty. It had grown somewhat organically over time, with new columns being added every so often as the need arose.

Nevertheless, it remained a simple structure with no LOB columns — no text or image, no xml, no max types — nothing aside from ordinary integer, money, varchar, and datetime types.

To add to the air of mystery, not every query that ran against the table would report LOB logical reads — just sometimes — but when it did, the query often took much longer to execute.
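
One way to reproduce the symptom, though not necessarily the exact scenario from the post, is row-overflow data: when the total row size exceeds the 8,060-byte in-row limit, wide variable-length values are pushed off-row, and STATISTICS IO reports reads of those pages under the LOB counters.

    -- Hypothetical repro: no LOB columns, yet lob logical reads appear.
    CREATE TABLE dbo.NoLobs
    (
        id integer NOT NULL PRIMARY KEY,
        a  varchar(5000) NOT NULL,
        b  varchar(5000) NOT NULL
    );

    -- This row exceeds the 8,060-byte in-row limit, so data overflows.
    INSERT dbo.NoLobs (id, a, b)
    VALUES (1, REPLICATE('a', 5000), REPLICATE('b', 5000));

    SET STATISTICS IO ON;
    SELECT id, a, b FROM dbo.NoLobs;   -- reports lob logical reads
    SET STATISTICS IO OFF;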

Thursday 17 February 2011

Seeking Without Indexes

A seek can contain one or more seek predicates, each of which can either identify (at most) one row in a unique index (a singleton lookup) or a range of values (a range scan).

When looking at an execution plan, we often need to look at the details of the seek operator in the Properties window to see how many operations it is performing, and what type of operation each one is.

As seen in the first post of this mini-series (When is a Seek not a Seek?), the number of hidden seeking operations can have an appreciable impact on performance.
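
As a quick illustration (not necessarily the post's demo), an IN list against a unique index typically appears as a single seek operator performing several singleton lookups, visible under Seek Predicates in the Properties window.

    -- One Clustered Index Seek operator, three separate singleton lookups.
    SELECT P.ProductID, P.Name
    FROM Production.Product AS P
    WHERE P.ProductID IN (316, 708, 870);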

Tuesday 14 December 2010

Beware Sneaky Reads with Unique Indexes

I saw a question asked recently on the #sqlhelp hash tag:

Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn’t referenced in the query?

Leaving aside trivial cases like selecting a computed column that does reference the LOB data, one might be tempted to say that no, SQL Server does not read data you haven’t asked for.

In general, that is correct; however, there are cases where SQL Server might sneakily read a LOB column.

Thursday 4 November 2010

Myth: SQL Server Caches a Serial Plan with every Parallel Plan

Many people believe that whenever SQL Server creates an execution plan that uses parallelism, an alternative serial plan is also cached.

The idea seems to be that the execution engine then decides between the parallel and serial alternatives at runtime. I’ve seen this on forums, in blogs, and even in books.

In fairness, a lot of the official documentation is not as clear as it might be on the subject. In this post I will show that only a single (parallel) plan is cached. I will also show that SQL Server can execute a parallel plan on a single thread.
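
The demonstration relies on inspecting the plan cache. A generic sketch of that kind of check (not the exact script from the post) lists the cached entries matching a given query text, so it is easy to see how many plans are actually stored.

    -- Generic plan cache inspection: list cached entries for a query.
    SELECT cp.usecounts, cp.cacheobjtype, cp.objtype, st.text
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE st.text LIKE N'%distinctive query text here%';  -- filter to the test query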

Monday 1 November 2010

The Case of the Missing Shared Locks

This post covers a little-known locking optimization that provides a surprising answer to the question:

If I hold an exclusive lock on a row, can another transaction running at the default read committed isolation level read it?

Most people would answer ‘no’, on the basis that the read would block when it tried to acquire a shared lock. Others might respond that it depends on whether the READ_COMMITTED_SNAPSHOT database option was in effect, but let’s assume that is not the case, and we are dealing simply with the default (locking) read committed isolation level.
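
A sketch of the kind of two-session experiment involved (hypothetical table, not the post's exact demo); whether the second session blocks is exactly what the locking optimization in the full post explains.

    -- Session 1: take and hold an exclusive row lock.
    BEGIN TRANSACTION;
        SELECT id
        FROM dbo.SomeTable WITH (XLOCK, ROWLOCK)   -- hypothetical table
        WHERE id = 1;
        -- leave the transaction open

    -- Session 2 (default locking read committed): read the same row.
    SELECT id
    FROM dbo.SomeTable
    WHERE id = 1;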

Thursday 23 September 2010

A Tale of Two Index Hints

If you look up Table Hints in the official documentation, you’ll find the following statements:

If a clustered index exists, INDEX(0) forces a clustered index scan and INDEX(1) forces a clustered index scan or seek.

If no clustered index exists, INDEX(0) forces a table scan and INDEX(1) is interpreted as an error.

The interesting thing there is that both hints can result in a scan. If that is the case, you might wonder if there is any effective difference between the two.

This blog entry explores that question, and highlights an optimizer quirk that can result in a much less efficient query plan when using INDEX(0). I’ll also cover some stuff about ordering guarantees.
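
For reference, the two hints side by side, as a sketch against a hypothetical table that has a clustered index:

    SELECT * FROM dbo.Example WITH (INDEX(0)) WHERE id = 42;  -- forces a clustered index scan
    SELECT * FROM dbo.Example WITH (INDEX(1)) WHERE id = 42;  -- clustered index scan or seek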

Friday 27 August 2010

Sorting, Row Goals, and the TOP 100 Problem

When you write a query to return the first few rows from a potential result set, you’ll often use the TOP clause.

To give a precise meaning to the TOP operation, it will normally be accompanied by an ORDER BY clause. Together, the TOP…ORDER BY construction can be used to precisely identify which top ‘n’ rows should be returned.
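
For example (an illustrative AdventureWorks query, not necessarily the one analysed in the post), the ORDER BY makes the choice of rows deterministic, up to ties on the sort key:

    -- Return the 100 highest-value orders.
    SELECT TOP (100)
        SOH.SalesOrderID, SOH.TotalDue
    FROM Sales.SalesOrderHeader AS SOH
    ORDER BY SOH.TotalDue DESC;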

Sunday 22 August 2010

Row Goals and Grouping

You might recall from Inside the Optimizer: Row Goals In Depth that query plans containing a row goal tend to favour nested loops or sort-free merge join over hashing.

This is because a hash join has to fully process its build input (to populate its hash table) before it can start probing for matches on its other input. Hash join therefore has a high start-up cost, balanced by a lower per-row cost once probing begins.

In this post, we will take a look at how row goals affect grouping operations.
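
For experimentation, a row goal can be introduced explicitly, for example with a FAST hint (a sketch, not the post's demo); comparing the hinted and unhinted plans often shows the shift away from hash-based operators.

    -- OPTION (FAST 1) sets a row goal of one row on the grouping query.
    SELECT TH.ProductID, COUNT_BIG(*) AS Transactions
    FROM Production.TransactionHistory AS TH
    GROUP BY TH.ProductID
    OPTION (FAST 1);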