Channel: Erik Darling – Brent Ozar Unlimited®

Memory Grants and Data Size


What does memory do?

In SQL Server, just about everything. We cache our data and execution plans in it (along with some other system stuff), it gets used as scratch space for temporary objects, and queries use it for certain operations, most notably sorting and hashing. And of course, now Hekaton comes along to eat up even more of our RAM.

In general, not having enough of it means reading pages from disk all the time, but it can have RAMifications down the line (GET IT?!), like queries not having enough memory to compile or run, and your plan cache constantly being wiped out.

If you’re struggling with the limits of Standard Edition, older hardware, bad hardware choices, or budget issues, you may not be able to adequately throw hardware at the problem. Instead, you’re left spending far more expensive staff time trying to mitigate the issues. That means query and index tuning, perhaps Resource Governor if you’ve made some EXTRA BAD choices, and last but not least: cleaning up data types.

How can this help?

Leaving aside the chance to maybe make your storage admins happy, you can also cut down on large memory grants for some queries. Here’s a quick example.

We’ll create a simple table. In order to make Joe Celko happy, it has a PK/CX. We have an integer column that we’ll use to ORDER BY. The reason for this is that if you order by a column that doesn’t have a supporting index, SQL Server will need a memory grant to sort the results. The VARCHAR columns are just there to show you how memory grants increase to account for larger chunks of data.

CREATE TABLE dbo.MemoryGrants
(
    ID INT PRIMARY KEY CLUSTERED,
    Ordering INT,
    Crap1 VARCHAR(10),
    Crap2 VARCHAR(1000),
    Crap3 VARCHAR(8000)
);

/* The REPLICATE expressions below all evaluate to empty strings (n * k % k = 0).
   That's fine for this demo: sort memory grants are estimated from the declared
   column widths, not from the data that's actually in the rows. */
INSERT dbo.MemoryGrants WITH (TABLOCK)
    ( ID, Ordering, Crap1, Crap2, Crap3 )
SELECT
    c,
    c % 1000,
    REPLICATE('X', c * 10 % 10),
    REPLICATE('Y', c * 1000 % 1000),
    REPLICATE('Z', c * 8000 % 8000)
FROM (
    SELECT TOP (100000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS c
    FROM sys.messages AS m
    CROSS JOIN sys.messages AS m2
) AS x;

SELECT mg.ID --Returning one column
FROM dbo.MemoryGrants AS mg
ORDER BY mg.Ordering
GO 

SELECT mg.ID, mg.Ordering --Returning a couple INTs
FROM dbo.MemoryGrants AS mg
ORDER BY mg.Ordering
GO 

SELECT mg.ID, mg.Ordering, mg.Crap1 --Returning a VARCHAR(10)
FROM dbo.MemoryGrants AS mg
ORDER BY mg.Ordering
GO 

SELECT mg.ID, mg.Ordering, mg.Crap2 --Returning a VARCHAR(1000)
FROM dbo.MemoryGrants AS mg
ORDER BY mg.Ordering
GO 

SELECT mg.ID, mg.Ordering, mg.Crap3 --Returning a VARCHAR(8000)
FROM dbo.MemoryGrants AS mg
ORDER BY mg.Ordering
GO 


SELECT mg.ID, mg.Ordering, mg.Crap1, mg.Crap2, mg.Crap3 --Returning all columns
FROM dbo.MemoryGrants AS mg
ORDER BY mg.Ordering
GO 

Some test queries

When we run the queries above, we can see the memory grants in the query plans, and thanks to fairly recent updates (2014 SP2, 2016 SP1), actual plans now include a warning about memory grant issues.

To make this a little easier to visualize, we’ll use an Extended Events session with a new event called query_memory_grant_usage. If you want to use this on one of your servers, change or remove the filter on session ID; 55 just happens to be the session ID I’m testing from.

CREATE EVENT SESSION [QueryMemoryGrantUsage] ON SERVER
ADD EVENT sqlserver.query_memory_grant_usage
(
    ACTION (sqlserver.sql_text)
    WHERE ([sqlserver].[session_id] = (55))
)
WITH
(
    MAX_MEMORY = 4096KB,
    EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
    MAX_DISPATCH_LATENCY = 30 SECONDS,
    MAX_EVENT_SIZE = 0KB,
    MEMORY_PARTITION_MODE = NONE,
    TRACK_CAUSALITY = OFF,
    STARTUP_STATE = OFF
);
GO
-- Note: no target is defined here, so start the session and use
-- Watch Live Data in SSMS to see the events as they fire.
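If you’d rather not babysit the live data viewer, you can bolt a ring_buffer target onto the session and query it back with XQuery. This is just a sketch: the payload column names I’m pulling (granted_memory_kb, used_memory_kb) are my best guess at the event’s fields, so check them against sys.dm_xe_object_columns first.

```sql
-- Sketch: add an in-memory target to the session defined above,
-- then read captured events back with XQuery.
ALTER EVENT SESSION [QueryMemoryGrantUsage] ON SERVER
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION [QueryMemoryGrantUsage] ON SERVER STATE = START;

/* ...run the test queries... */

-- List the event's actual payload columns, in case my guesses below are off:
SELECT oc.name, oc.type_name
FROM sys.dm_xe_object_columns AS oc
WHERE oc.object_name = N'query_memory_grant_usage'
  AND oc.column_type = N'data';

-- Shred the ring buffer into rows (column names assumed, per above):
SELECT
    xed.event_data.value('(@timestamp)[1]', 'datetime2') AS event_time,
    xed.event_data.value('(data[@name="granted_memory_kb"]/value)[1]', 'bigint') AS granted_memory_kb,
    xed.event_data.value('(data[@name="used_memory_kb"]/value)[1]', 'bigint') AS used_memory_kb,
    xed.event_data.value('(action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS sql_text
FROM
(
    SELECT CAST(t.target_data AS XML) AS target_data
    FROM sys.dm_xe_sessions AS s
    JOIN sys.dm_xe_session_targets AS t
        ON s.address = t.event_session_address
    WHERE s.name = N'QueryMemoryGrantUsage'
      AND t.target_name = N'ring_buffer'
) AS rb
CROSS APPLY rb.target_data.nodes('//RingBufferTarget/event') AS xed(event_data);
```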

Here’s what we get from our XE session.

Does anyone have a calculator?

Our query memory grants range from around 8 MB to around 560 MB. We’re not even ordering by the larger columns; they’re just along for the ride while we sort by an integer. Even if you’re a smarty pants and you don’t use unnecessary ORDER BY clauses in your queries, SQL Server may inject sorts into your query plans to support operations that require sorted data. Stream aggregates, merge joins, and occasionally key lookups may still be considered a ‘cheaper’ option by the optimizer, even with a sort in the plan.

Of course, in our query plans, we have warnings on the last two queries, which had to order the VARCHAR(8000) column.

Barf
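If you want to keep an eye on grants without a demo rig, newer builds also surface grant numbers in the plan cache. A sketch, assuming you’re on a build where sys.dm_exec_query_stats has the grant columns (they showed up around the same servicing updates mentioned above):

```sql
-- Sketch: find the cached queries that asked for the biggest grants,
-- and compare what they asked for to what they actually used.
SELECT TOP (10)
    qs.max_grant_kb,
    qs.max_used_grant_kb,
    qs.execution_count,
    SUBSTRING(st.text, 1, 200) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.max_grant_kb DESC;
```

For queries running right now, sys.dm_exec_query_memory_grants shows requested, granted, and used memory, plus anything stuck waiting on a grant.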

Sort it out

You may legitimately need large N/VARCHAR columns for certain things, but we frequently see pretty big columns being used to hold things that will just never approach the column size. I’m not going to sit here and chastise you for choosing datetime over date or bigint over int or whatever. Those are trivial in comparison. But especially when troubleshooting memory grant issues (or performance issues in general), foolishly wide columns can sometimes be an easy tuning win.
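The fix itself is mundane. A sketch against the demo table (the 100-character target is made up; measure your real data first):

```sql
-- Sketch: see how long the data actually gets before shrinking anything.
SELECT MAX(LEN(mg.Crap3)) AS longest_value
FROM dbo.MemoryGrants AS mg;

-- If nothing comes close to 8000 characters, right-size the column.
-- Shrinking fails if any existing value is too long, and ALTER COLUMN
-- without NOT NULL leaves the column nullable, so test on a copy first.
ALTER TABLE dbo.MemoryGrants
    ALTER COLUMN Crap3 VARCHAR(100);
```

Since sort grants estimate variable-length columns as about half full of their declared width, shrinking a declared width shrinks that column’s contribution to the grant roughly in proportion.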

Thanks for reading!

Brent says: whenever anybody asked me, “Why can’t I just use VARCHAR(1000) for all my string fields?” I didn’t really have a good answer. Now I do.


