SQL Server 2008 Page Compression and Multiple Processors (More is better... mostly)
Linchi Shea - SQL Server 2008 Page Compression: Using multiple processors
"SQL Server 2008 has introduced a long sought after feature -- Data Compression. This is a great feature, and I have no doubt it'll be widely used. The key compression method is called page compression, which uses the following three techniques to reduce the space taken up by duplicates on a page:
- Row compression. This technique changes the storage format of many fixed-length datatypes (char, int, money, binary, datetime, and so on) so that they occupy only the required number of bytes plus a small overhead.
- Prefix compression. This technique finds duplicate prefixes on a page for each column, and replaces each duplicate with a small reference number.
- Dictionary compression. This technique finds duplicate values on a page, collects them into a dictionary stored after the page header but before the data rows, and replaces the duplicate values with their corresponding offsets in the dictionary.
You can read more about SQL Server 2008 data compression in SQL2008 CTP6 Books Online.
In this post, I'll focus on a very specific question: How does the number of processors impact rebuilding a table with page compression? Note that one way to enable page compression on a table is to rebuild it with the option data_compression set to page. The following is an example:
..."
I thought this was a very nice, well-focused post on SQL Server 2008's new Page Compression feature and the effect multiple processors have on compression time.
Mostly, more is better, to a point...
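If you want to see the processor effect on your own hardware, a rough sketch of the comparison is below; the table name and the MAXDOP values are assumptions for illustration, not details taken from Linchi's tests:

    DECLARE @start datetime;

    -- Rebuild restricted to a single processor
    SET @start = GETDATE();
    ALTER TABLE dbo.SalesHistory REBUILD WITH (DATA_COMPRESSION = PAGE, MAXDOP = 1);
    SELECT DATEDIFF(second, @start, GETDATE()) AS rebuild_seconds_maxdop_1;

    -- Rebuild allowed to use up to eight processors
    SET @start = GETDATE();
    ALTER TABLE dbo.SalesHistory REBUILD WITH (DATA_COMPRESSION = PAGE, MAXDOP = 8);
    SELECT DATEDIFF(second, @start, GETDATE()) AS rebuild_seconds_maxdop_8;

MAXDOP caps the parallelism of the rebuild statement itself; the server-level max degree of parallelism setting is another way to control the same thing.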