Tuesday, October 05, 2010

Percolator. The question is: how does Google store all that index data and keep it updated?

InfoQ - Percolator: a System for Incrementally Processing Updates to a Large Data Set

“As the amount of data being collected and stored expands at astounding rates, scalability requirements once reserved to the Googles of the world are becoming more common and often require dedicated solutions. Daniel Peng and Frank Dabek just published a paper detailing Percolator, Google's indexing system which stores tens of petabytes of data and processes billions of updates per day on thousands of machines.

…”

Google Research - Large-scale Incremental Processing Using Distributed Transactions and Notifications

“Updating an index of the web as documents are crawled requires continuously transforming a large repository of existing documents as new documents arrive. This task is one example of a class of data processing tasks that transform a large repository of data via small, independent mutations. These tasks lie in a gap between the capabilities of existing infrastructure. Databases do not meet the storage or throughput requirements of these tasks: Google's indexing system stores tens of petabytes of data and processes billions of updates per day on thousands of machines. MapReduce and other batch-processing systems cannot process small updates individually as they rely on creating large batches for efficiency.

We have built Percolator, a system for incrementally processing updates to a large data set, and deployed it to create the Google web search index. By replacing a batch-based indexing system with an indexing system based on incremental processing using Percolator, we process the same number of documents per day, while reducing the average age of documents in Google search results by 50%.

…”
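To make the idea concrete, here’s a toy sketch of the pattern the abstract describes. This is my own illustration, not the paper’s code, and the Table API here is entirely hypothetical: writers commit small mutations to a shared table, and “observers” registered on a column run whenever that column changes, so only the affected documents get reprocessed instead of the whole repository.

    # Toy sketch of Percolator's incremental pattern (hypothetical API, not
    # the paper's actual C++ interface): writers commit small mutations to a
    # shared table, and "observers" registered on a column run when it changes.

    class Table:
        """In-memory stand-in for a Bigtable-like store with notifications."""

        def __init__(self):
            self.rows = {}       # (row, column) -> value
            self.observers = {}  # column -> list of callbacks

        def observe(self, column, callback):
            self.observers.setdefault(column, []).append(callback)

        def commit(self, writes):
            # Apply a batch of writes (toy version: no real transaction),
            # then notify the observers of every column that was touched.
            self.rows.update(writes)
            for (row, column) in writes:
                for callback in self.observers.get(column, []):
                    callback(row)

    table = Table()

    def count_words(row):
        # Observer: runs whenever "contents" changes for some row (URL).
        # A real indexer would parse links, update the index, and so on;
        # its own commits can trigger further observers downstream.
        contents = table.rows[(row, "contents")]
        table.commit({(row, "wordcount"): len(contents.split())})

    table.observe("contents", count_words)

    # A crawler delivers one new document; only this document is reprocessed,
    # instead of re-running a batch job over the entire repository.
    table.commit({("http://example.com", "contents"): "hello incremental world"})
    print(table.rows[("http://example.com", "wordcount")])  # -> 3

Running it prints 3: committing “contents” triggered the word-count observer, which committed its own write. Chaining observers like that is how a pipeline of indexing stages gets expressed without a batch job.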

Here’s a snip from the 14-page PDF:

[Image: snippet from the Percolator PDF]

I’m one of those guys who likes to see how something works. I can’t tell you the number of things I disassembled when I was younger (I just wish I’d been smart enough to put them back together… LOL) just to see what was “inside.” So when I saw that this paper gives a glimpse into how Google keeps its massive index, i.e. lets us see a little inside, I knew I had to check it out.

Sure, we’re not going to recreate Google with this paper, but seeing the thinking behind the index storage is just officially cool. (I also thought it was cool that there was even a column of code in the paper… ;)
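For flavor, here’s a rough Python paraphrase of that transactional style. The names and the dedup example are mine, illustrative rather than the actual API: a transaction buffers reads and writes across rows and applies them at commit, while the real system implements commit with snapshot isolation and two-phase commit over Bigtable.

    # A rough Python paraphrase of the transactional style in the paper's
    # code column (names and the dedup example are illustrative, not the
    # actual API). A transaction buffers writes and applies them at commit.

    class Transaction:
        def __init__(self, store):
            self.store = store
            self.writes = {}

        def get(self, row, column):
            # Read-your-own-writes first, then fall back to the store.
            key = (row, column)
            return self.writes.get(key, self.store.get(key))

        def set(self, row, column, value):
            self.writes[(row, column)] = value

        def commit(self):
            # Percolator does this with snapshot isolation and two-phase
            # commit over Bigtable; the toy version just applies the writes.
            self.store.update(self.writes)
            return True

    store = {}

    def update_document(url, contents):
        t = Transaction(store)
        t.set(url, "contents", contents)
        digest = hash(contents)
        # Dedup: the first document seen with a given content hash becomes
        # the canonical copy for that hash.
        if t.get(digest, "canonical-url") is None:
            t.set(digest, "canonical-url", url)
        return t.commit()

    update_document("http://a.example/", "same body")
    update_document("http://b.example/", "same body")
    print(store[(hash("same body"), "canonical-url")])  # -> http://a.example/

The point of the cross-row transaction is that the contents write and the dedup-table write land together or not at all, even though they live in different rows; that’s exactly the guarantee batch systems give you for free and incremental systems have to earn.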

1 comment:

  1. What happens with MapReduce now is interesting.

    Google's use of MapReduce was a common justification for it in many scenarios, including arguments that transactionally mutating a large dataset is not viable; if it were, the reasoning went, Google would be doing it.

    In fact, I have a blog entry on this: http://bigdatacraft.com/archives/240

