Saturday, November 28, 2009

Do you know Google BigTable?


BigTable development began in 2004, and it is now used by a number of Google applications, such as MapReduce (which is often used for generating and modifying data stored in BigTable[2]), Google Reader,[3] Google Maps,[4] Google Book Search, "My Search History", Google Earth, Blogger.com, Google Code hosting, Orkut,[4] and YouTube.[5] Google's reasons for developing its own database include scalability and better control of performance characteristics.[6]
BigTable is a fast and extremely large-scale DBMS. It departs from the typical convention of a fixed number of columns and is instead described by its authors as "a sparse, distributed multi-dimensional sorted map", sharing characteristics of both row-oriented and column-oriented databases. BigTable is designed to scale into the petabyte range across "hundreds or thousands of machines, and to make it easy to add more machines [to] the system and automatically start taking advantage of those resources without any reconfiguration".[7]
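To make "a sparse, distributed multi-dimensional sorted map" concrete, here is a minimal single-machine sketch in Python. The class and method names are my own invention for illustration, not Google's API: every value is addressed by a (row key, column key, timestamp) triple, cells that are never written cost nothing, and scans walk rows in sorted key order.

    import time

    class SparseSortedMap:
        """Toy model of the BigTable map:
        (row_key, column_key, timestamp) -> value.
        Sparse: only cells that are actually written take up space.
        Sorted: scans walk rows in lexicographic key order."""

        MAX_VERSIONS = 3  # keep a few timestamped versions per cell

        def __init__(self):
            # row -> column -> list of (timestamp, value), newest first
            self._rows = {}

        def put(self, row, column, value, timestamp=None):
            ts = timestamp if timestamp is not None else time.time()
            cell = self._rows.setdefault(row, {}).setdefault(column, [])
            cell.append((ts, value))
            cell.sort(key=lambda tv: tv[0], reverse=True)  # newest first
            del cell[self.MAX_VERSIONS:]  # drop versions beyond the limit

        def get(self, row, column, timestamp=None):
            """Newest version at or before `timestamp` (None = latest)."""
            for ts, value in self._rows.get(row, {}).get(column, []):
                if timestamp is None or ts <= timestamp:
                    return value
            return None  # the cell is simply absent: sparseness

        def scan(self, start_row, end_row):
            """Range scan in row-key order, as a scan over a tablet runs."""
            for row in sorted(self._rows):
                if start_row <= row < end_row:
                    yield row, self._rows[row]

For example, writing two timestamped versions of a "contents:" column for the reversed-URL row key used as the running example in the BigTable paper:

    t = SparseSortedMap()
    t.put("com.cnn.www", "contents:", "<html>v1</html>", timestamp=1)
    t.put("com.cnn.www", "contents:", "<html>v2</html>", timestamp=2)
    print(t.get("com.cnn.www", "contents:"))               # <html>v2</html>
    print(t.get("com.cnn.www", "contents:", timestamp=1))  # <html>v1</html>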
Each table has multiple dimensions (one of which is a field for time, allowing versioning). Tables are optimized for GFS by being split into multiple tablets: segments of the table split along a row chosen so that each tablet will be ~200 megabytes in size (one way to pick such a split row is sketched below). When a tablet threatens to grow beyond a specified limit, it is compressed using the BMDiff algorithm[8] (referenced in [9]) and the secret Zippy algorithm[10], which is described as a less space-optimal variation of LZO but more efficient in terms of computing time.

The locations of tablets in GFS are recorded as database entries in multiple special tablets called "META1" tablets. META1 tablets are found by querying the single "META0" tablet, which typically runs on a machine of its own, since it is often queried by clients for the location of the META1 tablet that in turn knows where the actual data is located. Like GFS's master server, the META0 server is not generally a bottleneck: the processor time and bandwidth needed to discover and transmit META1 locations are minimal, and clients aggressively cache locations to minimize queries.
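As a small illustration of the splitting rule above, here is one way to pick a split row near the size midpoint of an overgrown tablet. This is my own sketch, not Google's actual algorithm; only the ~200 MB target comes from the text above.

    def choose_split_row(row_sizes, target_bytes=200 * 2**20):
        """row_sizes: ordered list of (row_key, approx_size_bytes) for
        one tablet. Returns the row key at which to split, or None if
        the tablet is still under the size target."""
        total = sum(size for _, size in row_sizes)
        if total <= target_bytes:
            return None  # no split needed yet
        running = 0
        for row_key, size in row_sizes:
            running += size
            if running >= total // 2:
                # Rows from row_key onward start a new tablet; real
                # splitting would repeat as the halves keep growing.
                return row_key
        return None

    rows = [("aaa", 120 * 2**20), ("mmm", 150 * 2**20), ("zzz", 80 * 2**20)]
    print(choose_split_row(rows))  # "mmm"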
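The three-level lookup chain (META0 to META1 to data tablet), together with the aggressive client-side caching mentioned above, can be sketched like this. The classes and method names are invented for illustration; the point is only the pattern: two extra round trips on a cache miss, zero on a hit.

    class TabletLocationClient:
        """Toy sketch of tablet location lookup with client caching.
        meta0 resolves (table, row_key) to a META1 tablet server; the
        META1 server resolves the same pair to a data tablet's GFS
        location. Both server objects and their methods are invented
        for this illustration."""

        def __init__(self, meta0_server):
            self.meta0 = meta0_server
            self.cache = {}  # (table, row_key) -> data tablet location

        def locate(self, table, row_key):
            key = (table, row_key)
            # Fast path: cached locations mean most reads never touch
            # META0, which is why that single server rarely bottlenecks.
            if key in self.cache:
                return self.cache[key]
            # Slow path: two round trips, META0 first, then META1.
            meta1_server = self.meta0.lookup_meta1(table, row_key)
            location = meta1_server.lookup_tablet(table, row_key)
            self.cache[key] = location
            return location

        def invalidate(self, table, row_key):
            # Drop a stale entry, e.g. after a tablet moved or split,
            # forcing the next read to walk the META chain again.
            self.cache.pop((table, row_key), None)

In the real system a cached entry would presumably cover a whole row range rather than one row key, so a single miss warms the cache for many neighbouring reads; the per-key cache above just keeps the sketch short.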
