Building a Cloud Computing Platform Based on Hadoop — Graduation Project Translation


MapReduce: Simplified Data Processing on Large Clusters
Jeffrey Dean and Sanjay Ghemawat

Abstract
MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real world tasks are expressible in this model, as shown in the paper.
Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system.
Our implementation of MapReduce runs on a large cluster of commodity machines and is highly scalable: a typical MapReduce computation processes many terabytes of data on thousands of machines. Programmers find the system easy to use: hundreds of MapReduce programs have been implemented and upwards of one thousand MapReduce jobs are executed on Google's clusters every day.

1 Introduction
Over the past five years, the authors and many others at Google have implemented hundreds of special-purpose computations that process large amounts of raw data, such as crawled documents, web request logs, etc., to compute various kinds of derived data, such as inverted indices, various representations of the graph structure of web documents, summaries of the number of pages crawled per host, the set of most frequent queries in a given day, etc. Most such computations are conceptually straightforward. However, the input data is usually large and the computations have to be distributed across hundreds or thousands of machines in order to finish in a reasonable amount of time. The issues of how to parallelize the computation, distribute the data, and handle failures conspire to obscure the original simple computation with large amounts of complex code to deal with these issues.
As a reaction to this complexity, we designed a new abstraction that allows us to express the simple computations we were trying to perform but hides the messy details of parallelization, fault-tolerance, data distribution and load balancing in a library. Our abstraction is inspired by the map and reduce primitives present in Lisp and many other functional languages. We realized that most of our computations involved applying a map operation to each logical "record" in our input in order to compute a set of intermediate key/value pairs, and then applying a reduce operation to all the values that shared the same key, in order to combine the derived data appropriately. Our use of a functional model with user-specified map and reduce operations allows us to parallelize large computations easily and to use re-execution as the primary mechanism for fault tolerance.
The major contributions of this work are a simple and powerful interface that enables automatic parallelization and distribution of large-scale computations, combined with an implementation of this interface that achieves high performance on large clusters of commodity PCs.
Section 2 describes the basic programming model and gives several examples. Section 3 describes an implementation of the MapReduce interface tailored towards our cluster-based computing environment. Section 4 describes several refinements of the programming model that we have found useful. Section 5 has performance measurements of our implementation for a variety of tasks. Section 6 explores the use of MapReduce within Google, including our experiences in using it as the basis for a rewrite of our production indexing system. Section 7 discusses related and future work.

2 Programming Model
The computation takes a set of input key/value pairs, and produces a set of output key/value pairs. The user of the MapReduce library expresses the computation as two functions: Map and Reduce.
Map, written by the user, takes an input pair and produces a set of intermediate key/value pairs. The MapReduce library groups together all intermediate values associated with the same intermediate key I and passes them to the Reduce function.
The Reduce function, also written by the user, accepts an intermediate key I and a set of values for that key. It merges together these values to form a possibly smaller set of values. Typically just zero or one output value is produced per Reduce invocation. The intermediate values are supplied to the user's reduce function via an iterator. This allows us to handle lists of values that are too large to fit in memory.

2.1 Example
Consider the problem of counting the number of occurrences of each word in a large collection of documents. The user would write code similar to the following pseudo-code:

map(String key, String value):
  // key: document name
  // value: document contents
  for each word w in value:
    EmitIntermediate(w, "1");

reduce(String key, Iterator values):
  // key: a word
  // values: a list of counts
  int result = 0;
  for each v in values:
    result += ParseInt(v);
  Emit(AsString(result));

The map function emits each word plus an associated count of occurrences (just 1 in this simple example). The reduce function sums together all counts emitted for a particular word.
In addition, the user writes code to fill in a mapreduce specification object with the names of the input and output files, and optional tuning parameters. The user then invokes the MapReduce function, passing it the specification object. The user's code is linked together with the MapReduce library (implemented in C++). Appendix A contains the full program text for this example.
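The same computation can be expressed in any language with first-class functions. The following is a minimal, single-process sketch in Python, not the paper's C++ library: the map_fn/reduce_fn/word_count names and the in-memory grouping step are assumptions made only for illustration.

from collections import defaultdict
from typing import Dict, Iterator

def map_fn(key: str, value: str):
    # key: document name, value: document contents
    for word in value.split():
        yield (word, 1)

def reduce_fn(key: str, values: Iterator[int]) -> int:
    # key: a word, values: an iterator of counts for that word
    return sum(values)

def word_count(documents: Dict[str, str]) -> Dict[str, int]:
    # Group intermediate values by key, then reduce each group,
    # mimicking what the MapReduce library does between Map and Reduce.
    groups = defaultdict(list)
    for name, contents in documents.items():
        for word, count in map_fn(name, contents):
            groups[word].append(count)
    return {word: reduce_fn(word, iter(counts)) for word, counts in groups.items()}

print(word_count({"doc1": "the quick brown fox", "doc2": "the lazy dog"}))
# {'the': 2, 'quick': 1, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 1}

Passing an iterator to reduce_fn mirrors the iterator interface described above; in a real run the values behind that iterator could be streamed from disk rather than held in a list.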

2.2 Types
Even though the previous pseudo-code is written in terms of string inputs and outputs, conceptually the map and reduce functions supplied by the user have associated types:

  map    (k1, v1)        -> list(k2, v2)
  reduce (k2, list(v2))  -> list(v2)

I.e., the input keys and values are drawn from a different domain than the output keys and values. Furthermore, the intermediate keys and values are from the same domain as the output keys and values.
Our C++ implementation passes strings to and from the user-defined functions and leaves it to the user code to convert between strings and appropriate types.
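These type relationships can be written out with generics. A small sketch using Python's typing module (the alias names below are illustrative only, not part of any MapReduce library):

from typing import Callable, Iterator, Tuple, TypeVar

K1 = TypeVar("K1"); V1 = TypeVar("V1")   # input key/value domain
K2 = TypeVar("K2"); V2 = TypeVar("V2")   # intermediate/output domain

# map:    (k1, v1)       -> list(k2, v2)
MapFn = Callable[[K1, V1], Iterator[Tuple[K2, V2]]]
# reduce: (k2, list(v2)) -> list(v2)
ReduceFn = Callable[[K2, Iterator[V2]], Iterator[V2]]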

2.3 More Examples
Here are a few simple examples of interesting programs that can be easily expressed as MapReduce computations.

Distributed Grep: The map function emits a line if it matches a supplied pattern. The reduce function is an identity function that just copies the supplied intermediate data to the output.

Count of URL Access Frequency: The map function processes logs of web page requests and outputs (URL, 1). The reduce function adds together all values for the same URL and emits a (URL, total count) pair.

Reverse Web-Link Graph: The map function outputs (target, source) pairs for each link to a target URL found in a page named source. The reduce function concatenates the list of all source URLs associated with a given target URL and emits the pair (target, list(source)).

Term-Vector per Host: A term vector summarizes the most important words that occur in a document or a set of documents as a list of (word, frequency) pairs. The map function emits a (hostname, term vector) pair for each input document (where the hostname is extracted from the URL of the document). The reduce function is passed all per-document term vectors for a given host. It adds these term vectors together, throwing away infrequent terms, and then emits a final (hostname, term vector) pair.

Inverted Index: The map function parses each document, and emits a sequence of (word, document ID) pairs. The reduce function accepts all pairs for a given word, sorts the corresponding document IDs and emits a (word, list(document ID)) pair. The set of all output pairs forms a simple inverted index. It is easy to augment this computation to keep track of word positions.

Distributed Sort: The map function extracts the key from each record, and emits a (key, record) pair. The reduce function emits all pairs unchanged. This computation depends on the partitioning facilities described in Section 4.1 and the ordering properties described in Section 4.2.
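To make one of these concrete, the inverted index fits the map/reduce signatures directly. A rough Python sketch under the same assumptions as the earlier word-count example (the in-memory grouping again stands in for the library; function and variable names are illustrative):

from collections import defaultdict
from typing import Dict, List

def map_fn(doc_id: str, contents: str):
    # Emit a (word, document ID) pair for each word occurrence.
    for word in contents.split():
        yield (word, doc_id)

def reduce_fn(word: str, doc_ids):
    # Deduplicate and sort the document IDs to form the posting list.
    return (word, sorted(set(doc_ids)))

def inverted_index(documents: Dict[str, str]) -> Dict[str, List[str]]:
    groups = defaultdict(list)
    for doc_id, contents in documents.items():
        for word, did in map_fn(doc_id, contents):
            groups[word].append(did)
    return dict(reduce_fn(word, ids) for word, ids in groups.items())

print(inverted_index({"d1": "cloud data", "d2": "data platform"}))
# {'cloud': ['d1'], 'data': ['d1', 'd2'], 'platform': ['d2']}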

3 Implementation
Many different implementations of the MapReduce interface are possible. The right choice depends on the environment. For example, one implementation may be suitable for a small shared-memory machine, another for a large NUMA multi-processor, and yet another for an even larger collection of networked machines.
This section describes an implementation targeted to the computing environment in wide use at Google: large clusters of commodity PCs connected together with switched Ethernet [4]. In our environment:
(1) Machines are typically dual-processor x86 processors running Linux, with 2-4 GB of memory per machine.
(2) Commodity networking hardware is used, typically either 100 megabits/second or 1 gigabit/second at the machine level, but averaging considerably less in overall bisection bandwidth.
(3) A cluster consists of hundreds or thousands of machines, and therefore machine failures are common.
(4) Storage is provided by inexpensive IDE disks attached directly to individual machines. A distributed file system [8] developed in-house is used to manage the data stored on these disks. The file system uses replication to provide availability and reliability on top of unreliable hardware.
(5) Users submit jobs to a scheduling system. Each job consists of a set of tasks, and is mapped by the scheduler to a set of available machines within a cluster.

3.1 Execution Overview
The Map invocations are distributed across multiple machines by automatically partitioning the input data into a set of M splits. The input splits can be processed in parallel by different machines. Reduce invocations are distributed by partitioning the intermediate key space into R pieces using a partitioning function (e.g., hash(key) mod R). The number of partitions (R) and the partitioning function are specified by the user.
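The default rule is simply a hash of the intermediate key modulo R. A one-line sketch in Python (the choice of CRC32 is an assumption for illustration; the text only requires some deterministic hash):

import zlib

def partition(key: str, R: int) -> int:
    # hash(key) mod R; a stable hash keeps the key-to-partition mapping
    # deterministic across worker processes.
    return zlib.crc32(key.encode("utf-8")) % R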

Figure 1 shows the overall flow of a MapReduce operation in our implementation. [Figure 1: Execution overview]
When the user program calls the MapReduce function, the following sequence of actions occurs (the numbered labels in Figure 1 correspond to the numbers in the list below):
1. The MapReduce library in the user program first splits the input files into M pieces of typically 16 megabytes to 64 megabytes (MB) per piece (controllable by the user via an optional parameter). It then starts up many copies of the program on a cluster of machines.
2. One of the copies of the program is special: the master. The rest are workers that are assigned work by the master. There are M map tasks and R reduce tasks to assign. The master picks idle workers and assigns each one a map task or a reduce task.
3. A worker who is assigned a map task reads the contents of the corresponding input split. It parses key/value pairs out of the input data and passes each pair to the user-defined Map function. The intermediate key/value pairs produced by the Map function are buffered in memory.
4. Periodically, the buffered pairs are written to local disk, partitioned into R regions by the partitioning function. The locations of these buffered pairs on the local disk are passed back to the master, who is responsible for forwarding these locations to the reduce workers.
5. When a reduce worker is notified by the master about these locations, it uses remote procedure calls to read the buffered data from the local disks of the map workers. When a reduce worker has read all intermediate data, it sorts it by the intermediate keys so that all occurrences of the same key are grouped together. The sorting is needed because typically many different keys map to the same reduce task. If the amount of intermediate data is too large to fit in memory, an external sort is used.
6. The reduce worker iterates over the sorted intermediate data and for each unique intermediate key encountered, it passes the key and the corresponding set of intermediate values to the user's Reduce function. The output of the Reduce function is appended to a final output file for this reduce partition.
7. When all map tasks and reduce tasks have been completed, the master wakes up the user program. At this point, the MapReduce call in the user program returns back to the user code.
After successful completion, the output of the mapreduce execution is available in the R output files (one per reduce task, with file names as specified by the user). Typically, users do not need to combine these R output files into one file; they often pass these files as input to another MapReduce call, or use them from another distributed application that is able to deal with input that is partitioned into multiple files.
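The single-machine equivalent of steps 1-7 is short enough to write out. The sketch below is a toy, in-process driver in Python, not the distributed implementation: the "workers" are ordinary loop iterations, the R "output files" are in-memory lists, and the map_fn/reduce_fn/partition names carry over from the earlier sketches as assumptions.

from collections import defaultdict

def run_mapreduce(inputs, map_fn, reduce_fn, partition, R):
    # Steps 3-4: run Map on every input pair and partition the intermediate
    # pairs into R regions (here: R in-memory buckets instead of disk files).
    regions = [defaultdict(list) for _ in range(R)]
    for key, value in inputs:
        for k2, v2 in map_fn(key, value):
            regions[partition(k2, R)][k2].append(v2)

    # Steps 5-6: for each reduce partition, process keys in sorted order
    # and append each reduce output to that partition's "output file".
    outputs = [[] for _ in range(R)]
    for r, region in enumerate(regions):
        for k2 in sorted(region):
            outputs[r].append((k2, reduce_fn(k2, iter(region[k2]))))
    return outputs  # step 7: one output list per reduce task

# Word count over two input "splits", with R = 2 reduce partitions.
def map_fn(name, contents):
    for w in contents.split():
        yield (w, 1)

def reduce_fn(word, counts):
    return sum(counts)

def partition(key, R):
    return sum(key.encode()) % R  # toy deterministic hash

print(run_mapreduce([("d1", "a b a"), ("d2", "b c")], map_fn, reduce_fn, partition, R=2))
# [[('b', 2)], [('a', 2), ('c', 1)]]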

3.2 Master Data Structures
The master keeps several data structures. For each map task and reduce task, it stores the state (idle, in-progress, or completed), and the identity of the worker machine (for non-idle tasks).
The master is the conduit through which the location of intermediate file regions is propagated from map tasks to reduce tasks. Therefore, for each completed map task, the master stores the locations and sizes of the R intermediate file regions produced by the map task. Updates to this location and size information are received as map tasks are completed. The information is pushed incrementally to workers that have in-progress reduce tasks.
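This bookkeeping is small enough to express as plain records. A hypothetical Python sketch of the master's per-task state (the field names and layout are assumptions; the paper does not prescribe them):

from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional, Tuple

class State(Enum):
    IDLE = "idle"
    IN_PROGRESS = "in-progress"
    COMPLETED = "completed"

@dataclass
class MapTask:
    split: str                        # which input split this task covers
    state: State = State.IDLE
    worker: Optional[str] = None      # worker machine id, for non-idle tasks
    # For a completed task: location and size of each of the R intermediate
    # file regions it produced, forwarded to in-progress reduce workers.
    regions: List[Tuple[str, int]] = field(default_factory=list)

@dataclass
class ReduceTask:
    partition: int
    state: State = State.IDLE
    worker: Optional[str] = None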

3.3 Fault Tolerance
Since the MapReduce library is designed to help process very large amounts of data using hundreds or thousands of machines, the library must tolerate machine failures gracefully.

Worker Failure
The master pings every worker periodically. If no response is received from a worker in a certain amount of time, the master marks the worker as failed. Any map tasks completed by the worker are reset back to their initial idle state, and therefore become eligible for scheduling on other workers. Similarly, any map task or reduce task in progress on a failed worker is also reset to idle and becomes eligible for rescheduling.
Completed map tasks are re-executed on a failure because their output is stored on the local disk(s) of the failed machine and is therefore inaccessible. Completed reduce tasks do not need to be re-executed since their output is stored in a global file system.
When a map task is executed first by worker A and then later executed by worker B (because A failed), all workers executing reduce tasks are notified of the re-execution.
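The state transitions described here can be sketched directly in terms of the task records above. The timeout value and helper name below are illustrative assumptions, not taken from the paper.

import time

PING_TIMEOUT_S = 60.0  # assumed; the paper says only "a certain amount of time"

def handle_worker_timeouts(workers, map_tasks, reduce_tasks, now=None):
    # workers: mapping of worker id -> time of last successful ping.
    # Mark workers that have not answered recently as failed, then reset
    # their tasks so the master can reschedule them on other workers.
    if now is None:
        now = time.time()
    for worker_id, last_ping in workers.items():
        if now - last_ping <= PING_TIMEOUT_S:
            continue
        for task in map_tasks:
            # Completed map output lives on the failed worker's local disk,
            # so both in-progress and completed map tasks go back to idle.
            if task.worker == worker_id:
                task.state, task.worker, task.regions = State.IDLE, None, []
        for task in reduce_tasks:
            # Completed reduce output is in the global file system; only
            # in-progress reduce tasks need to be reset.
            if task.worker == worker_id and task.state == State.IN_PROGRESS:
                task.state, task.worker = State.IDLE, None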
