
MapReduce: "Map-Style Distributed Search with Recursive Simplification" Software for Fast Searching and Timely Processing of Massive Data

 

MapReduce

The name is formed by combining the two English words map and reduce.

Loosely, it corresponds to "map + recursion," "map + simplification," or "map + restoration."

This coined compound term refers to the programming model and its software implementation (see Appendices 1 and 2).

Appendix 1

Programming model

Input & Output: each a set of key/value pairs

Programmer specifies two functions:

map (in_key, in_value) -> list(out_key, intermediate_value)

  • Processes input key/value pair
  • Produces set of intermediate pairs

reduce (out_key, list(intermediate_value)) -> list(out_value)

  • Combines all intermediate values for a particular key
  • Produces a set of merged output values (usually just one)

Inspired by similar primitives in LISP and other languages

http://labs.google.com/papers/mapreduce-osdi04-slides/index-auto-0003.html

Example: Count word occurrences

  map(String input_key, String input_value):
    // input_key: document name
    // input_value: document contents
    for each word w in input_value:
      EmitIntermediate(w, "1");


  reduce(String output_key, Iterator intermediate_values):
    // output_key: a word
    // intermediate_values: a list of counts
    int result = 0;
    for each v in intermediate_values:
      result += ParseInt(v);
    Emit(AsString(result));

Pseudocode: See appendix in paper for real code
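A runnable single-machine version of the pseudocode above can be sketched in Python (illustrative only; the real implementation distributes these phases across a cluster):

```python
from collections import defaultdict

def map_fn(input_key, input_value):
    # input_key: document name; input_value: document contents
    for word in input_value.split():
        yield (word, 1)

def reduce_fn(output_key, intermediate_values):
    # output_key: a word; intermediate_values: a list of counts
    yield (output_key, sum(intermediate_values))

def mapreduce(documents, map_fn, reduce_fn):
    # Shuffle phase: group intermediate values by key.
    groups = defaultdict(list)
    for key, value in documents.items():
        for out_key, inter_value in map_fn(key, value):
            groups[out_key].append(inter_value)
    # Reduce phase: merge all intermediate values for each key.
    result = {}
    for out_key, inter_values in groups.items():
        for _, out_value in reduce_fn(out_key, inter_values):
            result[out_key] = out_value
    return result

docs = {"doc1": "the quick brown fox", "doc2": "the lazy dog"}
print(mapreduce(docs, map_fn, reduce_fn))
# {'the': 2, 'quick': 1, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 1}
```

The shuffle step here is an in-memory dictionary; in the distributed system it is a partitioned, networked grouping of intermediate pairs.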

http://labs.google.com/papers/mapreduce-osdi04-slides/index-auto-0004.html

Appendix 2:

http://code.google.com/intl/zh-CN/edu/submissions/mapreduce/listing.html

MapReduce in a Week

This page contains a comprehensive introduction to MapReduce, including lectures, reading material, and programming assignments. The goal is to provide a set of lectures that can be integrated into existing systems courses, such as Operating Systems or Networking, which already take an "under the hood" approach to computer science. Prerequisite knowledge includes multithreading, synchronization primitives (locks, semaphores, barriers, etc.), and sockets.

http://en.wikipedia.org/wiki/MapReduce

MapReduce is a patented[1] software framework introduced by Google in 2004 to support distributed computing on large data sets on clusters of computers.[2]

The framework is inspired by the map and reduce functions commonly used in functional programming,[3] although their purpose in the MapReduce framework is not the same as their original forms.[4]

MapReduce libraries have been written in C++, C#, Erlang, Java, OCaml, Perl, Python, Ruby, F#, R and other programming languages.
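For comparison, the original functional-programming forms mentioned above can be shown with Python's built-in map and functools.reduce (a small illustration; as noted, the framework's versions serve a different purpose):

```python
from functools import reduce

words = "the quick brown fox the dog".split()

# map: apply a function to every element, producing (word, 1) pairs
pairs = map(lambda w: (w, 1), words)

# reduce: fold the pairs into a single dictionary of word counts
def merge(acc, pair):
    word, n = pair
    acc[word] = acc.get(word, 0) + n
    return acc

counts = reduce(merge, pairs, {})
print(counts)  # {'the': 2, 'quick': 1, 'brown': 1, 'fox': 1, 'dog': 1}
```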

http://labs.google.com/papers/mapreduce.html

MapReduce: Simplified Data Processing on Large Clusters
Jeffrey Dean and Sanjay Ghemawat

Abstract

MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real world tasks are expressible in this model, as shown in the paper.

Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system.

Our implementation of MapReduce runs on a large cluster of commodity machines and is highly scalable: a typical MapReduce computation processes many terabytes of data on thousands of machines. Programmers find the system easy to use: hundreds of MapReduce programs have been implemented and upwards of one thousand MapReduce jobs are executed on Google's clusters every day.
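A toy sketch of the parallel execution the abstract describes, using a worker pool from Python's standard library (the pool stands in for a cluster of machines; everything here is illustrative, not Google's implementation):

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def map_words(document):
    # One map task: emit (word, 1) for every word in one input split.
    return [(word, 1) for word in document.split()]

def count_words(documents):
    # The pool schedules map tasks across workers, standing in for the
    # run-time system that schedules tasks across cluster machines.
    with ThreadPoolExecutor(max_workers=4) as pool:
        mapped = pool.map(map_words, documents)
        # Shuffle: group intermediate values by key.
        groups = defaultdict(list)
        for pairs in mapped:
            for word, count in pairs:
                groups[word].append(count)
    # Reduce: merge all intermediate values for each key.
    return {word: sum(counts) for word, counts in groups.items()}

print(count_words(["the quick fox", "the lazy dog"]))
# {'the': 2, 'quick': 1, 'fox': 1, 'lazy': 1, 'dog': 1}
```

The real system also partitions the reduce phase across machines and handles machine failures and re-execution; none of that is modeled here.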

Appeared in:
OSDI'04: Sixth Symposium on Operating System Design and Implementation,
San Francisco, CA, December, 2004.

Download: PDF Version

Slides: HTML Slides

http://research.google.com/people/jeff/index.html


Jeffrey Dean
Google Fellow

I joined Google in mid-1999, and I'm currently a Google Fellow in the Systems Infrastructure Group. My areas of interest include large-scale distributed systems, performance monitoring, compression techniques, information retrieval, application of machine learning to search and other related problems, microprocessor architecture, compiler optimizations, and development of new products that organize existing information in new and interesting ways. While at Google, I've worked on the following projects:

  • The design and implementation of the initial version of Google's advertising serving system.

  • The design and implementation of five generations of our crawling, indexing, and query serving systems, covering two and three orders of magnitude growth in number of documents searched, number of queries handled per second, and frequency of updates to the system. I recently gave a talk at WSDM'09 about some of the issues involved in building large-scale retrieval systems (slides).

  • The initial development of Google's AdSense for Content product (involving both the production serving system design and implementation as well as work on developing and improving the quality of ad selection based on the contents of pages).

  • The development of Protocol Buffers, a way of encoding structured data in an efficient yet extensible format, and a compiler that generates convenient wrappers for manipulating the objects in a variety of languages. Protocol Buffers are used extensively at Google for almost all RPC protocols, and for storing structured information in a variety of persistent storage systems. A version of the protocol buffer implementation has been open-sourced and is available at http://code.google.com/p/protobuf/.

  • Some of the initial production serving system work for the Google News product, working with Krishna Bharat to move the prototype system he put together into a deployed system.

  • Some aspects of our search ranking algorithms, notably improved handling for dealing with off-page signals such as anchortext.

  • The design and implementation of the first generation of our automated job scheduling system for managing a cluster of machines.

  • The design and implementation of prototyping infrastructure for rapid development and experimentation with new ranking algorithms.

  • The design and implementation of MapReduce, a system for simplifying the development of large-scale data processing applications. A paper about MapReduce appeared in OSDI'04.

  • The design and implementation of BigTable, a large-scale semi-structured storage system used underneath a number of Google products. A paper about BigTable appeared in OSDI'06.

  • Some of the production system design for Google Translate, our statistical machine translation system. In particular, I designed and implemented a system for distributed high-speed access to very large language models (too large to fit in memory on a single machine).

  • Some internal tools to make it easy to rapidly search our internal source code repository. Many of the ideas from this internal tool were incorporated into our Google Code Search product, including the ability to use regular expressions for searching large corpora of source code.

I enjoy developing software with great colleagues, and I've been fortunate to have worked with many wonderful and talented people on all of my work here at Google. To help ensure that Google continues to hire people with excellent technical skills, I've also been fairly involved in our engineering hiring process.

I received a Ph.D. in Computer Science from the University of Washington, working with Craig Chambers on whole-program optimization techniques for object-oriented languages in 1996. I received a B.S., summa cum laude from the University of Minnesota in Computer Science & Economics in 1990. From 1996 to 1999, I worked for Digital Equipment Corporation's Western Research Lab in Palo Alto, where I worked on low-overhead profiling tools, design of profiling hardware for out-of-order microprocessors, and web-based information retrieval. From 1990 to 1991, I worked for the World Health Organization's Global Programme on AIDS, developing software to do statistical modelling, forecasting, and analysis of the HIV pandemic.

In 2009, I was elected to the National Academy of Engineering.


Personal:

I've lived in lots of places in my life: Honolulu, HI; Manila, the Philippines; Boston, MA; West Nile District, Uganda; Boston (again); Little Rock, AR; Hawaii (again); Minneapolis, MN; Mogadishu, Somalia; Atlanta, GA; Minneapolis (again); Geneva, Switzerland; Seattle, WA; and (currently) Palo Alto, CA. I'm hard-pressed to pick a favorite, though: each place has its pluses and minuses.

One of my life goals is to play soccer and basketball on every continent. So far, I've done so in North America, South America, Europe, Asia, and Africa. I'm worried that Antarctica might be tough, though.
