Grokbase Groups: Pig user, March 2013
FAQ
Hello all,

When I first saw Pig, I was under the impression that it generated Java
code for a series of map/reduce jobs and then submitted that to Hadoop. I
have since seen messages that indicate this is not the way it works.

I have been trying to find a document (preferably with diagrams) that shows
what the pig architecture is and how the various mappers/reducers are
defined and spawned.

I would appreciate it if someone could point me to that documentation.

Sincerely,

- Gardner


  • Prashant Kommireddi at Mar 17, 2013 at 11:38 pm
    Hi Gardner,

    This paper would be a good starting point
    http://infolab.stanford.edu/~olston/publications/vldb09.pdf

    Additionally, you could check out some other material here
    https://cwiki.apache.org/confluence/display/PIG/PigTalksPapers
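
    As the vldb09 paper describes, Pig compiles a Pig Latin script into a
    logical plan, then a physical plan, and then one or more map/reduce
    jobs; it does not emit Java source. You can see this for yourself with
    EXPLAIN. A minimal Pig Latin sketch (the input path and field names
    below are made up for illustration):

        -- Hypothetical tab-delimited input: one (user, bytes) record per line.
        raw    = LOAD '/data/access_log.txt' USING PigStorage('\t')
                      AS (user:chararray, bytes:long);
        grpd   = GROUP raw BY user;
        totals = FOREACH grpd GENERATE group AS user, SUM(raw.bytes) AS total;

        -- EXPLAIN prints the logical, physical, and map/reduce plans that Pig
        -- compiles this script into; no Java source is generated.
        EXPLAIN totals;

        -- STORE (or DUMP) is what actually triggers submission of the
        -- compiled jobs to the Hadoop cluster.
        STORE totals INTO '/data/bytes_per_user';

    Running the script with "pig -x local script.pig" shows the same plans
    locally; the paper above walks through how the plan stages map onto
    mappers, combiners, and reducers.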

  • Aniket Mokashi at Mar 21, 2013 at 6:55 pm
    Also-
    https://cwiki.apache.org/confluence/display/PIG/Guide+for+new+contributors

    ~Aniket



    --
    "...:::Aniket:::... Quetzalco@tl"
