Grokbase Groups Pig user April 2011
So I'm running into something strange. Consider the following code:

tfidf_all = LOAD '$TFIDF' AS (doc_id:chararray, token:chararray, weight:double);
grouped = GROUP tfidf_all BY doc_id;
vectors = FOREACH grouped GENERATE group AS doc_id, tfidf_all.(token, weight) AS vector;
DUMP vectors;

This, of course, runs just fine. tfidf_all contains 1,428,280 records.
The reduce output record count should be exactly the number of documents,
which turns out to be 18,863 in this case. All well and good.
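(That is, vectors has one record per distinct doc_id, along the lines of
(doc_7, {(token_a, 0.12), (token_b, 0.03)}); the doc and token names here
are made up.)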

The strangeness comes when I add a SAMPLE command:

sampled = SAMPLE vectors 0.0012;
DUMP sampled;

Running this results in 1,513 reduce output records. So, am I insane, or
shouldn't the reduce output record count be much closer to 22 or 23
records (e.g. 0.0012 × 18,863 ≈ 22.6)?
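
For reference, a quick way to double-check those counts in the script
itself (the alias names below are mine, just for illustration):

all_grp = GROUP vectors ALL;
n_vectors = FOREACH all_grp GENERATE COUNT(vectors); -- 18,863 here
smp_grp = GROUP sampled ALL;
n_sampled = FOREACH smp_grp GENERATE COUNT(sampled); -- 1,513, not ~23
DUMP n_vectors;
DUMP n_sampled;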

--jacob
@thedatachef


  • Alan Gates at Apr 25, 2011 at 4:03 pm
    You are not insane. Pig rewrites SAMPLE into a FILTER and then pushes
    that filter in front of the GROUP, so the sample is taken over the
    ungrouped input records. It shouldn't push that filter, since the
    underlying UDF is non-deterministic. If you add "-t PushUpFilter" to
    your command line when invoking Pig, this won't happen. Could you
    file a JIRA for this so we can keep track of it?

    Alan.
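
    In other words, SAMPLE behaves as if it were a FILTER on a random
    value. A rough sketch of the rewrite (the exact expression Pig
    generates is internal, so treat this as illustrative):

    -- roughly what SAMPLE vectors 0.0012 becomes:
    sampled = FILTER vectors BY RANDOM() <= 0.0012;

    -- roughly what it looks like after PushUpFilter moves it:
    filtered = FILTER tfidf_all BY RANDOM() <= 0.0012;
    grouped = GROUP filtered BY doc_id;
    vectors = FOREACH grouped GENERATE group AS doc_id,
        filtered.(token, weight) AS vector;

    The pushed version applies the rate to the 1,428,280 input records
    (0.0012 × 1,428,280 ≈ 1,714, the same order as the observed 1,513,
    and slightly fewer because sampled rows can share a doc_id) rather
    than to the 18,863 groups.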
  • Jacob Perkins at Apr 26, 2011 at 2:44 pm
    JIRA filed, see:

    https://issues.apache.org/jira/browse/PIG-2014

    --jacob
    @thedatachef
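
    Until that issue is fixed, the workaround from Alan's reply is to
    disable the PushUpFilter optimizer rule when invoking Pig (the script
    name here is illustrative):

    pig -t PushUpFilter tfidf_vectors.pig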
