Hello,

My map tasks are freezing after reaching 100%. I suspect my mapper's close() method, which does some sorting. Does anyone have a better suggestion for where to put the sorting step? I chose close() so that each map task sorts its own output (which is local) and is therefore faster.

The output is the following:
....
11/03/30 08:13:54 INFO mapred.JobClient: map 95% reduce 0%
11/03/30 08:14:09 INFO mapred.JobClient: map 96% reduce 0%
11/03/30 08:14:27 INFO mapred.JobClient: map 97% reduce 0%
11/03/30 08:14:42 INFO mapred.JobClient: map 98% reduce 0%
11/03/30 08:15:06 INFO mapred.JobClient: map 99% reduce 0%
11/03/30 08:15:45 INFO mapred.JobClient: map 100% reduce 0%
11/03/30 08:25:41 INFO mapred.JobClient: map 50% reduce 0%
11/03/30 08:25:49 INFO mapred.JobClient: Task Id : attempt_201103291035_0016_m_000001_0, Status : FAILED
Task attempt_201103291035_0016_m_000001_0 failed to report status for 600 seconds. Killing!
11/03/30 08:25:50 INFO mapred.JobClient: map 0% reduce 0%
11/03/30 08:25:52 INFO mapred.JobClient: Task Id : attempt_201103291035_0016_m_000000_0, Status : FAILED
Task attempt_201103291035_0016_m_000000_0 failed to report status for 600 seconds. Killing!
11/03/30 08:26:29 INFO mapred.JobClient: map 1% reduce 0%
11/03/30 08:26:53 INFO mapred.JobClient: map 2% reduce 0%
11/03/30 08:27:05 INFO mapred.JobClient: map 3% reduce 0%
11/03/30 08:27:29 INFO mapred.JobClient: map 4% reduce 0%
11/03/30 08:27:41 INFO mapred.JobClient: map 5% reduce 0%
...

Thank you for any thoughts,

Maha
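
The "failed to report status for 600 seconds" lines in the log correspond to Hadoop's mapred.task.timeout, which defaults to 600 000 ms in the 0.20/1.x releases. Below is a minimal driver sketch of one stopgap, assuming the old org.apache.hadoop.mapred API; the class and path names are illustrative, not from the thread. It simply raises the timeout so a long-running close() is not killed while a better fix is found.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class SortJobDriver {                       // hypothetical driver class
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(SortJobDriver.class);
            conf.setJobName("per-map-sort");
            // Default is 600 000 ms (the "600 seconds" in the log);
            // give long-running tasks 30 minutes before they are killed.
            conf.setLong("mapred.task.timeout", 30 * 60 * 1000L);
            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));
            JobClient.runJob(conf);
        }
    }

This only buys time; the cleaner fix is to report progress during the sort, as sketched after the close() code below.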


  • Maha at Mar 30, 2011 at 7:26 pm
    It's not the sorting, since the sorted files do appear in the output; it's the mapper not exiting cleanly. So, can anyone tell me if it's wrong to write the mapper's close() method like this?

    @Override
    public void close() throws IOException {
        helper.CleanUp();
        writer.close();

        // SORT PRODUCED OUTPUT
        try {
            Sorter SeqSort = new Sorter(hdfs,
                                        DocDocWritable.class,
                                        IntWritable.class,
                                        new Configuration());

            SeqSort.sort(tempSeq, new Path("sorted/S" + TaskID.getName()));
        } catch (Exception e) {
            e.printStackTrace();
        }
        return;
    }
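
A likely reason for the timeout: in the old API, close() still runs inside the map task, and a long sort there makes no progress reports, so the TaskTracker kills the attempt once mapred.task.timeout elapses. Here is a minimal sketch of one way to keep the task alive, assuming the old org.apache.hadoop.mapred API; the class name, key/value types, and map logic are illustrative placeholders, not the code from the thread.

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class ProgressAwareMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {

        private Reporter reporter;                     // saved for use in close()

        @Override
        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> out, Reporter reporter)
                throws IOException {
            this.reporter = reporter;                  // remember the Reporter
            out.collect(value, new IntWritable(1));    // placeholder map logic
        }

        @Override
        public void close() throws IOException {
            if (reporter != null) {
                reporter.progress();                   // tell the TaskTracker we are alive
            }
            // Run the long per-task sort here, calling reporter.progress()
            // between steps if the work can be broken up.
        }
    }

If the sort cannot be broken into steps, a background thread that calls reporter.progress() periodically is another common workaround.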

    And by the way, when close() is executed, it doesn't show up as part of the "cleanup" phase in the UI, which I thought it would, right?

    Thank you,
    Maha
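
On the "cleanup" question above: in the newer org.apache.hadoop.mapreduce API, per-task teardown has an explicit hook, cleanup(Context), and the Context can report progress directly. A minimal sketch under that assumption (class and type names are illustrative):

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class NewApiSortingMapper extends Mapper<Object, Text, Text, IntWritable> {

        @Override
        protected void cleanup(Context context)
                throws IOException, InterruptedException {
            context.progress();   // keep the task alive during long teardown work
            // Sort this task's local output here (details omitted).
        }
    }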


