FAQ
Hi,

I guess I am not the first one to see the following exception when
trying to initialize a LineRecordReader. However, so far I couldn't
figure out a workaround for this problem.

I saw that this problem was fixed in SVN, but when I checked out one
of the 0.23.0 versions (I can't remember which) I couldn't get the
servers started. (Not sure if that was due to the revision, or to me
messing up the setup.)

So maybe someone can point me to a known workaround for this problem,
or at least hint at a revision that other people got to work?

Kind regards,
Claus


ps: Essentially I am just doing this:

public static class MyInputFormat extends FileInputFormat<Text, ByteWritable> {
    @Override
    public RecordReader<Text, ByteWritable> createRecordReader(
            InputSplit inputSplit, TaskAttemptContext taskAttemptContext)
            throws IOException, InterruptedException {
        MyRecordReader result = new MyRecordReader();
        result.initialize(inputSplit, taskAttemptContext);
        return result;
    }
}

public static class MyRecordReader extends RecordReader<Text, ByteWritable> {
    LineRecordReader myReader = new LineRecordReader();
    ...
    @Override
    public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptContext)
            throws IOException, InterruptedException {
        myReader.initialize(inputSplit, taskAttemptContext); // EXCEPTION THROWN HERE
    }
    ...
}

job.setInputFormatClass(MyInputFormat.class);


The exception is:

java.lang.Exception: java.lang.ClassCastException: org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl cannot be cast to org.apache.hadoop.mapreduce.MapContext
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:371) ~[hadoop-mapred-0.21.0.jar:na]
java.lang.ClassCastException: org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl cannot be cast to org.apache.hadoop.mapreduce.MapContext
    at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:75) ~[hadoop-mapred-0.21.0.jar:na]
(rest of the output is within my code)
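Editor's note: the mechanics of this ClassCastException can be reproduced without Hadoop. The stack trace shows that LineRecordReader.initialize() (LineRecordReader.java:75 in 0.21.0) downcasts its TaskAttemptContext argument to MapContext; calling initialize() yourself with a plain TaskAttemptContextImpl fails because that runtime type does not implement MapContext. A minimal sketch with hypothetical stand-in types (NOT the real Hadoop classes):

```java
// Hypothetical stand-ins mirroring the Hadoop context type hierarchy.
interface TaskAttemptContext {}
interface MapContext extends TaskAttemptContext {}

class TaskAttemptContextImpl implements TaskAttemptContext {}
class MapContextImpl implements MapContext {}

public class CastDemo {
    // Mirrors the downcast inside LineRecordReader.initialize()
    // (LineRecordReader.java:75 in the stack trace above).
    static void initialize(TaskAttemptContext context) {
        // Fails unless the runtime type of `context` implements MapContext.
        MapContext mapContext = (MapContext) context;
    }

    public static void main(String[] args) {
        initialize(new MapContextImpl()); // OK: runtime type implements MapContext
        try {
            initialize(new TaskAttemptContextImpl()); // same mismatch as in the report
        } catch (ClassCastException e) {
            System.out.println("ClassCastException");
        }
    }
}
```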


  • Amareshwari Sri Ramadasu at Apr 21, 2011 at 3:35 am
    Hmm.. This has been fixed in MAPREDUCE-1905, in 0.21.1

    Thanks
    Amareshwari
    On 4/21/11 7:27 AM, "Claus Stadler" wrote:

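Editor's note: a workaround commonly suggested for this kind of error (an assumption here; the thread itself does not spell it out) is to stop calling result.initialize(...) inside createRecordReader() and let the framework drive initialization: the framework itself invokes initialize() on the reader, with a context whose runtime type satisfies the problematic downcast. A self-contained sketch with hypothetical stand-in types (NOT the real Hadoop classes):

```java
// Hypothetical stand-ins for the Hadoop types (NOT the real classes).
interface TaskAttemptContext {}
interface MapContext extends TaskAttemptContext {}
class TaskAttemptContextImpl implements TaskAttemptContext {}
class MapContextImpl implements MapContext {}

// Stand-in for LineRecordReader, with the problematic downcast.
class LineReaderStub {
    boolean initialized = false;
    void initialize(TaskAttemptContext context) {
        MapContext mc = (MapContext) context; // throws for TaskAttemptContextImpl
        initialized = true;
    }
}

public class WorkaroundDemo {
    static class MyRecordReader {
        final LineReaderStub myReader = new LineReaderStub();
        void initialize(TaskAttemptContext context) {
            myReader.initialize(context);
        }
    }

    // Workaround: createRecordReader only constructs the reader; no early
    // initialize() call with a context that is not a MapContext.
    static MyRecordReader createRecordReader() {
        return new MyRecordReader();
    }

    public static void main(String[] args) {
        MyRecordReader reader = createRecordReader();
        // The framework later calls initialize() with a MapContext-typed
        // context, so the downcast succeeds.
        reader.initialize(new MapContextImpl());
        System.out.println(reader.myReader.initialized); // prints true
    }
}
```

In terms of the code posted in the question, the change amounts to deleting the result.initialize(inputSplit, taskAttemptContext); line from MyInputFormat.createRecordReader().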
  • Claus Stadler at Apr 25, 2011 at 10:52 pm
    Hi,


Thank you for the reply. However, now my question is: what is the
recommended way to get Hadoop working with this fix?
Is there documentation for this?

So some of my questions right now are:
. Does the SVN head revision usually work?
. If not, is there a specific revision that is known to work (at least
with regard to this bug)?
. Or do I need to apply the patch myself?
. Do I need to do some special post-configuration after checking the
three sub-projects (common, mapred, hdfs) out from SVN?
. Is there an estimate for the next release?

    Kind regards,
    Claus



    On 04/21/2011 05:17 AM, Amareshwari Sri Ramadasu wrote:

  • Claus Stadler at Jun 2, 2011 at 11:21 pm
    Hi,

Sorry to bother you again, but could someone please give me advice on
how to set up Hadoop with a fixed LineRecordReader?
(If I need to check out from SVN, how do I get those three subprojects
combined and the bash scripts working? I couldn't find a doc for this yet.)

    Kind regards,
    Claus
    On 04/26/2011 12:52 AM, Claus Stadler wrote:

Discussion Overview
group: common-user @
categories: hadoop
posted: Apr 21, '11 at 1:58a
active: Jun 2, '11 at 11:21p
posts: 4
users: 2
website: hadoop.apache.org...
irc: #hadoop
