FAQ
We tried to add new nodes to a CDH4 cluster using Cloudera Manager, but since
the new nodes had a different version of the packages, we ended up upgrading
all nodes in the cluster.

After we were done with that, we were able to start the HDFS service properly
except for the NameNode. Checking its log, we saw this error:

5:04:30.160 PM FATAL org.apache.hadoop.hdfs.server.namenode.NameNode

Exception in namenode join
java.lang.NoSuchMethodError: org.apache.hadoop.util.DataChecksum.getTypeFromName(Ljava/lang/String;)I
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:497)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:427)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:397)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:399)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:433)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:609)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:590)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1141)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1205)


Any idea where this error is from?

  • Vinithra Varadharajan at Mar 14, 2013 at 12:29 am
    Joey,

    I suspect that your hadoop-common jar has not been updated. How did you go
    about upgrading CDH?

    If you upgraded to CDH4.2, you should
    have /usr/lib/hadoop/hadoop-common-2.0.0-cdh4.2.0.jar.

    -Vinithra
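    For example, a quick check along these lines (just a sketch; the paths are
    the package-install defaults mentioned above, and rpm assumes RHEL/CentOS-style
    nodes) shows which hadoop-common jar is installed and where the unversioned
    symlink points:

    # List the hadoop-common jars installed by the packages on this node
    ls -l /usr/lib/hadoop/hadoop-common*.jar

    # Show where the unversioned symlink actually resolves
    readlink -f /usr/lib/hadoop/hadoop-common.jar

    # Confirm the installed Hadoop package versions
    rpm -qa | grep -i hadoop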
  • Joey Pan at Mar 14, 2013 at 2:04 am
    It was updated by running sudo yum update on all nodes.

    Yes, the 4.2.0 jar is there in that folder, and
    /usr/lib/hadoop/hadoop-common.jar points to it, so I guess everything
    should be good?

    Thanks,
    Joey
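    If the jar and the symlink look right, one more sanity check (a rough
    sketch, using the default jar path) is to confirm that the method from the
    stack trace actually exists in the jar the symlink resolves to; if it does,
    the running NameNode is most likely picking up a stale hadoop-common copy
    from somewhere else on its classpath:

    # Disassemble DataChecksum from the jar the symlink points to and look for
    # the method the NameNode failed to find at runtime
    javap -classpath "$(readlink -f /usr/lib/hadoop/hadoop-common.jar)" \
        org.apache.hadoop.util.DataChecksum | grep getTypeFromName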
  • Joey Pan at Mar 14, 2013 at 2:45 am
    Another thing I see, not sure if it's relevant: during the update process
    the CDH4 repo was updated to cdh/4/, while our Cloudera Manager repo points
    to 4.1.1. We are currently running Cloudera Manager 4.1.1, but it seems
    CDH4 is now running the latest version. Could the two running different
    versions cause a mismatch issue?
  • Vinithra Varadharajan at Mar 14, 2013 at 7:29 am
    Joey,

    IIUC, you are using CM4.1.1 to manage CDH4.2? That combination should work
    fine.

    At this point I'd take a look at the classpath used by your NameNode. This
    should be printed at the start of your NameNode log file located in
    /var/log/hadoop-hdfs. Look for conflicting hadoop-common jars.

    -Vinithra
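    A quick way to do that (a sketch; <namenode-log> stands for whatever the
    current NameNode log file under /var/log/hadoop-hdfs is called on your
    host) is to pull the classpath line from the top of the log and list every
    hadoop-common entry it contains:

    # Print each hadoop-common entry from the classpath line on its own line;
    # two different versions showing up means the NameNode has a conflicting jar
    grep -m 1 -i 'classpath' <namenode-log> | tr ':' '\n' | grep 'hadoop-common'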
  • Joey Pan at Mar 14, 2013 at 7:33 am
    Hi Vinithra,
    Thanks so much for the help. I've resolved the issue: some packages got
    corrupted during the update, so I removed and reinstalled them, and that
    fixed the problem.

    I'm having a new one now. Everything is working and all services start
    fine, but Hue is not working properly and shows a timed-out error when I
    connect to it. Under the misconfiguration tab I see these errors:

    Potential misconfiguration detected. Fix and restart Hue.
    liboozie.remote_deployement_dir - Current value: /user/hue/oozie/deployments
    The deployment directory of Oozie workflows does not exist. Run "Setup
    Examples" on the Oozie workflow page.
    /user/oozie/share/lib - Oozie Share Lib not installed in default location.

    Is this a known issue? How do I fix it?
  • Vinithra Varadharajan at Mar 14, 2013 at 7:41 am
    Glad you got that sorted!
    Hue not starting up isn't related to these misconfigurations. Look at the
    Hue server log for errors.
  • Romain Rigaux at Mar 14, 2013 at 7:52 am
    Hi,

    Feel free to do a quick search on
    https://groups.google.com/a/cloudera.org/forum/?fromgroups#!forum/hue-user

    Which app is timing out in Hue? (Indeed, these warnings just mean that the
    Oozie app is not completely set up, but they won't prevent the use of the
    other apps.)
    You can find the logs in /var/log/hue or http://hue_host:hue_port/logs

    From past experience I would bet that Beeswax did not start properly or
    crashed (you can quickly check the Beeswax logs in the stdout of the
    process in CM).

    Romain
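    If nothing obvious shows up in CM, the Hue server logs on disk are worth a
    look as well; a minimal sketch (file names can vary a little between
    releases):

    # Recent errors from the Hue server processes
    tail -n 100 /var/log/hue/error.log /var/log/hue/runcpserver.log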
  • Vinithra Varadharajan at Mar 14, 2013 at 8:45 am
    Re-adding scm-users so that the solution is shared with others who run into
    the same problem.

    Good to know you're up and running now!
    On Thu, Mar 14, 2013 at 1:28 AM, Joey Pan wrote:

    Thanks Vinithra! Really appreciate your help, turns out it's due to hive
    update from 0.9 to 0.10, need run a separate update for metadata schema
    since I use mysql for metastore, the time out error is due to hive failure
    i thk, once update the schema for hive, it's fixed.
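    For anyone hitting the same thing: the Hive packages ship per-version
    metastore upgrade scripts. A rough sketch of applying the MySQL one (the
    database name "metastore" and user "hive" below are just examples, the
    script path may differ by release, and the metastore database should be
    backed up first):

    # Back up the metastore database before touching the schema (names are examples)
    mysqldump -u hive -p metastore > metastore-backup.sql

    # Apply the 0.9 -> 0.10 upgrade script shipped with the Hive packages
    mysql -u hive -p metastore \
        < /usr/lib/hive/scripts/metastore/upgrade/mysql/upgrade-0.9.0-to-0.10.0.mysql.sql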

