FAQ
Hello,

I am testing the CDH4 upgrade on a working CDH3 Update 1 cluster
(hadoop-0.20-namenode-0.20.2+923.97-1.noarch.rpm). This is a major version
upgrade, from CDH3 Update 1 to CDH4.

I am following the manual upgrade instructions and hitting an issue at the
metadata upgrade step -
https://ccp.cloudera.com/display/CDH4DOC/Upgrading+from+CDH3+to+CDH4#UpgradingfromCDH3toCDH4-Step3%3AUninstallCDH3Hadoop


I am at Step 6 #2 - "sudo service hadoop-hdfs-namenode upgrade" - and am
getting the following error. I have already verified the core-site.xml and
hdfs-site.xml files. Any pointers?

Error:
===

[root@pdevpdbos10p hadoop-hdfs]# cat
hadoop-hdfs-namenode-pdevpdbos10p.xxx.xxx.net.log

2012-07-23 17:16:58,460 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting NameNode

STARTUP_MSG: host = pdevpdbos10p.xxx.xxxx.net/10.136.240.199

STARTUP_MSG: args = [-upgrade]

STARTUP_MSG: version = 2.0.0-cdh4.0.1

STARTUP_MSG: classpath =
/etc/hadoop/conf:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/commons-el-1.0.jar:/usr/lib/hadoop/lib/snappy-java-1.0.3.2.jar:/usr/lib/hadoop/lib/stax-api-1.0.1.jar:/usr/lib/hadoop/lib/jetty-6.1.26.cloudera.1.jar:/usr/lib/hadoop/lib/oro-2.0.8.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/zookeeper-3.4.3-cdh4.0.1.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26.cloudera.1.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/jets3t-0.6.1.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/lib/jersey-server-1.8.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/log4j-1.2.15.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop/lib/jersey-json-1.8.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/lib/slf4j-api-1.6.1.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/commons-logging-api-1.1.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/json-simple-1.1.jar:/usr/lib/hadoop/lib/avro-1.5.4.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/jline-0.9.94.jar:/usr/lib/hadoop/lib/commons-math-2.1.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/commons-lang-2.5.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jersey-core-1.8.jar:/usr/lib/hadoop/lib/kfs-0.3.jar:/usr/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop/lib/commons-io-2.1.jar:/usr/lib/hadoop/lib/slf4j-log4j12-1.6.1.jar:/usr/lib/hadoop/lib/aspect
jrt-1.6.5.jar:/usr/lib/hadoop/lib/jsr305-1.3.9.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/core-3.1.1.jar:/usr/lib/hadoop/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/.//hadoop-common-2.0.0-cdh4.0.1.jar:/usr/lib/hadoop/.//hadoop-annotations.jar:/usr/lib/hadoop/.//hadoop-annotations-2.0.0-cdh4.0.1.jar:/usr/lib/hadoop/.//hadoop-common-2.0.0-cdh4.0.1-tests.jar:/usr/lib/hadoop/.//hadoop-auth-2.0.0-cdh4.0.1.jar:/usr/lib/hadoop/.//hadoop-common.jar:/usr/lib/hadoop/.//hadoop-auth.jar:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/snappy-java-1.0.3.2.jar:/usr/lib/hadoop-hdfs/lib/zookeeper-3.4.3-cdh4.0.1.jar:/usr/lib/hadoop-hdfs/lib/log4j-1.2.15.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop-hdfs/lib/paranamer-2.3.jar:/usr/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/slf4j-api-1.6.1.jar:/usr/lib/hadoop-hdfs/lib/avro-1.5.4.jar:/usr/lib/hadoop-hdfs/lib/jline-0.9.94.jar:/usr/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.3.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.0-cdh4.0.1-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.0-cdh4.0.1.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-mapreduce/.//*

STARTUP_MSG: build =
file:///data/1/jenkins/workspace/generic-package-rhel64-6-0/topdir/BUILD/hadoop-2.0.0-cdh4.0.1/src/hadoop-common-project/hadoop-common -r
4d98eb718ec0cce78a00f292928c5ab6e1b84695; compiled by 'jenkins' on Thu Jun
28 17:39:22 PDT 2012

************************************************************/

2012-07-23 17:16:58,719 INFO org.apache.hadoop.metrics2.impl.MetricsConfig:
loaded properties from hadoop-metrics2.properties

2012-07-23 17:16:58,811 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
period at 10 second(s).

2012-07-23 17:16:58,812 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
started

2012-07-23 17:16:58,867 ERROR
org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join

java.lang.IllegalArgumentException: Invalid URI for NameNode address (check
fs.defaultFS): file:/// has no authority.

at
org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:315)

at
org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:303)

at
org.apache.hadoop.hdfs.server.namenode.NameNode.getRpcServerAddress(NameNode.java:356)

at
org.apache.hadoop.hdfs.server.namenode.NameNode.loginAsNameNodeUser(NameNode.java:408)

at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:420)

at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:590)

at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:571)

at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1134)

at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1193)

2012-07-23 17:16:58,869 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at
pdevpdbos10p.xxx.xxx.net/10.136.240.199

************************************************************/


Thanks,

Randhir
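The exception above says fs.defaultFS resolved to the default file:///, which is what Hadoop falls back to when core-site.xml is missing or empty. For reference, a minimal core-site.xml carrying that property - the hostname and port below are placeholders, not values from this cluster:

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- placeholder authority; use your own NameNode's host:port -->
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>
```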




  • Harsh J at Jul 26, 2012 at 5:34 pm
    Hi Randhir,

    Can you paste the output of the following please?:

    $ cat /etc/hadoop/conf/core-site.xml
    On Thu, Jul 26, 2012 at 5:27 AM, Randhir wrote:
    --
    Harsh J

  • Randhir at Jul 26, 2012 at 7:40 pm
    Hi Harsh,

    I was able to figure out the problem with the help of another team member.
    The issue was that /etc/hadoop/conf was not pointing to the right
    directory.

    The symlink to my configuration files was broken; judging by the date and
    timestamp, it seems the CDH4 packages broke it.

    I created a symlink to my configuration files at
    /etc/hadoop-0.20/conf.my_cluster and it worked. I had to do that on all
    the other nodes in the cluster, i.e. the Secondary NameNode and DataNodes.

    Thanks,
    Randhir
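The repair described above amounts to repointing the /etc/hadoop/conf symlink at the real configuration directory. A minimal sketch of that fix, using scratch-directory stand-ins for the real /etc paths (repointing the actual /etc/hadoop/conf needs root and a real install):

```shell
# Simulate the broken state: a conf symlink whose target no longer exists.
# All paths here are temp-dir stand-ins for /etc/hadoop/conf and
# /etc/hadoop-0.20/conf.my_cluster.
tmp=$(mktemp -d)
mkdir -p "$tmp/hadoop-0.20/conf.my_cluster"
ln -s "$tmp/gone" "$tmp/hadoop-conf"   # dangling link left by the upgrade

# Repoint the link at the real config directory
# (-f: replace existing, -n: treat the existing symlink as a file).
ln -sfn "$tmp/hadoop-0.20/conf.my_cluster" "$tmp/hadoop-conf"

readlink "$tmp/hadoop-conf"
```

On the real cluster the equivalent command would target /etc/hadoop/conf, repeated on the Secondary NameNode and each DataNode as the post describes.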
    On Wednesday, July 25, 2012 4:57:32 PM UTC-7, Randhir wrote:

  • Harsh J at Jul 27, 2012 at 11:00 pm
    Randhir,

    Good to know. Just as an add-on: if you use Cloudera Manager, it
    manages your alternatives-based symlinks for client configs and keeps
    service configs separately managed, so you never run into problems as
    trivial (yet hard to find) as this one :)
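The failure mode here is easy to check for: a symlink can still exist as a link (`-L`) while resolving to nothing (`! -e`). A quick diagnostic sketch, demonstrated on a scratch path rather than the real /etc/hadoop/conf:

```shell
# Create a deliberately dangling link in a temp dir to show the check;
# on a real node you would test /etc/hadoop/conf instead.
d=$(mktemp -d)
ln -s "$d/missing-target" "$d/conf"

if [ -L "$d/conf" ] && [ ! -e "$d/conf" ]; then
  state=dangling   # link exists but its target does not
else
  state=ok
fi
echo "$state"
```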
    On Fri, Jul 27, 2012 at 1:10 AM, Randhir wrote:
    --
    Harsh J

  • Anupam Ranjan at Apr 17, 2013 at 9:30 am
    Hi Harsh,

    I am using Cloudera Manager 4.1 and I am hitting the same issue starting
    the NameNode that Randhir had.

    13/04/17 14:45:00 FATAL namenode.NameNode: Exception in namenode join
    java.lang.IllegalArgumentException: Invalid URI for NameNode address (check
    fs.defaultFS): file:/// has no authority.

    cat /etc/hadoop/conf/core-site.xml shows the file is empty.

    I tried to restart it from the browser but was not able to, so I tried:

    [root@clouderra bin]# hdfs namenode

    Please suggest a solution for this.

    Thanks in advance.


    Thanks & Regards,
    Anupam Ranjan
    Software Engineer
    TCube Solutions Pvt Ltd


    On Saturday, 28 July 2012 04:30:04 UTC+5:30, Harsh J wrote:

    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
    system
    started

    2012-07-23 17:16:58,867 ERROR
    org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode
    join
    java.lang.IllegalArgumentException: Invalid URI for NameNode address
    (check fs.defaultFS): file:/// has no authority.

    at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:315)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:303)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getRpcServerAddress(NameNode.java:356)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loginAsNameNodeUser(NameNode.java:408)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:420)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:590)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:571)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1134)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1193)
    2012-07-23 17:16:58,869 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:

    /************************************************************

    SHUTDOWN_MSG: Shutting down NameNode at
    pdevpdbos10p.xxx.xxx.net/10.136.240.199

    ************************************************************/


    Thanks,

    Randhir

    --



    --
    Harsh J
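    [Editor's note] The IllegalArgumentException in the log above means the
    NameNode resolved fs.defaultFS to the built-in default of file:/// (i.e.
    the property was missing or empty in the core-site.xml it actually read),
    so the URI has no host authority. A minimal core-site.xml sketch that
    avoids this; the hostname and port are placeholders, not values from this
    thread:

```xml
<!-- Minimal sketch: fs.defaultFS must be an hdfs:// URI with a host.
     "namenode.example.com" and port 8020 are placeholders. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>
```

    As the thread goes on to show, the file itself can be correct while
    /etc/hadoop/conf points at the wrong directory, so checking which conf
    directory is actually in effect matters as much as the file contents.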
    --
  • Harsh J at Apr 17, 2013 at 10:31 am
    How exactly are you starting your NameNode if you are using Cloudera
    Manager? You shouldn't be using a local command for this. Randhir was
    not using Cloudera Manager, so your trouble is entirely different. Try
    starting NN from the CM interface at http://cmserver:7180

    On Wed, Apr 17, 2013 at 3:00 PM, Anupam Ranjan
    wrote:
    Hi Harsh,

    I am using Cloudera Manager 4.1 and I am having the same issue Randhir
    had with starting the namenode.

    13/04/17 14:45:00 FATAL namenode.NameNode: Exception in namenode join
    java.lang.IllegalArgumentException: Invalid URI for NameNode address (check
    fs.defaultFS): file:/// has no authority.

    Running cat /etc/hadoop/conf/core-site.xml shows nothing; the file is empty.

    I tried to restart from the browser but was not able to, so I tried:

    [root@clouderra bin]# hdfs namenode

    Please suggest a solution for this.

    Thanks in advance.


    Thanks & Regards,
    Anupam Ranjan
    Software Engineer
    TCube Solutions Pvt Ltd


    On Saturday, 28 July 2012 04:30:04 UTC+5:30, Harsh J wrote:

    Randhir,

    Good to know. Just as an add-on: if you use Cloudera Manager, it
    manages your alternatives-based symlinks for client configs and keeps
    service configs separately managed, so you never run into problems as
    trivial (yet hard to find) as this one :)
    On Fri, Jul 27, 2012 at 1:10 AM, Randhir wrote:
    Hi Harsh,

    I was able to figure out the problem with the help of another team
    member: /etc/hadoop/conf was not pointing to the right directory.

    The symlink to my configuration files was broken; judging by the date
    and timestamp, it seems the CDH4 packages broke it.

    I created a symlink to my configuration files at
    /etc/hadoop-0.20/conf.my_cluster and it worked. I had to do the same on
    all the other nodes in the cluster, i.e. the Secondary NN and DataNodes.

    Thanks,
    Randhir
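    [Editor's note] Randhir's fix above can be sketched in shell. The /etc
    paths in the comments are the ones from his message; the sketch itself
    uses throwaway temp paths so it can run anywhere without root:

```shell
# Sketch of the symlink repair: re-point the Hadoop client-config link
# at the real configuration directory and verify it resolves.
# On the cluster the link would be /etc/hadoop/conf and the target
# /etc/hadoop-0.20/conf.my_cluster; temp paths stand in for them here.
CONF_DIR=$(mktemp -d)    # stands in for /etc/hadoop-0.20/conf.my_cluster
CONF_LINK=$(mktemp -u)   # stands in for /etc/hadoop/conf

ln -sfn "$CONF_DIR" "$CONF_LINK"   # -n: replace the link itself, not its target
readlink -f "$CONF_LINK"           # should print the real conf directory
```

    On CDH hosts the conf symlink is normally managed through the
    alternatives system (on CDH4 the alternative is typically named
    hadoop-conf), so inspecting `ls -l /etc/hadoop/conf` or
    `alternatives --display hadoop-conf` before re-linking by hand is
    worthwhile.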

    --
    Harsh J
    --



    --
    Harsh J

    --
  • Anupam Ranjan at Apr 17, 2013 at 10:43 am
    Harsh,

    I tried several times through the CM interface. In HDFS:
    1. The DataNode is starting.
    2. The Secondary NameNode is starting.
    3. The NameNode is not starting.

    It is showing bad health status. After a few attempts I tried to start
    the namenode through the command line.

    For more detail, I am attaching the full NameNode log from the CM
    interface.

    Please check and let me know what needs to be done for this issue.

    Thanks
    *Anupam Ranjan*

    --


    --
  • Harsh J at Apr 17, 2013 at 1:12 pm
    The below seems to be your issue:

    2013-04-17 14:43:31,335 FATAL
    org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode
    join
    java.io.IOException: Gap in transactions. Expected to be able to read
    up until at least txid 2647 but unable to find any edit logs
    containing txid 2647

    I also notice you seem to have two NN dirs configured. Can you
    paste/pastebin the output of the commands below from your NN?

    ls -l /data/dfs/nn/current
    cat /data/dfs/nn/current/seen_txid

    ls -l /dfs/nn/current
    cat /dfs/nn/current/seen_txid
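    [Editor's note] The "Gap in transactions" error is the NameNode finding
    that no finalized edits_<start>-<end> segment covers every txid up to
    the value in seen_txid. A toy sketch of that check, using a scratch
    directory and fabricated file names (on a real NN the directory would be
    /dfs/nn/current or /data/dfs/nn/current):

```shell
# Fabricate an NN storage dir whose edit logs stop short of seen_txid.
NN_CUR=$(mktemp -d)
echo 2647 > "$NN_CUR/seen_txid"
touch "$NN_CUR/edits_0000000000000000001-0000000000000002600"

# Find the highest txid covered by any finalized edits segment.
seen=$(cat "$NN_CUR/seen_txid")
last=0
for f in "$NN_CUR"/edits_*-*; do
  end=$(echo "${f##*-}" | sed 's/^0*//')   # txid after the last dash
  [ "$end" -gt "$last" ] && last=$end
done

# Transactions 2601..2647 are covered by no segment: that is the "gap".
if [ "$last" -lt "$seen" ]; then
  echo "gap: edit logs end at txid $last but seen_txid is $seen"
fi
```

    If your build supports it, Hadoop also ships a recovery mode
    (hdfs namenode -recover) for this kind of edit-log damage, but it can
    discard transactions, so back up both name directories before trying it.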

    --
    Harsh J

    --
  • Anupam Ranjan at Apr 18, 2013 at 10:54 am
    Hi Harsh,

    I executed the your commands and got the same output as you can see below.

    [root@clouderra ~]# cat /data/dfs/nn/current/seen_txid
    2647

    [root@clouderra ~]# cat /dfs/nn/current/seen_txid
    2647

    Yes, it seems I have two NN dirs configured, but how do I resolve this issue?

    Please help me on this.

    Thanks
    *Anupam Ranjan*
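    [Editor's note] Before removing either directory, it is worth confirming that the two configured NN dirs really hold the same metadata state. A minimal sketch of that check, run here against mock directories under /tmp so it works anywhere; on the real cluster the paths would be the /dfs/nn and /data/dfs/nn from this thread, and you would compare the VERSION file as well:

    ```shell
    # Mock stand-ins for the two NameNode storage dirs from the thread.
    mkdir -p /tmp/nn_a/current /tmp/nn_b/current
    echo 2647 > /tmp/nn_a/current/seen_txid
    echo 2647 > /tmp/nn_b/current/seen_txid

    # Identical seen_txid (and VERSION) contents suggest both copies are
    # current; a mismatch means one dir is stale.
    diff /tmp/nn_a/current/seen_txid /tmp/nn_b/current/seen_txid \
      && echo "seen_txid matches"
    ```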

  • Harsh J at Apr 18, 2013 at 11:41 am
    Can we have the ls as well, as I'd requested in my previous post?

  • Anupam Ranjan at Apr 18, 2013 at 12:19 pm
    As per your query, I am attaching a file with the output of all the commands.

    Please check and let me know how I can proceed.

    Thanks
    *Anupam Ranjan*

  • Anupam Ranjan at Apr 19, 2013 at 6:44 am
    Hi Harsh,

    Here is some more information on the NN. I opened CM in the browser, and under HDFS
    --> Configuration --> NameNode I found the following.

    1. NameNode Data Directories
    dfs.name.dir : /dfs/nn
    dfs.namenode.name.dir: /data/dfs/nn

    2. NameNode Edits Directories
    dfs.namenode.edits.dir : Default value is empty. Click here to edit

    3. Shared Edits Directory
    dfs.namenode.shared.edits.dir: Default value is empty. Click here to edit
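    [Editor's note] For context: dfs.name.dir is the deprecated CDH3-era spelling of dfs.namenode.name.dir, so having the two keys set to different paths is what produces the two NN dirs seen above. If the intent is a single storage directory, a hedged sketch of the consolidated hdfs-site.xml entry is below; the path is the one reported in this thread, not a recommendation, and both directories should be backed up and verified identical before either key is changed.

    ```xml
    <!-- hdfs-site.xml: keep one authoritative NN storage dir under the
         non-deprecated key; /data/dfs/nn is the path from this thread. -->
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>/data/dfs/nn</value>
    </property>
    ```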



    Thanks
    *Anupam Ranjan*


Discussion Overview
group: cdh-user
categories: hadoop
posted: Jul 25, '12 at 11:57p
active: Apr 19, '13 at 6:44a
posts: 12
users: 3
website: cloudera.com
irc: #hadoop

3 users in discussion
Anupam Ranjan: 5 posts · Harsh J: 5 posts · Randhir: 2 posts
