FAQ
I am running Hadoop on a single server. The issue I am running into is that
the start-all.sh script is not starting the NameNode.

The only way I can start the NameNode is by formatting it, and I end up
losing the data in HDFS.



Does anyone have a solution to this issue?



Kaushal


  • Edmund Kohlwey at Nov 10, 2009 at 2:58 pm
    Is there error output from start-all.sh?



  • Stephen Watt at Nov 10, 2009 at 9:28 pm
    You need to go to your logs directory and have a look at what is going on
    in the namenode log. What version are you using?

    I'm going to take a guess at your issue here and say that you used /tmp
    as a path for some of your Hadoop conf settings and you have rebooted
    lately. The /tmp dir is wiped out on reboot.

    Kind regards
    Steve Watt
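A minimal sketch of what Steve is describing: an override in the conf XML (hadoop-site.xml on releases before 0.20, core-site.xml afterwards) that keeps Hadoop's working data out of /tmp. The /srv/hadoop/tmp path is only an example; any directory that survives reboots will do.

```xml
<!-- Hypothetical override: relocate hadoop.tmp.dir, which many other
     paths (including the default NameNode storage dir) derive from. -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/srv/hadoop/tmp</value>
</property>
```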



  • Sagar at Nov 10, 2009 at 9:30 pm
    Did you format it for the first time? Another quick way to figure out
    what is wrong is to start the NameNode in the foreground:

    ${HADOOP_HOME}/bin/hadoop namenode

    and see what error it gives.

    -Sagar


  • Kaushal Amin at Nov 11, 2009 at 7:37 pm
    I am seeing the following error in my NameNode log file.

    2009-11-11 10:59:59,407 ERROR
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
    initialization failed.
    2009-11-11 10:59:59,449 ERROR
    org.apache.hadoop.hdfs.server.namenode.NameNode:
    org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory
    /tmp/hadoop-root/dfs/name is in an inconsistent state: storage directory
    does not exist or is not accessible.

    Any idea?


  • Edward Capriolo at Nov 11, 2009 at 7:50 pm
    Are you starting Hadoop as a different user? Maybe the first time you
    started it as user hadoop, and this time you are starting it as root.

    Or, as stated above, something is cleaning out your /tmp. Use your
    configuration files to have the NameNode write to a permanent place.

    Edward

  • Kaushal Amin at Nov 11, 2009 at 7:56 pm
    which configuration file?

  • Edward Capriolo at Nov 11, 2009 at 8:04 pm
    The property you are going to need to set is

    <property>
      <name>dfs.name.dir</name>
      <value>${hadoop.tmp.dir}/dfs/name</value>
      <description>Determines where on the local filesystem the DFS name node
        should store the name table. If this is a comma-delimited list
        of directories, then the name table is replicated in all of the
        directories, for redundancy.</description>
    </property>


    If you are running 0.20 or later, the information about the critical
    variables you need to set up to get running is here (give these a good
    read-through):

    http://hadoop.apache.org/common/docs/current/quickstart.html
    http://hadoop.apache.org/common/docs/current/cluster_setup.html

    If you are running a version older than 0.20, you can look in
    hadoop-default.xml and make your changes in hadoop-site.xml.

    Edward
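A concrete override along those lines would replace the ${hadoop.tmp.dir}-derived default with fixed paths. The /srv and /backup locations below are purely illustrative; the point is that a comma-separated list replicates the name table across every listed directory.

```xml
<!-- Hypothetical example: persistent NameNode metadata directories,
     replicated across two locations for redundancy. -->
<property>
  <name>dfs.name.dir</name>
  <value>/srv/hadoop/dfs/name,/backup/hadoop/dfs/name</value>
</property>
```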

  • Starry SHI at Nov 14, 2009 at 5:51 am
    Actually, you can put hadoop.tmp.dir somewhere other than /tmp, e.g.
    /opt/hadoop_tmp or /var/hadoop_tmp. First create the folder there and
    assign the correct mode to it (chmod 777 so that all users can use
    Hadoop). Then change the conf XML file accordingly, run "hadoop
    namenode -format", and start it. Hopefully it will work.

    My experience is that putting hadoop.tmp.dir in /tmp makes Hadoop
    unstable, especially for long-running jobs.

    Best regards,
    Starry

    /* Tomorrow is another day. So is today. */
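Starry's steps can be sketched as a short script. $HOME/hadoop_tmp is used here only so the snippet runs without root; the suggested /opt/hadoop_tmp or /var/hadoop_tmp locations would need root to create. HADOOP_HOME is assumed to be set, and the format/start commands are left commented out because formatting wipes existing HDFS metadata.

```shell
#!/bin/sh
# Prepare a persistent directory for hadoop.tmp.dir (path is an example).
HADOOP_TMP="$HOME/hadoop_tmp"

mkdir -p "$HADOOP_TMP"     # create the folder outside /tmp
chmod 777 "$HADOOP_TMP"    # let every user that runs Hadoop write to it

# After pointing hadoop.tmp.dir at $HADOOP_TMP in the conf XML:
#   "$HADOOP_HOME/bin/hadoop" namenode -format
#   "$HADOOP_HOME/bin/start-all.sh"
echo "prepared $HADOOP_TMP"
```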


Discussion Overview
group: common-user
categories: hadoop
posted: Nov 10, '09 at 2:47p
active: Nov 14, '09 at 5:51a
posts: 9
users: 6
website: hadoop.apache.org...
irc: #hadoop
