I'm trying to set up a basic Hadoop single-node cluster on a Red Hat
Enterprise Linux 5 system. The Hadoop version is 0.18.3-14.cloudera.CH0_3.
Unfortunately I am having problems getting Hadoop to read the configuration
properties from hadoop-site.xml. Specifically, when I try to run the
following command:

${HADOOP_HOME}/bin/hadoop namenode -format

...it seems that Hadoop is not using the storage directory that I have
configured for HDFS. I get the following output:

---------------------------------------------------------------------
---------------------------------------------------------------------

09/12/14 02:24:14 INFO dfs.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = #####/###.##.#.##
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.18.3-14.cloudera.CH0_3
STARTUP_MSG: build = -r HEAD; compiled by 'root' on Mon Jul 6 15:02:31
EDT 2009
************************************************************/
Re-format filesystem in /tmp/hadoop-myusername/dfs/name ? (Y or N) Y
09/12/14 02:24:17 INFO fs.FSNamesystem: fsOwner=myusername,ugrad
09/12/14 02:24:17 INFO fs.FSNamesystem: supergroup=supergroup
09/12/14 02:24:17 INFO fs.FSNamesystem: isPermissionEnabled=true
09/12/14 02:24:17 INFO dfs.Storage: Image file of size 82 saved in 0
seconds.
09/12/14 02:24:17 INFO dfs.Storage: Storage directory
/tmp/hadoop-myusername/dfs/name has been successfully formatted.
09/12/14 02:24:17 INFO dfs.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at #####/###.##.#.##
************************************************************/

---------------------------------------------------------------------

I do not want it to format /tmp/hadoop-myusername/dfs/name as the storage
directory. Here is what is in my hadoop-site.xml file. Notice that
hadoop.tmp.dir is set to /home/u/fall06/myusername/hadoop_tmp:

---------------------------------------------------------------------
---------------------------------------------------------------------

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>
<name>hadoop.tmp.dir</name>
<value>/home/u/fall06/myusername/hadoop_tmp</value>
</property>

<property>
<name>fs.default.name</name>
<value>hdfs://localhost:50031</value>
</property>

<property>
<name>mapred.job.tracker</name>
<value>localhost:50032</value>
</property>

<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>

---------------------------------------------------------------------
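For reference, here are the relevant 0.18-era defaults from hadoop-default.xml (quoted from memory, so check them against your distribution's copy). Since dfs.name.dir is derived from hadoop.tmp.dir, a hadoop-site.xml that is actually being read should move the name directory out of /tmp:

```xml
<!-- 0.18-era defaults from hadoop-default.xml (from memory; verify
     against your distribution's copy): -->
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop-${user.name}</value>
</property>

<property>
<name>dfs.name.dir</name>
<value>${hadoop.tmp.dir}/dfs/name</value>
</property>
```

So the fact that the format step used /tmp/hadoop-myusername/dfs/name suggests my hadoop.tmp.dir override is not being picked up at all.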


Here are the relevant lines of my .bash_profile where I set some of the
Hadoop environment variables:

---------------------------------------------------------------------
---------------------------------------------------------------------

export HADOOP_HOME=/home/u/fall06/myusername/Desktop/hadoop_project/hadoop
export HADOOP_CONF_DIR=/home/u/fall06/myusername/Desktop/hadoop_project/hadoop/conf

---------------------------------------------------------------------

A few more details on the system setup:

- I did not install Hadoop on the system myself; it was installed by a
system administrator.
- Because the default Hadoop directory is read/execute only, I copied the
Hadoop directory to a folder that I own and ran it from there. My
HADOOP_HOME environment variable is set to this duplicate directory.
- hadoop-site.xml is in my ${HADOOP_HOME}/conf directory.
- I did echo what ${HADOOP_CONF_DIR} was set to while
'${HADOOP_HOME}/bin/hadoop namenode -format' was running. The result was
this:

---------------------------------------------------------------------
---------------------------------------------------------------------
/home/u/fall06/myusername/Desktop/hadoop_project/conf
---------------------------------------------------------------------
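For what it's worth, here is a minimal sketch of how I understand bin/hadoop (via bin/hadoop-config.sh) resolves the conf directory in the 0.18 scripts: when HADOOP_CONF_DIR is unset, it falls back to ${HADOOP_HOME}/conf. The paths below are the ones from my setup:

```shell
# Sketch of the conf-dir fallback (my understanding of the 0.18
# scripts; paths are the ones from my setup above).
HADOOP_HOME=/home/u/fall06/myusername/Desktop/hadoop_project/hadoop
unset HADOOP_CONF_DIR
# If HADOOP_CONF_DIR is unset or empty, default to ${HADOOP_HOME}/conf.
CONF_DIR="${HADOOP_CONF_DIR:-${HADOOP_HOME}/conf}"
echo "$CONF_DIR"
```

If HADOOP_CONF_DIR really were unset inside the script, this fallback would explain Hadoop using a different conf directory than the one exported in .bash_profile.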

That's what I would expect, so I'm stumped. Does anyone have any ideas what
I might be doing wrong, or know of more tests I can run to figure out the
problem? Any and all advice is appreciated.


  • Jeff Zhang at Dec 14, 2009 at 8:58 am
    David,

    You should set dfs.data.dir and dfs.name.dir for HDFS. These two
    directories hold the file system's actual data and metadata,
    respectively.
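
    For example, something like this in hadoop-site.xml (the paths are
    just placeholders; point them at directories you own):

    ```xml
    <!-- Illustrative values only; adapt the paths to your setup. -->
    <property>
    <name>dfs.name.dir</name>
    <value>/home/u/fall06/myusername/hadoop_tmp/dfs/name</value>
    </property>

    <property>
    <name>dfs.data.dir</name>
    <value>/home/u/fall06/myusername/hadoop_tmp/dfs/data</value>
    </property>
    ```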


    Jeff Zhang

    On Mon, Dec 14, 2009 at 4:38 PM, David Stemmer wrote:

Discussion Overview
group: common-user
categories: hadoop
posted: Dec 14, '09 at 8:39a
active: Dec 14, '09 at 8:58a
posts: 2
users: 2
website: hadoop.apache.org...
irc: #hadoop

2 users in discussion

Jeff Zhang: 1 post
David Stemmer: 1 post
