FAQ
Hello everybody



I have a problem with adding a new data node to a currently running cluster
without restarting it.



I've found the following solution on the web:



1. Configure conf/slaves and the *.xml files on the master machine.

2. Configure conf/masters and the *.xml files on the slave machine.

3. Run ${HADOOP}/bin/hadoop datanode.
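For anyone trying to reproduce the steps above, the files involved might look like this (a minimal sketch; the hostnames are hypothetical and the layout assumes a Hadoop 0.20.x tarball):

```shell
# A minimal sketch; hostnames are hypothetical.
mkdir -p conf

# Step 1, on the master: list each slave, one hostname per line.
echo "newnode.example.com" >> conf/slaves

# Step 2, on the new slave: point core-site.xml at the master's NameNode:
#   <property>
#     <name>fs.default.name</name>
#     <value>hdfs://master.example.com:9000</value>
#   </property>
cat conf/slaves
```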



But when I ran the commands on the master node, the master node itself was
recognized as a data node.

When I ran the commands on the data node I wanted to add, the data node
was not properly added (the total number of data nodes didn't change).





Does anybody know what I could do to solve this problem?

I'm using Hadoop version 0.20.2.



Kind regards,

Henny Ahn (Ahneuigun@gmail.com)

  • Harsh J at Feb 7, 2011 at 12:18 pm

    On Mon, Feb 7, 2011 at 5:16 PM, ahn wrote:
    > 1. configure conf/slaves and *.xml files on master machine
    > 2. configure conf/master and *.xml files on slave machine

    The 'slaves' and 'masters' files are generally only required on the master
    machine, and only if you are using the start-* scripts supplied with
    Hadoop for use with SSH (the FAQ has an entry on this) from the master.

    > 3. run ${HADOOP}/bin/hadoop datanode
    > But when I ran the commands on the master node, the master node was
    > recognized as a data node.

    Step 3 wasn't a valid command in this case; use start-dfs.sh instead.

    > When I ran the commands on the data node which I want to add, the data
    > node was not properly added. (The total number of data nodes didn't
    > change.)

    What do the logs say for the DataNode on the slave? Does it start
    successfully? If fs.default.name is set properly in the slave's
    core-site.xml, it should be able to communicate properly once started
    (and if the version is not mismatched).

    --
    Harsh J
    www.harshj.com
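A common alternative on 0.20.x, following the point above that the bare `hadoop datanode` invocation was not the right command here, is to start the daemon on the new slave with the bundled script (a sketch; the install path is hypothetical):

```shell
# On the new slave; /opt/hadoop is a hypothetical HADOOP_HOME.
cd /opt/hadoop
bin/hadoop-daemon.sh start datanode   # launches the DataNode in the background

# Then inspect the DataNode log for registration errors:
tail -n 50 logs/hadoop-*-datanode-*.log
```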
  • Jun Young Kim at Feb 8, 2011 at 12:57 am
    How about using the following command to refresh the node list for your new network topology?

    $> hadoop dfsadmin -refreshNodes

    Junyoung Kim (juneng603@gmail.com)
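For context: `-refreshNodes` makes the NameNode re-read the include/exclude host files named by `dfs.hosts` and `dfs.hosts.exclude`. A sketch of that workflow, assuming (hypothetically) that `dfs.hosts` points at conf/include:

```shell
# Hypothetical include file named by dfs.hosts in hdfs-site.xml.
echo "newnode.example.com" >> conf/include

# Tell the NameNode to re-read the host lists without a restart:
bin/hadoop dfsadmin -refreshNodes
```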

  • 안의건 at Feb 10, 2011 at 1:46 pm
    Dear Harsh,

    Your advice gave me insight, and I finally solved my problem.

    I'm not sure this is the correct way, but it worked in my situation.

    I hope it will be helpful to anyone else who has a similar problem.

    ------------------------------------------------------------

    In hadoop/conf:
      update slaves
      update the *.xml files

    hadoop/bin> start-dfs.sh
    hadoop/bin> start-mapred.sh

    ------------------------------------------------------------
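To confirm the new node actually joined, one can check the live data node count reported by the NameNode (a sketch; the exact report format varies by version):

```shell
# Look for the "Datanodes available" line in the cluster report:
bin/hadoop dfsadmin -report | grep -i "Datanodes available"
```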


    Regards,
    Henny (ahneuigun@gmail.com)


Discussion Overview
group: common-user @ hadoop
posted: Feb 7, 2011 at 11:47 am
active: Feb 10, 2011 at 1:46 pm
posts: 4
users: 3