hadoop cluster mode not starting up
Hello All:

I used a combination of tutorials to set up Hadoop, but most of them seem to use either an old version of Hadoop or only 2 machines for the cluster, which isn't really a cluster. Does anyone know of a good tutorial that sets up multiple nodes for a cluster? I already looked at the Apache website, but it does not give sample values for the conf files. Also, each set of tutorials seems to indicate a different set of parameters that should be changed, so now it's a bit confusing. For example, my configuration sets a dedicated namenode, a secondary namenode and 8 slave nodes, but when I run the start command it gives an error. Should I install Hadoop in my user directory or under root? I have it in my user directory, but all the nodes share a central file system rather than a distributed one, so whatever I do in my user folder on one node affects all the others. How do I set the paths to ensure that it uses a distributed setup?

For the errors below, I checked the directories and the files are there. I am not sure what went wrong or how to set the conf so it does not use a central file system. Thank you.

Error message
CODE
w1153435@n51:~/hadoop-0.20.2_cluster> bin/start-dfs.sh
bin/start-dfs.sh: line 28: /w1153435/hadoop-0.20.2_cluster/bin/hadoop-config.sh: No such file or directory
bin/start-dfs.sh: line 50: /w1153435/hadoop-0.20.2_cluster/bin/hadoop-daemon.sh: No such file or directory
bin/start-dfs.sh: line 51: /w1153435/hadoop-0.20.2_cluster/bin/hadoop-daemons.sh: No such file or directory
bin/start-dfs.sh: line 52: /w1153435/hadoop-0.20.2_cluster/bin/hadoop-daemons.sh: No such file or directory
CODE

I had tried running this command below earlier but also got problems:
CODE
w1153435@ngs:~/hadoop-0.20.2_cluster> export HADOOP_CONF_DIR=${HADOOP_HOME}/conf
w1153435@ngs:~/hadoop-0.20.2_cluster> export HADOOP_SLAVES=${HADOOP_CONF_DIR}/slaves
w1153435@ngs:~/hadoop-0.20.2_cluster> ${HADOOP_HOME}/bin/slaves.sh "mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop"
-bash: /bin/slaves.sh: No such file or directory
w1153435@ngs:~/hadoop-0.20.2_cluster> export HADOOP_HOME=/home/w1153435/hadoop-0.20.2_cluster
w1153435@ngs:~/hadoop-0.20.2_cluster> ${HADOOP_HOME}/bin/slaves.sh "mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop"
cat: /conf/slaves: No such file or directory
CODE


Cheers,
A Df


  • Steve Loughran at Aug 16, 2011 at 10:10 am

    On 16/08/11 11:02, A Df wrote:
    Error message
    CODE
    w1153435@n51:~/hadoop-0.20.2_cluster> bin/start-dfs.sh
    bin/start-dfs.sh: line 28: /w1153435/hadoop-0.20.2_cluster/bin/hadoop-config.sh: No such file or directory
    bin/start-dfs.sh: line 50: /w1153435/hadoop-0.20.2_cluster/bin/hadoop-daemon.sh: No such file or directory
    bin/start-dfs.sh: line 51: /w1153435/hadoop-0.20.2_cluster/bin/hadoop-daemons.sh: No such file or directory
    bin/start-dfs.sh: line 52: /w1153435/hadoop-0.20.2_cluster/bin/hadoop-daemons.sh: No such file or directory
    CODE
    there's No such file or directory as
    /w1153435/hadoop-0.20.2_cluster/bin/hadoop-daemons.sh
    I had tried running this command below earlier but also got problems:
    CODE
    w1153435@ngs:~/hadoop-0.20.2_cluster> export HADOOP_CONF_DIR=${HADOOP_HOME}/conf
    w1153435@ngs:~/hadoop-0.20.2_cluster> export HADOOP_SLAVES=${HADOOP_CONF_DIR}/slaves
    w1153435@ngs:~/hadoop-0.20.2_cluster> ${HADOOP_HOME}/bin/slaves.sh "mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop"
    -bash: /bin/slaves.sh: No such file or directory
    w1153435@ngs:~/hadoop-0.20.2_cluster> export HADOOP_HOME=/home/w1153435/hadoop-0.20.2_cluster
    w1153435@ngs:~/hadoop-0.20.2_cluster> ${HADOOP_HOME}/bin/slaves.sh "mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop"
    cat: /conf/slaves: No such file or directory
    CODE
    there's No such file or directory as /conf/slaves because you set
    HADOOP_HOME after setting the other env variables, which are expanded at
    set-time, not run-time.
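
    A quick shell illustration of the ordering effect Steve describes, reusing the paths from above:
    CODE
    # Variables are expanded when the assignment runs, not when they are used later.
    $ unset HADOOP_HOME
    $ export HADOOP_CONF_DIR=${HADOOP_HOME}/conf   # HADOOP_HOME is empty here
    $ echo $HADOOP_CONF_DIR
    /conf
    $ export HADOOP_HOME=/home/w1153435/hadoop-0.20.2_cluster
    $ echo $HADOOP_CONF_DIR                        # still the old value
    /conf
    # Setting HADOOP_HOME first gives the intended result:
    $ export HADOOP_HOME=/home/w1153435/hadoop-0.20.2_cluster
    $ export HADOOP_CONF_DIR=${HADOOP_HOME}/conf
    $ echo $HADOOP_CONF_DIR
    /home/w1153435/hadoop-0.20.2_cluster/conf
    CODE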
  • A Df at Aug 16, 2011 at 10:20 am
    See inline


    ________________________________
    From: Steve Loughran <stevel@apache.org>
    To: common-user@hadoop.apache.org
    Sent: Tuesday, 16 August 2011, 11:08
    Subject: Re: hadoop cluster mode not starting up
    On 16/08/11 11:02, A Df wrote:
    Error message
    CODE
    w1153435@n51:~/hadoop-0.20.2_cluster>  bin/start-dfs.sh
    bin/start-dfs.sh: line 28: /w1153435/hadoop-0.20.2_cluster/bin/hadoop-config.sh: No such file or directory
    bin/start-dfs.sh: line 50: /w1153435/hadoop-0.20.2_cluster/bin/hadoop-daemon.sh: No such file or directory
    bin/start-dfs.sh: line 51: /w1153435/hadoop-0.20.2_cluster/bin/hadoop-daemons.sh: No such file or directory
    bin/start-dfs.sh: line 52: /w1153435/hadoop-0.20.2_cluster/bin/hadoop-daemons.sh: No such file or directory
    CODE
    there's  No such file or directory as
    /w1153435/hadoop-0.20.2_cluster/bin/hadoop-daemons.sh


    There is, I checked as shown
    w1153435@n51:~/hadoop-0.20.2_cluster> ls bin
    hadoop             rcc                start-dfs.sh      stop-dfs.sh
    hadoop-config.sh   slaves.sh          start-mapred.sh   stop-mapred.sh
    hadoop-daemon.sh   start-all.sh       stop-all.sh
    hadoop-daemons.sh  start-balancer.sh  stop-balancer.sh



    I had tried running this command below earlier but also got problems:
    CODE
    w1153435@ngs:~/hadoop-0.20.2_cluster>  export HADOOP_CONF_DIR=${HADOOP_HOME}/conf
    w1153435@ngs:~/hadoop-0.20.2_cluster>  export HADOOP_SLAVES=${HADOOP_CONF_DIR}/slaves
    w1153435@ngs:~/hadoop-0.20.2_cluster>  ${HADOOP_HOME}/bin/slaves.sh "mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop"
    -bash: /bin/slaves.sh: No such file or directory
    w1153435@ngs:~/hadoop-0.20.2_cluster>  export HADOOP_HOME=/home/w1153435/hadoop-0.20.2_cluster
    w1153435@ngs:~/hadoop-0.20.2_cluster>  ${HADOOP_HOME}/bin/slaves.sh "mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop"
    cat: /conf/slaves: No such file or directory
    CODE
    there's  No such file or directory as /conf/slaves because you set
    HADOOP_HOME after setting the other env variables, which are expanded at
    set-time, not run-time.

    I redid the command but still have errors on the slaves


    w1153435@n51:~/hadoop-0.20.2_cluster> export HADOOP_HOME=/home/w1153435/hadoop-0.20.2_cluster
    w1153435@n51:~/hadoop-0.20.2_cluster> export HADOOP_CONF_DIR=${HADOOP_HOME}/conf
    w1153435@n51:~/hadoop-0.20.2_cluster> export HADOOP_SLAVES=${HADOOP_CONF_DIR}/slaves
    w1153435@n51:~/hadoop-0.20.2_cluster> ${HADOOP_HOME}/bin/slaves.sh "mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop"
    privn51: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn58: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn52: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn55: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn57: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn54: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn53: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn56: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
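
    Judging by the error text, the remote bash is treating the whole quoted string as a single command name (a file literally called "mkdir -p /home/..."). A sketch of what might work instead, untested here, with the same path as above:
    CODE
    # Quoted: the whole string travels as one word and fails on the slaves.
    $ ${HADOOP_HOME}/bin/slaves.sh "mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop"

    # Unquoted: the command reaches the remote shell as separate words.
    $ ${HADOOP_HOME}/bin/slaves.sh mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop
    CODE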
  • Shahnawaz Saifi at Aug 16, 2011 at 10:37 am
Hi Df, can you check what you get from: echo $HADOOP_HOME

    --
    Thanks,
    Shah
  • Steve Loughran at Aug 16, 2011 at 11:09 am

    On 16/08/11 11:19, A Df wrote:
    See inline


    ________________________________
    From: Steve Loughran<stevel@apache.org>
    To: common-user@hadoop.apache.org
    Sent: Tuesday, 16 August 2011, 11:08
    Subject: Re: hadoop cluster mode not starting up
    On 16/08/11 11:02, A Df wrote:
    Error message
    CODE
    w1153435@n51:~/hadoop-0.20.2_cluster> bin/start-dfs.sh
    bin/start-dfs.sh: line 28: /w1153435/hadoop-0.20.2_cluster/bin/hadoop-config.sh: No such file or directory
    bin/start-dfs.sh: line 50: /w1153435/hadoop-0.20.2_cluster/bin/hadoop-daemon.sh: No such file or directory
    bin/start-dfs.sh: line 51: /w1153435/hadoop-0.20.2_cluster/bin/hadoop-daemons.sh: No such file or directory
    bin/start-dfs.sh: line 52: /w1153435/hadoop-0.20.2_cluster/bin/hadoop-daemons.sh: No such file or directory
    CODE
    there's No such file or directory as
    /w1153435/hadoop-0.20.2_cluster/bin/hadoop-daemons.sh


    There is, I checked as shown
    w1153435@n51:~/hadoop-0.20.2_cluster> ls bin
    hadoop rcc start-dfs.sh stop-dfs.sh
    hadoop-config.sh slaves.sh start-mapred.sh stop-mapred.sh
    hadoop-daemon.sh start-all.sh stop-all.sh
    hadoop-daemons.sh start-balancer.sh stop-balancer.sh
    try "pwd" to print out where the OS thinks you are, as it doesn't seem
    to be where you think you are


    I had tried running this command below earlier but also got problems:
    CODE
    w1153435@ngs:~/hadoop-0.20.2_cluster> export HADOOP_CONF_DIR=${HADOOP_HOME}/conf
    w1153435@ngs:~/hadoop-0.20.2_cluster> export HADOOP_SLAVES=${HADOOP_CONF_DIR}/slaves
    w1153435@ngs:~/hadoop-0.20.2_cluster> ${HADOOP_HOME}/bin/slaves.sh "mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop"
    -bash: /bin/slaves.sh: No such file or directory
    w1153435@ngs:~/hadoop-0.20.2_cluster> export HADOOP_HOME=/home/w1153435/hadoop-0.20.2_cluster
    w1153435@ngs:~/hadoop-0.20.2_cluster> ${HADOOP_HOME}/bin/slaves.sh "mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop"
    cat: /conf/slaves: No such file or directory
    CODE
    there's No such file or directory as /conf/slaves because you set
    HADOOP_HOME after setting the other env variables, which are expanded at
    set-time, not run-time.

    I redid the command but still have errors on the slaves


    w1153435@n51:~/hadoop-0.20.2_cluster> export HADOOP_HOME=/home/w1153435/hadoop-0.20.2_cluster
    w1153435@n51:~/hadoop-0.20.2_cluster> export HADOOP_CONF_DIR=${HADOOP_HOME}/conf
    w1153435@n51:~/hadoop-0.20.2_cluster> export HADOOP_SLAVES=${HADOOP_CONF_DIR}/slaves
    w1153435@n51:~/hadoop-0.20.2_cluster> ${HADOOP_HOME}/bin/slaves.sh "mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop"
    privn51: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn58: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn52: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn55: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn57: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn54: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn53: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn56: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    try ssh-ing in, do it by hand, make sure you have the right permissions etc
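
    For example, something along these lines on one of the slaves named in the output above (prompts are illustrative):
    CODE
    w1153435@n51:~> ssh privn51
    w1153435@privn51:~> mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop
    w1153435@privn51:~> ls -ld /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop
    w1153435@privn51:~> exit
    CODE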
  • A Df at Aug 16, 2011 at 11:36 am
    I already used a few tutorials as follows:
* The Hadoop tutorial on the Yahoo Developer Network, which uses an old version of Hadoop and thus older conf files.

    * http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/ which uses only two nodes, with the master acting as both namenode and secondary namenode. I need one with more nodes than that.


    Is there a way to prevent the nodes from using the central file system? I don't have root permission, and my user folder is on a central file system which is replicated across all the nodes.

    See inline too for my responses


    ________________________________
    From: Steve Loughran <stevel@apache.org>
    To: common-user@hadoop.apache.org
    Sent: Tuesday, 16 August 2011, 12:08
    Subject: Re: hadoop cluster mode not starting up
    On 16/08/11 11:19, A Df wrote:
    See inline


    ________________________________
    From: Steve Loughran<stevel@apache.org>
    To: common-user@hadoop.apache.org
    Sent: Tuesday, 16 August 2011, 11:08
    Subject: Re: hadoop cluster mode not starting up
    On 16/08/11 11:02, A Df wrote:
    Error message
    CODE
    w1153435@n51:~/hadoop-0.20.2_cluster>  bin/start-dfs.sh
    bin/start-dfs.sh: line 28: /w1153435/hadoop-0.20.2_cluster/bin/hadoop-config.sh: No such file or directory
    bin/start-dfs.sh: line 50: /w1153435/hadoop-0.20.2_cluster/bin/hadoop-daemon.sh: No such file or directory
    bin/start-dfs.sh: line 51: /w1153435/hadoop-0.20.2_cluster/bin/hadoop-daemons.sh: No such file or directory
    bin/start-dfs.sh: line 52: /w1153435/hadoop-0.20.2_cluster/bin/hadoop-daemons.sh: No such file or directory
    CODE
    there's  No such file or directory as
    /w1153435/hadoop-0.20.2_cluster/bin/hadoop-daemons.sh


    There is, I checked as shown
    w1153435@n51:~/hadoop-0.20.2_cluster>  ls bin
    hadoop            rcc                start-dfs.sh      stop-dfs.sh
    hadoop-config.sh  slaves.sh          start-mapred.sh  stop-mapred.sh
    hadoop-daemon.sh  start-all.sh      stop-all.sh
    hadoop-daemons.sh  start-balancer.sh  stop-balancer.sh
    try "pwd" to print out where the OS thinks you are, as it doesn't seem
    to be where you think you are


    w1153435@ngs:~/hadoop-0.20.2_cluster> pwd
    /home/w1153435/hadoop-0.20.2_cluster


    w1153435@ngs:~/hadoop-0.20.2_cluster/bin> pwd
    /home/w1153435/hadoop-0.20.2_cluster/bin


    I had tried running this command below earlier but also got problems:
    CODE
    w1153435@ngs:~/hadoop-0.20.2_cluster>  export HADOOP_CONF_DIR=${HADOOP_HOME}/conf
    w1153435@ngs:~/hadoop-0.20.2_cluster>  export HADOOP_SLAVES=${HADOOP_CONF_DIR}/slaves
    w1153435@ngs:~/hadoop-0.20.2_cluster>  ${HADOOP_HOME}/bin/slaves.sh "mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop"
    -bash: /bin/slaves.sh: No such file or directory
    w1153435@ngs:~/hadoop-0.20.2_cluster>  export HADOOP_HOME=/home/w1153435/hadoop-0.20.2_cluster
    w1153435@ngs:~/hadoop-0.20.2_cluster>  ${HADOOP_HOME}/bin/slaves.sh "mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop"
    cat: /conf/slaves: No such file or directory
    CODE
    there's  No such file or directory as /conf/slaves because you set
    HADOOP_HOME after setting the other env variables, which are expanded at
    set-time, not run-time.

    I redid the command but still have errors on the slaves


    w1153435@n51:~/hadoop-0.20.2_cluster>  export HADOOP_HOME=/home/w1153435/hadoop-0.20.2_cluster
    w1153435@n51:~/hadoop-0.20.2_cluster>  export HADOOP_CONF_DIR=${HADOOP_HOME}/conf
    w1153435@n51:~/hadoop-0.20.2_cluster>  export HADOOP_SLAVES=${HADOOP_CONF_DIR}/slaves
    w1153435@n51:~/hadoop-0.20.2_cluster>  ${HADOOP_HOME}/bin/slaves.sh "mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop"
    privn51: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn58: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn52: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn55: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn57: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn54: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn53: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn56: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    try ssh-ing in, do it by hand, make sure you have the right permissions etc


I reset the above path variables again, checked that they exist, and tried the command above, but I get the same error. I can ssh with no problems and no password prompt, so that is fine. What else could be wrong?
    w1153435@ngs:~/hadoop-0.20.2_cluster> echo $HADOOP_HOME
    /home/w1153435/hadoop-0.20.2_cluster
    w1153435@ngs:~/hadoop-0.20.2_cluster> echo $HADOOP_CONF_DIR
    /home/w1153435/hadoop-0.20.2_cluster/conf
    w1153435@ngs:~/hadoop-0.20.2_cluster> echo $HADOOP_SLAVES
    /home/w1153435/hadoop-0.20.2_cluster/conf/slaves
    w1153435@ngs:~/hadoop-0.20.2_cluster>



  • Shanmuganathan.r at Aug 16, 2011 at 12:36 pm
    Hi Df,

Are you using IPs instead of names in conf/masters and conf/slaves? For running the secondary namenode on a separate machine, refer to the following link:



    http://www.hadoop-blog.com/2010/12/secondarynamenode-process-is-starting.html


    Regards,

    Shanmuganathan
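
    For reference, both files are plain host lists, one entry per line. A sketch using the hostnames that appear elsewhere in this thread (your actual set may differ); note that in 0.20, start-dfs.sh starts datanodes on the hosts listed in conf/slaves and the secondary namenode on the hosts listed in conf/masters, while the namenode runs wherever you invoke the script:
    CODE
    # conf/masters -- where the *secondary* namenode is started
    ngs.uni.ac.uk

    # conf/slaves -- where datanodes/tasktrackers are started
    privn51
    privn52
    privn53
    privn54
    privn55
    privn56
    privn57
    privn58
    CODE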



  • A Df at Aug 16, 2011 at 3:21 pm
    See inline:

    ________________________________
    From: shanmuganathan.r <shanmuganathan.r@zohocorp.com>
    To: common-user@hadoop.apache.org
    Sent: Tuesday, 16 August 2011, 13:35
    Subject: Re: hadoop cluster mode not starting up

    Hi Df,

Are you using IPs instead of names in conf/masters and conf/slaves? For running the secondary namenode on a separate machine, refer to the following link:


=Yes, I use the names in those files, but the IP addresses are mapped to the names in the /extras/hosts file. Does this cause problems?


    http://www.hadoop-blog.com/2010/12/secondarynamenode-process-is-starting.html


=I don't want to make too many changes, so I will stick to having the master be both namenode and secondary namenode. I tried starting up HDFS and MapReduce, but the jobtracker is not running on the master, and there are still errors regarding the datanodes because only 5 of 7 datanodes have a tasktracker. I ran both commands to start HDFS and MapReduce, so why is the jobtracker missing?
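
    One way to narrow that down, as a rough sketch (the exact log file name depends on the user and hostname, so the wildcard is a guess):
    CODE
    # On the master, list the Java daemons that are actually running:
    $ jps
    # If JobTracker is not listed, its log usually says why it exited:
    $ tail -n 50 ${HADOOP_HOME}/logs/hadoop-w1153435-jobtracker-*.log
    CODE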

    Regards,

    Shanmuganathan




  • A Df at Aug 17, 2011 at 11:43 am
    Hello Everyone:

I am adding the contents of my config files in the hope that someone will be able to help. See inline for the discussion. I really don't understand why it works in pseudo-distributed mode but gives so many problems in cluster mode. I have tried the instructions from the Apache cluster setup, the Yahoo Developer Network and Michael Noll's tutorial.

    w1153435@ngs:~/hadoop-0.20.2_cluster/conf> cat core-site.xml
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->


<configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://ngs.uni.ac.uk:3000</value>
      </property>
      <property>
        <name>HADOOP_LOG_DIR</name>
        <value>/home/w1153435/hadoop-0.20.2_cluster/var/log/hadoop</value>
      </property>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop</value>
      </property>
    </configuration>

    w1153435@ngs:~/hadoop-0.20.2_cluster/conf> cat hdfs-site.xml
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

<configuration>
      <property>
        <name>dfs.replication</name>
        <value>3</value>
      </property>
      <property>
        <name>dfs.http.address</name>
        <value>0.0.0.0:3500</value>
      </property>
      <property>
        <name>dfs.data.dir</name>
        <value>/home/w1153435/hadoop-0.20.2_cluster/dfs/data</value>
        <final>true</final>
      </property>
      <property>
        <name>dfs.name.dir</name>
        <value>/home/w1153435/hadoop-0.20.2_cluster/dfs/name</value>
        <final>true</final>
      </property>
    </configuration>

    w1153435@ngs:~/hadoop-0.20.2_cluster/conf> cat mapred-site.xml
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

<configuration>
      <property>
        <name>mapred.job.tracker</name>
        <value>ngs.uni.ac.uk:3001</value>
      </property>
      <property>
        <name>mapred.system.dir</name>
        <value>/home/w1153435/hadoop-0.20.2_cluster/mapred/system</value>
      </property>
      <property>
        <name>mapred.map.tasks</name>
        <value>80</value>
      </property>
      <property>
        <name>mapred.reduce.tasks</name>
        <value>16</value>
      </property>
    </configuration>

    In addition:

    w1153435@ngs:~/hadoop-0.20.2_cluster> bin/hadoop dfsadmin -report
    Configured Capacity: 0 (0 KB)
    Present Capacity: 0 (0 KB)
    DFS Remaining: 0 (0 KB)
    DFS Used: 0 (0 KB)
    DFS Used%: �%
    Under replicated blocks: 0
    Blocks with corrupt replicas: 0
    Missing blocks: 0

    -------------------------------------------------
    Datanodes available: 1 (1 total, 0 dead)

    Name: 161.74.12.36:50010
    Decommission Status : Normal
    Configured Capacity: 0 (0 KB)
    DFS Used: 0 (0 KB)
    Non DFS Used: 0 (0 KB)
    DFS Remaining: 0(0 KB)
    DFS Used%: 100%
    DFS Remaining%: 0%
    Last contact: Wed Aug 17 12:40:17 BST 2011
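
    Since the report shows only one datanode and zero configured capacity, it may be worth checking what is actually running on each machine and whether the dfs.data.dir from hdfs-site.xml exists on the slaves. A rough sketch, assuming the HADOOP_HOME export from earlier and that jps is on the PATH of the remote shells:
    CODE
    # Daemons on the master:
    $ jps
    # Daemons on every host in conf/slaves (output is prefixed with the hostname):
    $ ${HADOOP_HOME}/bin/slaves.sh jps
    # Does the data directory exist on the slaves?
    $ ${HADOOP_HOME}/bin/slaves.sh ls -ld /home/w1153435/hadoop-0.20.2_cluster/dfs/data
    CODE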

    Cheers,
    A Df

  • Harsh J at Aug 17, 2011 at 11:59 am
    A Df,

    Setting up a proper cluster on a sane network environment is as easy
    as setting up a pseudo-distributed one.

    Some questions:
    - What OS are you deploying hadoop here on?
    - Do you have bash? What version of bash is available?
    - What user/group are you running hadoop as? Is it consistent across
    all slaves+master?

    What I usually used to do to run a fresh cluster is:

    - Ensure that I can ssh from my master to any slave, without a
    password (as hadoop's scripts require).
    - Place Hadoop at a common location across all the machines (Be wary
    of NFS mounts here, you don't want datanode's dfs.data.dir directories
    on NFS mounts for example)
    - Write out a configuration set and pass it to all nodes.
    - Issue a namenode format, and then start-all.sh from the master.

    Perhaps, if your environment supports it, you can ease things out with
    the use of the free tool SCM Express [1] and the likes. These tools
    have a wizard-like interface and point out common issues as you go
    about setting up and running your cluster.

    [1] - http://www.cloudera.com/products-services/scm-express/
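
    A rough shell sketch of those steps, reusing the paths and hostnames from earlier in this thread (adjust to your layout; the conf push assumes conf/slaves contains plain hostnames):
    CODE
    # 1. Passwordless ssh from the master to every slave
    $ ssh privn51 hostname

    # 2. Same install path on every machine (local disk, not an NFS-shared home)
    $ ${HADOOP_HOME}/bin/slaves.sh ls ${HADOOP_HOME}/bin/hadoop-daemon.sh

    # 3. Push one configuration set to all nodes
    $ for h in $(cat ${HADOOP_HOME}/conf/slaves); do rsync -a ${HADOOP_HOME}/conf/ ${h}:${HADOOP_HOME}/conf/; done

    # 4. Format HDFS once, then start everything from the master
    $ ${HADOOP_HOME}/bin/hadoop namenode -format
    $ ${HADOOP_HOME}/bin/start-all.sh
    CODE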
    On Wed, Aug 17, 2011 at 5:12 PM, A Df wrote:
    Hello Everyone:

    I am adding the contents of my config file in the hopes that someone will be able to help. See inline for the discussions. I really don't understand why it works in pseudo-mode but gives so much problems in cluster. I have tried the instructions from the Apache cluster setup, Yahoo Development Network and from Michael Noll's tutorial.

    w1153435@ngs:~/hadoop-0.20.2_cluster/conf> cat core-site.xml
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->


    <configuration>
    <property>
    <name>fs.default.name</name>
    <value>hdfs://ngs.uni.ac.uk:3000</value>
    </property>
    <property>
    <name>HADOOP_LOG_DIR</name>
    <value>/home/w1153435/hadoop-0.20.2_cluster/var/log/hadoop</value>
    </property>
    <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop</value>
    </property>
    </configuration>
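
    One thing worth flagging in the file above: HADOOP_LOG_DIR is an environment variable read by the launcher scripts, not a core-site.xml property, so that property block is silently ignored. If the intent is to relocate the logs, the usual place in a 0.20.x install is conf/hadoop-env.sh, roughly:

    CODE
    # conf/hadoop-env.sh -- sketch; path reused from the property above
    export HADOOP_LOG_DIR=/home/w1153435/hadoop-0.20.2_cluster/var/log/hadoop
    CODE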

    w1153435@ngs:~/hadoop-0.20.2_cluster/conf> cat hdfs-site.xml
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
    <property>
    <name>dfs.replication</name>
    <value>3</value>
    </property>
    <property>
    <name>dfs.http.address</name>
    <value>0.0.0.0:3500</value>
    </property>
    <property>
    <name>dfs.data.dir</name>
    <value>/home/w1153435/hadoop-0.20.2_cluster/dfs/data</value>
    <final>true</final>
    </property>
    <property>
    <name>dfs.name.dir</name>
    <value>/home/w1153435/hadoop-0.20.2_cluster/dfs/name</value>
    <final>true</final>
    </property>
    </configuration>
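
    Since the home directory here sits on a central file system replicated across the nodes (as described elsewhere in this thread), dfs.data.dir and dfs.name.dir above point every daemon at the same shared location, which is the situation Harsh warns about earlier. A sketch of pointing the datanodes at node-local storage instead -- the /tmp path is only an assumption about what is writable on these machines:

    CODE
    <property>
      <name>dfs.data.dir</name>
      <!-- node-local scratch space; must exist and be writable on every slave -->
      <value>/tmp/w1153435/dfs/data</value>
      <final>true</final>
    </property>
    CODE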

    w1153435@ngs:~/hadoop-0.20.2_cluster/conf> cat mapred-site.xml
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
    <property>
    <name>mapred.job.tracker</name>
    <value>ngs.uni.ac.uk:3001</value>
    </property>
    <property>
    <name>mapred.system.dir</name>
    <value>/home/w1153435/hadoop-0.20.2_cluster/mapred/system</value>
    </property>
    <property>
    <name>mapred.map.tasks</name>
    <value>80</value>
    </property>
    <property>
    <name>mapred.reduce.tasks</name>
    <value>16</value>
    </property>

    </configuration>

    In addition:

    w1153435@ngs:~/hadoop-0.20.2_cluster> bin/hadoop dfsadmin -report
    Configured Capacity: 0 (0 KB)
    Present Capacity: 0 (0 KB)
    DFS Remaining: 0 (0 KB)
    DFS Used: 0 (0 KB)
    DFS Used%: �%
    Under replicated blocks: 0
    Blocks with corrupt replicas: 0
    Missing blocks: 0

    -------------------------------------------------
    Datanodes available: 1 (1 total, 0 dead)

    Name: 161.74.12.36:50010
    Decommission Status : Normal
    Configured Capacity: 0 (0 KB)
    DFS Used: 0 (0 KB)
    Non DFS Used: 0 (0 KB)
    DFS Remaining: 0(0 KB)
    DFS Used%: 100%
    DFS Remaining%: 0%
    Last contact: Wed Aug 17 12:40:17 BST 2011

    Cheers,
    A Df
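
    The dfsadmin report above shows only one datanode registered and zero configured capacity, which usually means the remaining datanodes either never started or could not use their dfs.data.dir. Two quick checks from the master (a sketch; it assumes the default log location under $HADOOP_HOME/logs and that jps is on the PATH of the remote shells):

    CODE
    # which Java daemons are actually running on each slave
    ${HADOOP_HOME}/bin/slaves.sh jps

    # with the shared home directory, every node's logs land under the same
    # logs/ directory, distinguished by hostname in the file name
    tail -n 50 ${HADOOP_HOME}/logs/hadoop-w1153435-datanode-*.log
    CODE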
    ________________________________
    From: A Df <abbey_dragonforest@yahoo.com>
    To: "common-user@hadoop.apache.org" <common-user@hadoop.apache.org>; "shanmuganathan.r@zohocorp.com" <shanmuganathan.r@zohocorp.com>
    Sent: Tuesday, 16 August 2011, 16:20
    Subject: Re: hadoop cluster mode not starting up



    See inline:

    ________________________________
    From: shanmuganathan.r <shanmuganathan.r@zohocorp.com>
    To: common-user@hadoop.apache.org
    Sent: Tuesday, 16 August 2011, 13:35
    Subject: Re: hadoop cluster mode not starting up

    Hi Df,

    Are you using IPs instead of names in conf/masters and conf/slaves? For running the secondary namenode on a separate machine, refer to the following link:


    =Yes, I use the names in those files, but the IP addresses are mapped to the names in the /extras/hosts file. Does this cause problems?


    http://www.hadoop-blog.com/2010/12/secondarynamenode-process-is-starting.html


    =I don't want to make too many changes, so I will stick to having the master be both namenode and secondary namenode. I tried starting up HDFS and MapReduce, but the jobtracker is not running on the master and there are still errors regarding the datanodes, because only 5 of 7 datanodes have a tasktracker. I ran both commands to start HDFS and MapReduce, so why is the jobtracker missing?

    Regards,

    Shanmuganathan



    ---- On Tue, 16 Aug 2011 17:06:04 +0530 A Df<abbey_dragonforest@yahoo.com> wrote ----


    I already used a few tutorials as follows:
    * Hadoop Tutorial on the Yahoo Developer Network, which uses an old Hadoop version and thus older conf files.

    * http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/ which only has two nodes and the master acts as namenode and secondary namenode. I need one with more than that.


    Is there a way to prevent the nodes from using the central file system? I don't have root permission, and my user folder is on a central file system which is replicated on all the nodes.

    See inline too for my responses


    ________________________________
    From: Steve Loughran <stevel@apache.org>
    To: common-user@hadoop.apache.org
    Sent: Tuesday, 16 August 2011, 12:08
    Subject: Re: hadoop cluster mode not starting up
    On 16/08/11 11:19, A Df wrote:
    See inline


    ________________________________
    From: Steve Loughran<stevel@apache.org>
    To: common-user@hadoop.apache.org
    Sent: Tuesday, 16 August 2011, 11:08
    Subject: Re: hadoop cluster mode not starting up
    On 16/08/11 11:02, A Df wrote:
    Hello All:

    I used a combination of tutorials to setup hadoop but most seems to be using either an old version of hadoop or only using 2 machines for the cluster which isn't really a cluster. Does anyone know of a good tutorial which setups multiple nodes for a cluster?? I already looked at the Apache website but it does not give sample values for the conf files. Also each set of tutorials seem to have a different set of parameters which they indicate should be changed so now its a bit confusing. For example, my configuration sets a dedicate namenode, secondary namenode and 8 slave nodes but when I run the start command it gives an error. Should I install hadoop to my user directory or on the root? I have it in my directory but all the nodes have a central file system as opposed to distributed so whatever I do on one node in my user folder it affect all the others so how do i set the paths to ensure that it uses a distributed system?

    For the errors below, I checked the directories and the files are there. Am I not sure what went wrong and how to set the conf to not have central file system. Thank you.

    Error message
    CODE
    w1153435@n51:~/hadoop-0.20.2_cluster>  bin/start-dfs.sh
    bin/start-dfs.sh: line 28: /w1153435/hadoop-0.20.2_cluster/bin/hadoop-config.sh: No such file or directory
    bin/start-dfs.sh: line 50: /w1153435/hadoop-0.20.2_cluster/bin/hadoop-daemon.sh: No such file or directory
    bin/start-dfs.sh: line 51: /w1153435/hadoop-0.20.2_cluster/bin/hadoop-daemons.sh: No such file or directory
    bin/start-dfs.sh: line 52: /w1153435/hadoop-0.20.2_cluster/bin/hadoop-daemons.sh: No such file or directory
    CODE
    there's  No such file or directory as
    /w1153435/hadoop-0.20.2_cluster/bin/hadoop-daemons.sh


    There is, I checked as shown
    w1153435@n51:~/hadoop-0.20.2_cluster>  ls bin
    hadoop            rcc                start-dfs.sh      stop-dfs.sh
    hadoop-config.sh  slaves.sh          start-mapred.sh  stop-mapred.sh
    hadoop-daemon.sh  start-all.sh      stop-all.sh
    hadoop-daemons.sh  start-balancer.sh  stop-balancer.sh
    try "pwd" to print out where the OS thinks you are, as it doesn't seem
    to be where you think you are


    w1153435@ngs:~/hadoop-0.20.2_cluster> pwd
    /home/w1153435/hadoop-0.20.2_cluster


    w1153435@ngs:~/hadoop-0.20.2_cluster/bin> pwd
    /home/w1153435/hadoop-0.20.2_cluster/bin


    I had tried running this command below earlier but also got problems:
    CODE
    w1153435@ngs:~/hadoop-0.20.2_cluster>  export HADOOP_CONF_DIR=${HADOOP_HOME}/conf
    w1153435@ngs:~/hadoop-0.20.2_cluster>  export HADOOP_SLAVES=${HADOOP_CONF_DIR}/slaves
    w1153435@ngs:~/hadoop-0.20.2_cluster>  ${HADOOP_HOME}/bin/slaves.sh "mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop"
    -bash: /bin/slaves.sh: No such file or directory
    w1153435@ngs:~/hadoop-0.20.2_cluster>  export HADOOP_HOME=/home/w1153435/hadoop-0.20.2_cluster
    w1153435@ngs:~/hadoop-0.20.2_cluster>  ${HADOOP_HOME}/bin/slaves.sh "mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop"
    cat: /conf/slaves: No such file or directory
    CODE
    there's  No such file or directory as /conf/slaves because you set
    HADOOP_HOME after setting the other env variables, which are expanded at
    set-time, not run-time.
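
    In other words, ${HADOOP_HOME} is substituted the moment the export line runs, as this small illustration shows:

    CODE
    # HADOOP_HOME is still empty here, so only "/conf" gets stored
    export HADOOP_CONF_DIR=${HADOOP_HOME}/conf
    export HADOOP_HOME=/home/w1153435/hadoop-0.20.2_cluster
    echo $HADOOP_CONF_DIR    # prints /conf, not the intended path

    # exporting HADOOP_HOME first gives the expected result
    export HADOOP_HOME=/home/w1153435/hadoop-0.20.2_cluster
    export HADOOP_CONF_DIR=${HADOOP_HOME}/conf
    echo $HADOOP_CONF_DIR    # prints /home/w1153435/hadoop-0.20.2_cluster/conf
    CODE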

    I redid the command but still have errors on the slaves


    w1153435@n51:~/hadoop-0.20.2_cluster>  export HADOOP_HOME=/home/w1153435/hadoop-0.20.2_cluster
    w1153435@n51:~/hadoop-0.20.2_cluster>  export HADOOP_CONF_DIR=${HADOOP_HOME}/conf
    w1153435@n51:~/hadoop-0.20.2_cluster>  export HADOOP_SLAVES=${HADOOP_CONF_DIR}/slaves
    w1153435@n51:~/hadoop-0.20.2_cluster>  ${HADOOP_HOME}/bin/slaves.sh "mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop"
    privn51: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn58: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn52: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn55: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn57: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn54: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn53: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    privn56: bash: mkdir -p /home/w1153435/hadoop-0.20.2_cluster/tmp/hadoop: No such file or directory
    try ssh-ing in, do it by hand, make sure you have the right permissions etc


    I reset the above path variables again, checked that they existed, and tried the command above, but got the same error. I can ssh in with no problems and no password prompt, so that is fine. What else could be wrong?
    w1153435@ngs:~/hadoop-0.20.2_cluster> echo $HADOOP_HOME
    /home/w1153435/hadoop-0.20.2_cluster
    w1153435@ngs:~/hadoop-0.20.2_cluster> echo $HADOOP_CONF_DIR
    /home/w1153435/hadoop-0.20.2_cluster/conf
    w1153435@ngs:~/hadoop-0.20.2_cluster> echo $HADOOP_SLAVES
    /home/w1153435/hadoop-0.20.2_cluster/conf/slaves
    w1153435@ngs:~/hadoop-0.20.2_cluster>






    --
    Harsh J
  • Shanmuganathan.r at Aug 17, 2011 at 12:14 pm
    Hi Df,

    Check that you have the w1153435 user on all machines in the cluster




    and use the same configuration on all machines. Use IPs instead of hostnames (you already said that you don't have root permission):



    <name>fs.default.name</name>
    <value>hdfs://109.9.3.101(ex):3000</value>


    <name>mapred.job.tracker</name>
    <value>109.9.3.101(ex):3001</value>


    Check that passwordless ssh login works from the master to every node.
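
    A typical way to set that up from the master, assuming OpenSSH and a home directory that is shared across the nodes (so a single authorized_keys entry covers them all); privn51 is just one of the slave names used earlier in this thread:

    CODE
    # generate a passphrase-less key and authorise it
    ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys

    # verify: this must not prompt for a password
    ssh privn51 hostname
    CODE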

    Regards,

    Shanmuganathan



  • Shanmuganathan.r at Aug 16, 2011 at 10:17 am
    Hi Df,

    I think you didn't set up the conf/slaves file in Hadoop, and the bin/* files you specified are not present. Verify these files in the bin directory.

    The following link is very useful for configuring Hadoop on multiple nodes.


    http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/
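
    For reference, a minimal sketch of conf/masters and conf/slaves, using host names that appear elsewhere in this thread (replace them with the real ones; in 0.20.x conf/masters lists the host that runs the secondary namenode, and conf/slaves lists one datanode/tasktracker host per line):

    CODE
    # conf/masters
    ngs.uni.ac.uk

    # conf/slaves
    privn51
    privn52
    privn53
    privn54
    privn55
    privn56
    privn57
    privn58
    CODE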

    Regards,

    Shanmuganathan



