FAQ
Hi,

I am using a cluster of two machines, one master and one slave. When I
try to stop the cluster using stop-all.sh, it displays the output below,
and the TaskTracker and DataNode on the slave are not stopped. Please help
me solve this.

stopping jobtracker
160.110.150.29: no tasktracker to stop
stopping namenode
160.110.150.29: no datanode to stop
localhost: stopping secondarynamenode
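
One way to confirm what is actually still running on the slave, and to stop it
by hand if the scripts cannot, is jps from the JDK; the daemon class names below
are what a 0.20-era Hadoop reports, and the PIDs are only examples:

    # On the slave node: list running Hadoop JVMs (jps ships with the JDK)
    jps
    # Example output (PIDs will differ):
    #   4321 DataNode
    #   4567 TaskTracker

    # If the pid files are gone, stop the daemons by PID directly
    kill 4321 4567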


--
With Regards,
Karthik


  • Ken Goodhope at Jul 15, 2010 at 4:20 pm
    Inside hadoop-env.sh, you will see a property that sets the directory the
    pid files are written to. Check which directory it is and then investigate
    the possibility that some other process is deleting or overwriting those
    files. If you are using NFS, with all nodes pointing at the same directory,
    then it might be a matter of each node overwriting the same file.

    Either way, the stop scripts look for those pid files and use them to stop
    the correct daemon. If they are not found, or if the file contains the
    wrong pid, the script will echo that there is no process to stop.
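
    For reference, a minimal sketch of the relevant setting, assuming the stock
    conf/hadoop-env.sh of a 0.20-era release; pointing HADOOP_PID_DIR at a
    node-local path avoids the NFS overwrite problem described above:

        # conf/hadoop-env.sh
        # Directory where the daemon pid files are written. Keep this on local
        # disk so that nodes sharing an NFS-mounted install do not overwrite
        # each other's pid files; /var/hadoop/pids is only an example path.
        export HADOOP_PID_DIR=/var/hadoop/pids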
  • Karthik Kumar at Jul 20, 2010 at 3:05 am
    Hi Ken,

    Thank you for your quick reply. I don't know how to find the process
    that is overwriting those files. Anyhow, I re-installed Cygwin from
    scratch and the problem is solved.


    --
    With Regards,
    Karthik
