FAQ
Hi Ken,

Thank you for your quick reply. I don't know how to find the process
that is overwriting those files. Anyhow, I re-installed Cygwin from
scratch and the problem is solved.
On Thu, Jul 15, 2010 at 9:49 PM, Ken Goodhope wrote:

Inside hadoop-env.sh, you will see a property that sets the directory for
pids to be written to. Check which directory it is and then investigate
the possibility that some other process is deleting or overwriting those
files. If you are using NFS, with all nodes pointing at the same directory,
then it might be a matter of each node overwriting the same file.
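
The property in question is HADOOP_PID_DIR. As a rough sketch (the path
below is only illustrative), pointing it at a node-local directory rather
than the default /tmp or a shared NFS mount avoids the clash described
above:

    # conf/hadoop-env.sh
    # Directory where the daemons write their .pid files (defaults to /tmp).
    # Use a node-local path so a /tmp cleaner or another node cannot
    # delete or overwrite them.
    export HADOOP_PID_DIR=/var/hadoop/pids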

Either way, the stop scripts look for those pid files and use them to stop
the correct daemon. If a pid file is not found, or if it contains the
wrong pid, the script will echo that there is no process to stop.
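
The stop logic in bin/hadoop-daemon.sh boils down to roughly the following
(a simplified paraphrase, not the verbatim script; variable names follow
the 0.20-era scripts):

    # Simplified sketch of "hadoop-daemon.sh stop <command>"
    pid=$HADOOP_PID_DIR/hadoop-$HADOOP_IDENT_STRING-$command.pid
    if [ -f "$pid" ]; then
      if kill -0 `cat "$pid"` > /dev/null 2>&1; then
        echo stopping $command
        kill `cat "$pid"`
      else
        echo no $command to stop   # pid file exists but the pid is stale
      fi
    else
      echo no $command to stop     # pid file was never written or was removed
    fi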

On Thu, Jul 15, 2010 at 4:51 AM, Karthik Kumar <karthik84kumar@gmail.com>
wrote:
Hi,

I am using a cluster of two machines, one master and one slave. When I
try to stop the cluster using stop-all.sh it displays the output below,
and the tasktracker and datanode on the slave are not stopped. Please
help me in solving this.

stopping jobtracker
160.110.150.29: no tasktracker to stop
stopping namenode
160.110.150.29: no datanode to stop
localhost: stopping secondarynamenode


--
With Regards,
Karthik
