Ken Goodhope |
at Jul 15, 2010 at 4:20 pm
Inside hadoop-env.sh, you will see a property that sets the directory pid
files are written to. Check which directory it is, and then investigate the
possibility that some other process is deleting or overwriting those files.
If you are using NFS, with all nodes pointing at the same directory, then
each node may simply be overwriting the same file.
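For example, you can keep the pid files on local disk on each node by setting
HADOOP_PID_DIR in hadoop-env.sh (the path below is only an illustration;
use whatever node-local directory suits your setup):

    # hadoop-env.sh: keep pid files on a node-local directory instead of NFS
    export HADOOP_PID_DIR=/var/hadoop/pids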
Either way, the stop scripts look for those pid files and use them to stop
the correct daemon. If a pid file is not found, or if it contains the wrong
pid, the script will echo that there is no process to stop.
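For reference, the stop branch of bin/hadoop-daemon.sh does roughly the
following (a paraphrased sketch, not the exact script; $command is the daemon
name such as datanode or tasktracker):

    # sketch of the stop logic: the script only knows which process to kill
    # via the pid file it wrote at start time
    pid=$HADOOP_PID_DIR/hadoop-$HADOOP_IDENT_STRING-$command.pid
    if [ -f "$pid" ]; then
      if kill -0 "$(cat "$pid")" > /dev/null 2>&1; then
        echo "stopping $command"
        kill "$(cat "$pid")"
      else
        echo "no $command to stop"
      fi
    else
      echo "no $command to stop"
    fi

So "no tasktracker to stop" means the pid file was missing or stale, not that
the daemon is actually down.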
On Thu, Jul 15, 2010 at 4:51 AM, Karthik Kumar wrote:
Hi,
I am using a cluster of two machines, one master and one slave. When I
try to stop the cluster using stop-all.sh, it displays the output below, and
the tasktracker and datanode on the slave are not stopped. Please help me
solve this.
stopping jobtracker
160.110.150.29: no tasktracker to stop
stopping namenode
160.110.150.29: no datanode to stop
localhost: stopping secondarynamenode
--
With Regards,
Karthik