Hadoop not stopping processes
Has anyone else had a problem with Hadoop not stopping processes? I am
running 0.16.4, and when I issue bin/stop-all.sh I get "no <process> to
stop" for every node. But a quick look at the processes on the system
says otherwise. One thing to note is that they don't show up when I run
"jps" either.

Thanks,

-Xavier
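
One likely reason the daemons don't show up in "jps" even though they are
running: jps normally lists only the JVMs owned by the user who invokes it, so
daemons started by a different user (root via a boot-time cron job, for
example) won't appear. A minimal check that sidesteps this, assuming the
standard Hadoop daemon class names:

    # jps only shows JVMs owned by the invoking user; ps sees everything.
    ps aux | grep -E '(NameNode|SecondaryNameNode|DataNode|JobTracker|TaskTracker)' | grep -v grep

    # Or run jps as the user that started the daemons (root here is just an example):
    sudo jps -l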


  • Miles Osborne at Jun 12, 2008 at 7:01 pm
    (For 0.16.4) I've noticed that stop-all.sh sometimes doesn't work when the
    corresponding start-all.sh was run as a cron job at system boot. Also, if
    your USER differs, it won't work.

    E.g. if start-all.sh was run as root with USER=root, the corresponding
    stop-all.sh must also be run as root; if instead you su to root from a
    non-root account, stop-all.sh will fail. (See the sketch after this message
    for how the stop script locates the processes.)

    Miles

    2008/6/12 Xavier Stevens <Xavier.Stevens@fox.com>:
    Has anyone else had a problem with hadoop not stopping processes? I am
    running 0.16.4 and when I issue bin/stop-all.sh I get "no <process> to
    stop" for every node. But a quick look at the processes on the system
    says otherwise. One thing to note is that they don't show up when I run
    "jps" either.

    Thanks,

    -Xavier


    --
    The University of Edinburgh is a charitable body, registered in Scotland,
    with registration number SC005336.
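
To make the failure mode above concrete: the stop scripts do not search the
process table at all. bin/hadoop-daemon.sh stops a daemon by reading a pid file
whose name includes the identity of the user who started it
(HADOOP_IDENT_STRING, which defaults to $USER), kept under HADOOP_PID_DIR
(default /tmp). If USER differs from the one used at start time, or the pid
files under /tmp have since been removed, the file isn't found and the script
prints "no <process> to stop". A rough sketch of how to check and recover,
based on the 0.16-era scripts; paths and daemon names here are illustrative:

    # The pid files the stop scripts look for are named roughly:
    #   $HADOOP_PID_DIR/hadoop-$HADOOP_IDENT_STRING-<daemon>.pid
    # (HADOOP_PID_DIR defaults to /tmp, HADOOP_IDENT_STRING defaults to $USER).

    # See which pid files exist and which identity they were written under:
    ls -l /tmp/hadoop-*-*.pid

    # Stop with the same identity that started the daemons; 'su -' gives a
    # full login environment, so USER really is root:
    su - root -c '/path/to/hadoop/bin/stop-all.sh'

    # Last resort: kill the stray daemons directly on this node.
    for pid in $(ps aux | grep -E '(NameNode|SecondaryNameNode|DataNode|JobTracker|TaskTracker)' \
                   | grep -v grep | awk '{print $2}'); do
      kill "$pid"
    done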
  • Xavier Stevens at Jun 12, 2008 at 7:10 pm
    I was using su from a different account. Now, when I try to start back up, the NameNode fails to start and I see the stack trace below. Is this a bug?

    2008-06-12 12:04:27,948 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=4000
    2008-06-12 12:04:27,953 INFO org.apache.hadoop.dfs.NameNode: Namenode up at: <namenode name/ip here>
    2008-06-12 12:04:27,957 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
    2008-06-12 12:04:27,959 INFO org.apache.hadoop.dfs.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
    2008-06-12 12:04:28,015 INFO org.apache.hadoop.fs.FSNamesystem: fsOwner=hadoop,hadoop,wheel
    2008-06-12 12:04:28,016 INFO org.apache.hadoop.fs.FSNamesystem: supergroup=hadoop
    2008-06-12 12:04:28,016 INFO org.apache.hadoop.fs.FSNamesystem: isPermissionEnabled=true
    2008-06-12 12:04:29,365 ERROR org.apache.hadoop.dfs.NameNode: java.lang.NullPointerException
    at org.apache.hadoop.dfs.INodeDirectory.getExistingPathINodes(INode.java:408)
    at org.apache.hadoop.dfs.INodeDirectory.getNode(INode.java:357)
    at org.apache.hadoop.dfs.INodeDirectory.getNode(INode.java:365)
    at org.apache.hadoop.dfs.FSDirectory.unprotectedDelete(FSDirectory.java:458)
    at org.apache.hadoop.dfs.FSEditLog.loadFSEdits(FSEditLog.java:537)
    at org.apache.hadoop.dfs.FSImage.loadFSEdits(FSImage.java:756)
    at org.apache.hadoop.dfs.FSImage.loadFSImage(FSImage.java:639)
    at org.apache.hadoop.dfs.FSImage.recoverTransitionRead(FSImage.java:222)
    at org.apache.hadoop.dfs.FSDirectory.loadFSImage(FSDirectory.java:79)
    at org.apache.hadoop.dfs.FSNamesystem.initialize(FSNamesystem.java:254)
    at org.apache.hadoop.dfs.FSNamesystem.<init>(NameNode.java:131)
    at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:162)
    at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:846)
    at org.apache.hadoop.dfs.NameNode.main(NameNode.java:855)
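
The stack trace shows the namenode dying while replaying the edit log
(FSEditLog.loadFSEdits calling FSDirectory.unprotectedDelete), so the files
under dfs.name.dir are the ones involved. Before retrying further restarts it
is worth copying that directory aside so the image and edits files are
preserved; a minimal sketch, where NAME_DIR is an assumption (use whatever
dfs.name.dir is set to in hadoop-site.xml):

    # NAME_DIR is illustrative -- substitute your dfs.name.dir value.
    NAME_DIR=/path/to/dfs/name

    # Keep a copy of the namenode storage before any further restart attempts.
    cp -a "$NAME_DIR" "${NAME_DIR}.bak-$(date +%Y%m%d%H%M)"

    # The files loaded at startup live under current/ (fsimage, edits, fstime, VERSION).
    ls -l "$NAME_DIR/current"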


