Start-all.sh does not start the mapred or the dfs
Dear All,
I have a 7-node cluster with a master. When I try to start it with
start-all.sh, the processes appear to come up as shown below, and if I
run the command again straight away it reports that they are already
running. However, when I try to shut the cluster down with stop-all.sh,
it says there is no datanode and no jobtracker to stop. It is clear to
me that the processes die, but I am not sure why; I am attaching an
error that I found on one of the slaves (n01). Even using
start-mapred.sh or start-dfs.sh directly does not work. Please advise;
any ideas?
Thanks in advance

[email protected]:~/HadoopStandalone/hadoop-0.21.0/bin$ ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-mapred.sh
namenode running as process 28763. Stop it first.
n02: datanode running as process 21531. Stop it first.
n01: datanode running as process 21480. Stop it first.
n06: datanode running as process 21515. Stop it first.
n03: datanode running as process 21197. Stop it first.
n07: datanode running as process 21554. Stop it first.
n05: datanode running as process 20794. Stop it first.
n04: datanode running as process 21159. Stop it first.
localhost: secondarynamenode running as process 28959. Stop it first.
jobtracker running as process 29055. Stop it first.
n01: tasktracker running as process 21560. Stop it first.
n03: tasktracker running as process 21278. Stop it first.
n02: tasktracker running as process 21613. Stop it first.
n04: tasktracker running as process 21239. Stop it first.
n07: tasktracker running as process 21635. Stop it first.
n05: tasktracker running as process 20875. Stop it first.
n06: tasktracker running as process 21597. Stop it first.
[email protected]:~/HadoopStandalone/hadoop-0.21.0/bin$ ./stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-mapred.sh
stopping namenode
n04: no datanode to stop
n01: no datanode to stop
n02: no datanode to stop
n03: no datanode to stop
n05: no datanode to stop
n06: no datanode to stop
n07: no datanode to stop
localhost: no secondarynamenode to stop
stopping jobtracker
n01: stopping tasktracker
n05: stopping tasktracker
n06: stopping tasktracker
n02: stopping tasktracker
n07: stopping tasktracker
n03: stopping tasktracker
n04: stopping tasktracker
[email protected]:~/HadoopStandalone/hadoop-0.21.0/bin$
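
The "running as process N. Stop it first." and "no datanode to stop"
messages both come from hadoop-daemon.sh's pid-file check, and those pid
files live under HADOOP_PID_DIR, which defaults to /tmp, so stale pid
files are worth ruling out first. A minimal sketch of that check, run
from the bin/ directory on the master; the file name below follows the
hadoop-<user>-<daemon>.pid pattern and is an assumption, not taken from
the post:

ls -l /tmp/hadoop-*.pid                # pid files for daemons started on the master
./slaves.sh ls -l /tmp/hadoop-\*.pid   # the same check on every host in conf/slaves
# Compare each recorded pid against jps; delete any file whose daemon is
# gone, e.g. rm /tmp/hadoop-ahmednagy-datanode.pid (hypothetical name),
# then retry ./start-dfs.sh and ./start-mapred.sh.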



2011-02-05 03:04:33,465 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = n01/192.168.0.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.21.0
STARTUP_MSG: classpath =
/home/ahmednagy/HadoopStandalone/hadoop-0.21.0/bin/../conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/ahmednagy/HadoopStandalone$
STARTUP_MSG: build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.21 -r
985326; compiled by 'tomwhite' on Tue Aug 17 01:02:28 EDT 2010
************************************************************/
2011-02-05 03:04:33,655 WARN org.apache.hadoop.hdfs.server.common.Util: Path
/tmp/mylocal/ should be specified as a URI in configuration files. Please
updat$
2011-02-05 03:04:33,888 INFO org.apache.hadoop.security.Groups: Group
mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping;
cacheTimeout=3000$
2011-02-05 03:04:34,394 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException:
Incompatible namespaceIDs in /tmp/mylocal: namenode name$
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:237)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:152)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:336)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:260)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:237)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1440)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1393)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1407)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1552)

2011-02-05 03:04:34,395 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at n01/192.168.0.1
************************************************************/
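
The "Incompatible namespaceIDs" error above is the classic symptom of a
datanode whose storage directory was written against a namenode that has
since been re-formatted; keeping HDFS storage under /tmp (here
/tmp/mylocal) makes this easy to trigger, since /tmp may be cleaned
between runs. A hedged way to confirm and repair it; /tmp/myname is a
hypothetical dfs.name.dir, the real values come from hdfs-site.xml:

grep namespaceID /tmp/myname/current/VERSION    # on the master: the namenode's ID (hypothetical path)
grep namespaceID /tmp/mylocal/current/VERSION   # on the failing slave (n01): the datanode's ID
# If the two IDs differ, either re-format HDFS (this destroys all HDFS data):
#   bin/hadoop namenode -format
# or, on each affected slave, clear the stale datanode storage and then
# restart HDFS from the master:
#   rm -rf /tmp/mylocal/*
#   ./start-dfs.sh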








ah[email protected]:~/HadoopStandalone/hadoop-0.21.0/logs$ tail
hadoop-ahmednagy-namenode-cannonau.isti.cnr.it.log
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1346)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:742)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1344)
2011-02-05 03:20:53,008 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ahmednagy
ip=/146.48.82.190 cmd=listStatus src=/system/mapred dst=null
perm=null
2011-02-05 03:20:53,021 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions:
1 Total time for transactions(ms): 1Number of transactions batched in Syncs:
0 Number of syncs: 0 SyncTimes(ms): 0
2011-02-05 03:20:53,037 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ahmednagy
ip=/146.48.82.190 cmd=delete src=/system/mapred dst=null
perm=null
2011-02-05 03:20:53,048 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ahmednagy
ip=/146.48.82.190 cmd=mkdirs src=/system/mapred dst=null
perm=ahmednagy:supergroup:rwxr-xr-x
2011-02-05 03:20:53,052 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ahmednagy
ip=/146.48.82.190 cmd=setPermission src=/system/mapred
dst=null perm=ahmednagy:supergroup:rwx------




ah[email protected]:~/HadoopStandalone/hadoop-0.21.0/logs$ tail
hadoop-ahmednagy-tasktracker-n01.log
2011-02-05 03:21:11,919 ERROR org.apache.hadoop.mapred.TaskTracker: Can not
start task tracker because
org.apache.hadoop.util.DiskChecker$DiskErrorException: all local directories
are not writable
at org.apache.hadoop.mapred.TaskTracker.checkLocalDirs(TaskTracker.java:3435)
at org.apache.hadoop.mapred.TaskTracker.initialize(TaskTracker.java:617)
at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:1299)
at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3461)

2011-02-05 03:21:11,922 INFO org.apache.hadoop.mapred.TaskTracker:
SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down TaskTracker at n01/192.168.0.1
************************************************************/
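
The TaskTracker failure is a separate problem: DiskChecker reports that
none of the directories in mapred.local.dir exist and are writable by
the user running the daemon. A hedged check, using /tmp/mapred/local
purely as a placeholder for whatever mapred.local.dir is actually set to
in mapred-site.xml:

./slaves.sh ls -ld /tmp/mapred/local   # placeholder path: does it exist on each slave, and who owns it?
# On any slave where the directory is missing or owned by the wrong user
# (run on that node; may need sudo):
#   mkdir -p /tmp/mapred/local && chown ahmednagy /tmp/mapred/local
# then restart the trackers from the master with ./start-mapred.sh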

--
View this message in context: http://old.nabble.com/Start-all.sh-does-not-start-the-mapred-or-the-dfs-tp30849697p30849697.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.


  • Edson Ramiro at Feb 7, 2011 at 11:43 am
    Hi ahmednagy,

    You can check whether any Java processes are still running using slaves.sh:

    ./slaves.sh jps

    If any are still running, you can kill them with ./slaves.sh pkill -9 java,
    but it is a very dirty solution. :) (see the sketch below)

    --
    Edson Ramiro Lucas Filho
    {skype, twitter, gtalk}: erlfilho
    http://www.inf.ufpr.br/erlf07/
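
    Spelled out, the cleanup above might look like the sketch below, run
    from the hadoop-0.21.0/bin directory on the master; it assumes
    passwordless ssh to every host in conf/slaves and that jps is on the
    PATH on the slaves:

    ./slaves.sh jps             # show which Java daemons are really alive on each slave
    ./slaves.sh pkill -9 java   # last resort: force-kill leftover Java processes on the slaves
    pkill -9 java               # likewise on the master if a stale namenode/jobtracker remains
    ./start-dfs.sh              # then attempt a clean restart
    ./start-mapred.sh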

  • Cavus,M.,Fa. Post Direkt at Feb 7, 2011 at 11:47 am
    Does anyone know of any critical performance and reliability issues
    with Hadoop and HBase?

    Regards
    Musa
