cluster does not stop
Hi All,

We set up a 5-node Hadoop [0.20.2] / HBase [0.90.1] cluster. The cluster was
idle for many days [about a week], and today we were unable to stop it; it
looks like the pid files were deleted. My question is: since I pointed the
temp folder at a different directory, how could the pid files have been
deleted? With 5 nodes I can stop the cluster manually by killing the
processes, but on a bigger cluster of more than 100 nodes, killing them
manually is cumbersome. What should one do in that case?
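For the many-node case, one common workaround (a sketch, not from this
thread; the host-list path and process pattern are assumptions) is to loop
the kill over a host list via ssh:

```shell
# Illustrative sketch: with the pid files gone, stop the Hadoop daemons on
# every node in a host list by matching the JVM command line with pkill.
# The 'org.apache.hadoop' pattern and the conf/slaves host file are
# assumptions; adjust them to the actual installation before use.
stop_all() {
  while read -r host; do
    ssh "$host" "pkill -f 'org.apache.hadoop' || true"
  done < "$1"
}
# usage: stop_all conf/slaves
```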


core-site.xml

<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://dpev004.innovate.ibm.com:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/dpeuser/sandbox/hadoop_cluster/temp</value>
</property>
</configuration>


  • Marcos M Rubinelli at Apr 13, 2011 at 9:43 am
    Sumeet,

    To create your pids in another directory, you can set HADOOP_PID_DIR in
    your bin/hadoop-env.sh. There's an open issue about it:
    https://issues.apache.org/jira/browse/HADOOP-6606
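
    A minimal sketch of that change in hadoop-env.sh (the pid directory path
    below is illustrative, not from the thread):

    ```shell
    # bin/hadoop-env.sh (conf/hadoop-env.sh on some layouts): keep daemon pid
    # files out of /tmp so periodic tmp cleaners cannot remove them.
    # The directory shown is an illustrative example.
    export HADOOP_PID_DIR=/home/dpeuser/sandbox/hadoop_cluster/pids
    ```

    The directory has to exist and be writable by the user that starts the
    daemons, on every node.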

    Regards,
    Marcos
    On 13-04-2011 04:19, Sumeet M Nikam wrote:
    [original message quoted in full, trimmed here]
    --
    ------------------------------------------------------------------------
    Marcos Medrado Rubinelli
    Statistics - Comparison Shopping - BuscaPé
  • Sumeet M Nikam at Apr 13, 2011 at 10:34 am
    Hi Marcos,

    Thanks, I will do this. But I am still not clear: if hadoop.tmp.dir points
    to a directory other than the default /tmp, how can the pid files get
    deleted? Are there other threads/processes running that delete inactive
    pid files?

    Regards,
    Sumeet
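
    One possible explanation: hadoop.tmp.dir and the pid location are
    independent settings. In the 0.20.x scripts, bin/hadoop-daemon.sh falls
    back to /tmp when HADOOP_PID_DIR is unset, and periodic tmp cleaners
    (tmpwatch-style cron jobs) can delete files there that have gone untouched
    for days, which matches an idle cluster. A sketch of the fallback (the
    "dpeuser" and "namenode" names are illustrative):

    ```shell
    # Mirrors the fallback in bin/hadoop-daemon.sh: with HADOOP_PID_DIR unset,
    # pid files land in /tmp, where cleaners such as tmpwatch may remove them.
    unset HADOOP_PID_DIR
    HADOOP_PID_DIR=${HADOOP_PID_DIR:-/tmp}
    pidfile="$HADOOP_PID_DIR/hadoop-dpeuser-namenode.pid"
    echo "$pidfile"   # -> /tmp/hadoop-dpeuser-namenode.pid
    ```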




    On 04/13/2011 03:13 PM, Marcos M Rubinelli <marcosm@buscape-inc.com>
    wrote (Re: cluster does not stop):
    [earlier reply and quoted original message trimmed here]

Discussion Overview
group: common-user @ hadoop
posted: Apr 13, '11 at 7:21a
active: Apr 13, '11 at 10:34a
posts: 3
users: 2
website: hadoop.apache.org...
irc: #hadoop