FAQ
Hi.

Any pointer on what could be the problem?

Regards,
Sourav
________________________________________
From: souravm
Sent: Tuesday, September 16, 2008 1:07 AM
To: 'core-user@hadoop.apache.org'
Subject: Re: Need help in hdfs configuration fully distributed way in Mac OSX...

Hi,

I tried the way you suggested. I set up SSH without a password, so now the namenode can connect to the datanode without a password - the start-dfs.sh script no longer asks for one. However, even with this fix I still face the same problem.
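
(For reference, passwordless SSH from the namenode to a datanode is typically set up roughly as follows; the <user>@<machine2 ip> values are placeholders, not values from this thread.)

# On machine 1 (namenode), generate a key pair with an empty passphrase:
ssh-keygen -t rsa -P ""
# Append the public key to the datanode account's authorized_keys:
cat ~/.ssh/id_rsa.pub | ssh <user>@<machine2 ip> 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
# Verify that this no longer prompts for a password:
ssh <user>@<machine2 ip> true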

Regards,
Sourav

----- Original Message -----
From: Mafish Liu <mafish@gmail.com>
To: core-user@hadoop.apache.org <core-user@hadoop.apache.org>
Sent: Mon Sep 15 23:26:10 2008
Subject: Re: Need help in hdfs configuration fully distributed way in Mac OSX...

Hi:
You need to configure your nodes so that node 1 can connect to node 2 without a password.
On Tue, Sep 16, 2008 at 2:04 PM, souravm wrote:

Hi All,

I'm facing a problem configuring HDFS in a fully distributed way on Mac OS X.

Here is the topology -

1. The namenode is in machine 1
2. There is 1 datanode in machine 2

Now when I execute start-dfs.sh from machine 1, it connects to machine 2 (after asking for the password for machine 2) and starts the datanode on machine 2 (according to the console messages).

However -
1. When I go to http://machine1:50070, it does not show the datanode at all; it says 0 datanodes are configured.
2. In the log file on machine 2, what I see is -
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = rc0902b-dhcp169.apple.com/17.229.22.169
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.17.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.17 -r 684969; compiled by 'oom' on Wed Aug 20 22:29:32 UTC 2008
************************************************************/
2008-09-15 18:54:44,626 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /17.229.23.77:9000. Already tried 1 time(s).
2008-09-15 18:54:45,627 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /17.229.23.77:9000. Already tried 2 time(s).
2008-09-15 18:54:46,628 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /17.229.23.77:9000. Already tried 3 time(s).
2008-09-15 18:54:47,629 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /17.229.23.77:9000. Already tried 4 time(s).
2008-09-15 18:54:48,630 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /17.229.23.77:9000. Already tried 5 time(s).
2008-09-15 18:54:49,631 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /17.229.23.77:9000. Already tried 6 time(s).
2008-09-15 18:54:50,632 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /17.229.23.77:9000. Already tried 7 time(s).
2008-09-15 18:54:51,633 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /17.229.23.77:9000. Already tried 8 time(s).
2008-09-15 18:54:52,635 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /17.229.23.77:9000. Already tried 9 time(s).
2008-09-15 18:54:53,640 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /17.229.23.77:9000. Already tried 10 time(s).
2008-09-15 18:54:54,641 INFO org.apache.hadoop.ipc.RPC: Server at /17.229.23.77:9000 not available yet, Zzzzz...

....... and this retrying keeps repeating.
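
(A quick way to narrow this down is to check, on machine 1, which address the namenode's RPC port 9000 is actually bound to; this is a sketch assuming the standard netstat and lsof tools on Mac OS X. A loopback-only address such as 127.0.0.1 means the datanode on machine 2 can never reach the namenode at 17.229.23.77:9000.)

# On machine 1: show listeners on port 9000
netstat -an | grep 9000
# Alternatively (may require sufficient privileges):
lsof -i :9000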


The hadoop-site.xml files are as follows -

1. In machine 1
-
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/Users/souravm/hdpn</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>


2. In machine 2

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://<machine1 ip>:9000</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/Users/nirdosh/hdfsd1</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

The slaves file on machine 1 has a single entry - <user name>@<ip of machine2>

The exact steps I followed -

1. Reformat the namenode on machine 1
2. Execute start-dfs.sh on machine 1
3. Check whether the datanode shows up at http://<machine1 ip>:50070
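
(For reference, the equivalent commands, run from the Hadoop installation directory on machine 1, look roughly like this; dfsadmin -report is a command-line alternative to the :50070 web UI for checking registered datanodes.)

# 1. Reformat the namenode (this erases existing HDFS metadata):
bin/hadoop namenode -format
# 2. Start the namenode locally and the datanodes listed in conf/slaves:
bin/start-dfs.sh
# 3. List the datanodes that have registered with the namenode:
bin/hadoop dfsadmin -report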

Any pointer to resolve this issue would be appreciated.

Regards,
Sourav





--
Mafish@gmail.com
Institute of Computing Technology, Chinese Academy of Sciences, Beijing.


  • Raghu Angadi at Sep 17, 2008 at 3:06 am
    Did you try using the same config file (the one used on machine 2) on all the nodes?

    You can make the configs you have work with more effort, but I don't think that is necessary.

    Raghu.
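
The likely culprit is machine 1's fs.default.name pointing at hdfs://localhost:9000 (see the machine 1 config earlier in the thread): the namenode listens on the address given there, so it is reachable only from machine 1 itself, and the datanode's attempts to connect to 17.229.23.77:9000 keep failing, hence the retry loop in the log. In the spirit of the suggestion above, using machine 1's real IP or hostname in both configs should fix this. A sketch of the corrected machine 1 hadoop-site.xml, with <machine1 ip> as a placeholder just as elsewhere in the thread:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://<machine1 ip>:9000</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/Users/souravm/hdpn</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value><machine1 ip>:9001</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

After restarting HDFS with stop-dfs.sh and start-dfs.sh, the datanode on machine 2 should register, which can be confirmed on the :50070 page or with bin/hadoop dfsadmin -report.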
