Ubuntu Connection issue with Nodes
Hi, I am trying to set up a test cluster in VMware Fusion (Mac OS X).
The cluster nodes use Ubuntu 12.04 and CDH 4. The network devices are
bridged so that I can use my Windows Server 2008 for DNS and DHCP
services. I created DHCP reservations for the MAC addresses of the 4
nodes I want to install. I have 4 hosts; on the first one I installed
Cloudera Manager. Later I chose all four hosts for the cluster services.

The Host Inspector shows me the following errors (Ubuntu and Debian),
see screenshot.
I tried to use /etc/hosts instead of DNS because I thought that might be
where the failure is, but I got the same issues again.

If I continue anyway, the initialisation process stops, for example with
the HDFS service (see screenshot).
I rolled back to a snapshot and then it stopped at the initialisation of
MapReduce, so sometimes it is a different service; it is not always HDFS.
After a while it was possible to bring all services up.

So the cluster was OK. When I execute the command "hadoop fs -ls /" on
the first node, where Cloudera Manager is running, the output is fine.
When I do the same on a data node, the connection to the first node
(which is the namenode) is refused on port 8020. But in Cloudera Manager
all hosts look OK.
Ping and SSH work, but telnet to port 8020 does not.
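
A quick way to narrow this down (a sketch; "node1" below is a
placeholder for the Cloudera Manager / NameNode host):

    # From a data node: is the NameNode RPC port reachable at all?
    ping -c 3 node1            # basic reachability
    nc -vz node1 8020          # TCP connect test against the NameNode RPC port

    # On the NameNode host: which address is port 8020 actually bound to?
    sudo netstat -tlnp | grep 8020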

I am using SSH as root with a password. Everything is standard; there is
no firewall installed or anything else.

Can anyone tell me where the failure is? I have spent hours, but I have
no idea where the problem is.
Is it a name resolution problem? Can anyone tell me how to fix the
inspector issues and the connection problems?

  • Keamas at Jul 6, 2012 at 6:57 am
    This is how it looks when I do the whole thing with Debian...
  • Sumit Maitra at Jul 6, 2012 at 4:00 pm
    Do all the datanodes have the IP address 127.0.0.1? That looks dicey! Try
    adding the hostnames and IP addresses to the /etc/hosts file of each
    machine, so that each VM has the hostname/IP addresses of the other VMs.
    If that works it's a hack; the next step is to figure out why the reverse
    DNS lookup is failing.
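
    For illustration, a minimal /etc/hosts along those lines might look like
    this on every VM (the hostnames and addresses below are hypothetical; use
    each machine's real bridged IP, and make sure a hostname never maps only
    to 127.0.1.1):

        127.0.0.1      localhost
        192.168.1.101  node1.cluster.local  node1
        192.168.1.102  node2.cluster.local  node2
        192.168.1.103  node3.cluster.local  node3
        192.168.1.104  node4.cluster.local  node4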

    Hope this helps,
    Sumit.

  • Asif90988 at Jul 7, 2012 at 6:57 pm
    The same error got resolved for me by opening port 22: nc -l 22 (as root).
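
    In the same spirit, a sketch for testing whether a given port is blocked
    between two machines (the hostname "server1" is a placeholder):

        # on the server, with the real service stopped, listen on the port:
        nc -l 8020
        # on a client:
        nc -vz server1 8020    # "succeeded" means the network path is open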
  • Tobr at Jul 27, 2012 at 12:37 pm
    Same problem on a local installation on an Ubuntu 12.04 system:
    "Datanode denied communication with namenode"

    I installed Cloudera Manager 4 with the installation package.
    Everything works fine.
    I added my local machine with the name zim to the hosts list.
    The inspector completes without any errors:

    All hosts resolved localhost to 127.0.0.1.
    All hosts checked resolved each other's hostnames correctly.

    But when the HDFS service starts, I get the following error:

    2012-07-27 14:07:37,585 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool
    BP-2107899600-127.0.1.1-1343291057154 (storage id
    DS-579888815-127.0.1.1-50010-1343291061912) service to
    zim.local/127.0.1.1:8020 beginning handshake with NN
    2012-07-27 14:07:37,594 FATAL
    org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
    block pool Block pool BP-2107899600-127.0.1.1-1343291057154 (storage id
    DS-579888815-127.0.1.1-50010-1343291061912) service to
    zim.local/127.0.1.1:8020
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
    Datanode denied communication with namenode:
    DatanodeRegistration(127.0.0.1,
    storageID=DS-579888815-127.0.1.1-50010-1343291061912, infoPort=50075,
    ipcPort=50020, storageInfo=lv=-40;cid=cluster8;nsid=1012294301;c=0)
    at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:563)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3089)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:846)

    In my /etc/hosts I have the following entries:

    127.0.0.1 localhost
    127.0.1.1 zim.local zim

    Why does the service try to connect to 127.0.0.1 instead of 127.0.1.1?
    Can I bind the services to a specific IP address instead of the server
    name?

    I also tried to connect with telnet, and there you can see that the
    service is only available on the hostname address, not on localhost:
    tobr@zim:~$ telnet 127.0.1.1 8020
    Trying 127.0.1.1...
    Connected to 127.0.1.1.
    ...
    tobr@zim:~$ telnet 127.0.0.1 8020
    Trying 127.0.0.1...
    telnet: Unable to connect to remote host: Connection refused
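
    One way to confirm this directly (a sketch; netstat from net-tools is
    assumed to be installed) is to list the listening sockets on the host:

        # which address is the NameNode RPC port bound to?
        sudo netstat -tlnp | grep 8020
        # given the telnet results above, this would be expected to show a
        # socket listening on 127.0.1.1:8020 only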

    Tobias
  • Philip Langdale at Jul 27, 2012 at 4:32 pm
    Hi Tobias,

    If your /etc/hosts really looks like that, then I'm afraid your
    configuration isn't correct, and I'm concerned that the tests didn't
    detect it.

    See this discussion for how your /etc/hosts should look:

    https://groups.google.com/a/cloudera.org/d/topic/scm-users/tZ-htopamfg/discussion

    --phil
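
    For reference, the usual shape of the fix discussed there (a sketch;
    192.168.1.10 is a hypothetical address): on Ubuntu/Debian, drop the
    default 127.0.1.1 entry and map the hostname to the machine's real,
    routable IP instead:

        # /etc/hosts before (Ubuntu/Debian default):
        #   127.0.0.1   localhost
        #   127.0.1.1   zim.local zim

        # /etc/hosts after:
        127.0.0.1     localhost
        192.168.1.10  zim.local zim    # the machine's real IP (hypothetical here)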

