FAQ
Hello Cloudera Manager users,

I am new to Cloudera Manager and to Hadoop in general. I installed CDH 4.4
with Cloudera Manager using the Path A installation method, and I am facing
BAD HEALTH issues, mainly due to connectivity problems between nodes. I have
tried this three times but failed to solve the problem. I request the experts
on this forum to help me get through it. Your help is really appreciated.

I have CentOS 6 (64-bit) running on 4 VMs.
The cluster configuration I wanted is 1 master node (NameNode & Secondary
NameNode), which also serves as a DataNode, plus 3 more DataNodes on separate VMs.

Before installing Cloudera Manager, below are the network settings I made.

*Step 1:*
All my hosts have a user with the same name, "hadoop". For the sake of
passwordless sudo for Cloudera Manager, I added ALL ALL=(ALL) NOPASSWD:
ALL to /etc/sudoers on all my nodes for the time being. The idea is to revert
this after the CDH cluster installation.
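
Since this rule is meant to be reverted later, a drop-in file may be easier to manage than editing /etc/sudoers directly, and it can be scoped to just the "hadoop" user instead of everyone. A sketch (assuming CentOS 6's stock sudoers, which already has the "#includedir /etc/sudoers.d" directive):

```shell
# Sketch: create a revertable, user-scoped sudoers drop-in on each node.
# Assumes the "hadoop" user from Step 1; validate with visudo before relying on it.
echo 'hadoop ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/hadoop
chmod 0440 /etc/sudoers.d/hadoop
visudo -c    # syntax-check all sudoers files
# To revert after installation: rm /etc/sudoers.d/hadoop
```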

*Step 2:*
In /etc/sysconfig/network I set HOSTNAME=master.com / slave1.com /
slave2.com / slave3.com on the corresponding VM instances.

*Step 3:*
In /etc/hosts I added the host names of every node, like this:

192.168.1.1 master.com master
192.168.1.2 slave1.com slave1
192.168.1.3 slave2.com slave2
192.168.1.4 slave3.com slave3
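
A quick way to verify Steps 2-3 on each node (a host whose own name resolves to 127.0.0.1 is a classic cause of DataNode-to-NameNode connection failures):

```shell
# Each name should resolve to its 192.168.1.x entry from /etc/hosts above.
getent hosts master.com slave1.com slave2.com slave3.com
hostname -f                      # this node's fully qualified name
getent hosts "$(hostname -f)"    # must NOT resolve to 127.0.0.1 or ::1
```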

*Step 4:*
Generated an SSH key ( ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa ) and made
the public key of each node available to all nodes in the authorized_keys file.
Note: after the above step, I am able to SSH from/to any node without a
password from a terminal.
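
One way to do the key distribution in Step 4, assuming the "hadoop" user and the short host names from /etc/hosts (ssh-copy-id ships with openssh-clients on CentOS 6):

```shell
# Run once on each node; appends this node's public key to every node's
# ~/.ssh/authorized_keys (prompts for the password one last time per host).
for host in master slave1 slave2 slave3; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@"$host"
done
```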

*Step 5:*
Installed CDH 4.4 successfully using Cloudera Manager, via the Path A
installation method with the default embedded PostgreSQL database.

But after installation, on the home page of my Cloudera Manager web console I
could see HDFS, MapReduce1, HBase, and a few other services showing bad health.
When I clicked to see the details, I could see the reason is connectivity
between nodes: the DataNodes are not able to connect to the NameNode.

Attaching a screenshot for your reference.

<https://lh3.googleusercontent.com/-zo3kFNJGBO0/UjTJ-DRSaUI/AAAAAAAABb8/7u03mdNp-LE/s1600/CDH+cluster_BadHealth.jpg>

This is the server log:

Caused by: java.net.NoRouteToHostException: No route to host
  at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
  at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
  at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:207)
  at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:528)
  at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:492)
  at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:509)
  at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:603)
  at org.apache.hadoop.ipc.Client$Connection.access$2100(Client.java:252)
  at org.apache.hadoop.ipc.Client.getConnection(Client.java:1290)
  at org.apache.hadoop.ipc.Client.call(Client.java:1208)


and also


Caused by: java.lang.NullPointerException
  at org.apache.hadoop.hdfs.server.datanode.DataNode.getVolumeInfo(DataNode.java:2344)
  at sun.reflect.GeneratedMethodAccessor80.invoke(Unknown Source)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)
  at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:167)
  at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:96)
  at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:33)
  at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:208)
  at com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:65)
  at com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:216)
  at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:666)


Could anybody please help me?


Thanks in advance






To unsubscribe from this group and stop receiving emails from it, send an email to scm-users+unsubscribe@cloudera.org.

Replies

  • DevP at Sep 14, 2013 at 9:05 pm
    Just want to add a few more steps I did before installing Cloudera Manager.

    Disabled SELinux on all my hosts,
    and opened the ports listed in the Ports Configuration for Cloudera
    Manager documentation <http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/latest/Cloudera-Manager-Installation-Guide/cmig_config_ports.html>
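
    With the ports open, a quick reachability test from any slave host can confirm the NameNode RPC port is actually accepting connections (8020 is the CDH4 default; adjust if this cluster uses a different port):

```shell
# Run from a slave node. "master.com" and port 8020 (NameNode RPC) are
# assumed from this cluster's /etc/hosts and the CDH4 defaults.
nc -zv master.com 8020 || echo "NameNode port unreachable: firewall or service down"
```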

    Thanks
    DevP
  • DevP at Sep 14, 2013 at 9:29 pm
    Attaching TCP netstat output from my master node for your reference.


    Please let me know if you need any other information.

    Thanks
    DevP

  • DevP at Sep 14, 2013 at 10:20 pm
    I realized later that IPv6 was enabled on my hosts, so I have now
    disabled it.

    Just to confirm the above step: after rebooting all nodes,
    lsmod | grep ipv6 returns nothing on all nodes.
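
    For reference, the usual CentOS 6 recipe for this (and roughly what the lsmod check above verifies) looks like the following; the drop-in file name is arbitrary:

```shell
# Blacklist the ipv6 module and disable IPv6 networking on CentOS 6,
# then reboot; afterwards "lsmod | grep ipv6" should print nothing.
echo "options ipv6 disable=1" > /etc/modprobe.d/disable-ipv6.conf
echo "NETWORKING_IPV6=no" >> /etc/sysconfig/network
chkconfig ip6tables off
```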


    But still no luck. Here is the log:

    java.util.concurrent.ExecutionException: org.apache.avro.AvroRemoteException: java.net.ConnectException: Connection refused

    Caused by: org.apache.avro.AvroRemoteException: java.net.ConnectException: Connection refused
      at org.apache.avro.ipc.specific.SpecificRequestor.invoke(SpecificRequestor.java:88)
      at $Proxy59.queryMultiTimeSeries(Unknown Source)
      at com.cloudera.cmon.TimeoutNozzleIPC.queryMultiTimeSeries(TimeoutNozzleIPC.java:371)
      at com.cloudera.server.cmf.tsquery.TimeSeriesMultiRequest.call(TimeSeriesMultiRequest.java:63)
      at com.cloudera.server.cmf.tsquery.TimeSeriesMultiRequest.call(TimeSeriesMultiRequest.java:18)
      at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
      at java.util.concurrent.FutureTask.run(FutureTask.java:138)
      at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
      at java.lang.Thread.run(Thread.java:662)
    Caused by: java.net.ConnectException: Connection refused
      at java.net.PlainSocketImpl.socketConnect(Native Method)
      at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)

  • Marco Shaw at Sep 14, 2013 at 11:11 pm
    You mention VMs... Do you know if your host is properly configured? Have you done host configs before, and do you know that you have properly set up the network virtualization layer?
  • DevP at Sep 15, 2013 at 5:07 am
    Well, first of all, thanks a lot for your quick reply.

    But I am not sure what setting up the virtualization layer means.
    Is that something we enable in the host OS BIOS settings on which all
    these VMs were installed? If yes, then yes, I have done that.

    If not, please let me know how I can do the virtualization layer settings.
    How can I make sure the virtualization layer is properly configured?

    Also, I have listed all my steps to configure the hosts in my very first
    post. If something is missing, how can I make sure it is configured
    properly?

    Just repeating the steps in this post, renumbered, to make them clear.

    *Step 1:*
    All my hosts have a user with the same name, "hadoop". For the sake of
    passwordless sudo for Cloudera Manager, I added ALL ALL=(ALL) NOPASSWD:
    ALL to /etc/sudoers on all my nodes for the time being. The idea is to
    revert this after the CDH cluster installation.

    *Step 2:*
    In /etc/sysconfig/network I set HOSTNAME=master.com / slave1.com /
    slave2.com / slave3.com on the corresponding VM instances.

    *Step 3:*
    In /etc/hosts I added the host names of every node:

    192.168.1.1 master.com master
    192.168.1.2 slave1.com slave1
    192.168.1.3 slave2.com slave2
    192.168.1.4 slave3.com slave3

    *Step 4:*
    Generated an SSH key ( ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa ) and made
    the public key of each node available to all nodes in the authorized_keys
    file. Note: after the above step, I am able to SSH from/to any node
    without a password from a terminal.

    *Step 5:*
    Disabled SELinux on all my hosts.

    *Step 6:*
    Opened the ports listed in the Ports Configuration for Cloudera Manager
    documentation <http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/latest/Cloudera-Manager-Installation-Guide/cmig_config_ports.html>.

    *Step 7:*
    Disabled IPv6 on all my VM hosts.

    *Step 8:*
    Installed CDH 4.4 successfully using Cloudera Manager, via the Path A
    installation method with the default embedded PostgreSQL database.

    Note:
    *I am able to connect to each node without a password using SSH.*
    *I am able to run any command on all my nodes with sudo, without a password.*

    Please let me know if any of the above steps are wrong, or if I missed any.

    Thanks,
    DevP



  • Marco Shaw at Sep 15, 2013 at 5:56 am
    Sorry, I scanned your message too quickly; it looks like you have the networking done properly.


    Marco

  • DevP at Sep 15, 2013 at 6:50 am
    No problem.. :)

    Could you please let me know what the issue could be? I have been stuck on
    this for the past 15 days and need help.
  • Vipul vikram at Sep 15, 2013 at 8:14 pm
    Try disabling iptables; it should work.
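
    On CentOS 6 that would be, on every node (fine for a lab cluster; on anything shared, prefer opening the required Cloudera Manager ports instead of dropping the firewall entirely):

```shell
# Stop iptables now and keep it off across reboots.
service iptables stop
chkconfig iptables off
service iptables status    # should report that the firewall is not running
```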



    --
    Vikram Vipul
    Software engg.
    Ph +91 998607 4003
  • DevP at Sep 16, 2013 at 7:43 am
    Excellent! Perfect coincidence. I too tried disabling iptables last night,
    and my bad health problem caused by connectivity among nodes is solved.

    Thanks a ton, Vipul, for the very precise solution. Cheers, and thanks
    everyone.
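
    [Editorial aside: a quick way to confirm that node-to-node ports really are reachable after a firewall change is a small TCP check like the sketch below. The hostname `master.com` and the ports 8020 (NameNode IPC) and 50090 (SecondaryNameNode web UI) are the CDH4 defaults used in this thread; adjust them for your own cluster.]

    ```python
    import socket

    def port_open(host, port, timeout=2.0):
        """Return True if a TCP connection to host:port succeeds within timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # CDH4 defaults assumed: NameNode IPC on 8020, SecondaryNameNode UI on 50090.
    for host, port in [("master.com", 8020), ("master.com", 50090)]:
        status = "open" if port_open(host, port) else "blocked/unreachable"
        print("%s:%d %s" % (host, port, status))
    ```

    Run it from each DataNode host; a "blocked/unreachable" result for a port that the service is listening on points at firewall rules or DNS/hosts-file problems rather than Hadoop itself.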

    But... my slave nodes are showing good health now, yet I have no clue why
    the master, which runs processes like the NameNode, SecondaryNameNode, and
    one DataNode, is in *concerning health.*

    Reasons for concerning health shown on the *hdfs1 process page of the
    Cloudera Manager web console*:

    *The active NameNode's health was concerning* - on a pale yellow
    background.

    *Test disabled because HDFS was not configured with a secondary name node.
    Test of whether there is a running, healthy, standby NameNode* - on a gray
    background.

    But my SecondaryNameNode is up and running; I am able to access
    http://master:50090/logs/ and see its working status.

    *All the HDFS process pages that run on the master node (NN, sec NN, DN)
    show:*

    *The health of this role's host is concerning. The following health
    checks are concerning: swapping* - on a pale yellow background.

    also

    Test disabled while Quorum-based storage is not in use. The test of whether
    the JournalNodes are in sync with the NameNode - on a light gray background.

    I have no jobs and no files stored in HDFS. *Please see the attached
    screenshot of my master node page in Cloudera Manager.*

    Please let me know what could be the problem.

  • Vipul vikram at Sep 16, 2013 at 8:49 am
    That is a hardware problem: the physical memory of your master is too small
    to handle that many roles. Use a machine with 8 GB RAM and a dual-core
    processor and that won't happen.
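
    [Editorial note on the "swapping" health check: besides adding RAM, Cloudera's general guidance for cluster hosts is to lower the kernel's swappiness so that daemons such as the NameNode are not paged out under memory pressure. A sketch is below; the paths and sysctl name are standard Linux, but verify them on your own distribution.]

    ```shell
    # Show the current swappiness (0-100; the RHEL/CentOS default is 60)
    cat /proc/sys/vm/swappiness

    # Lower it so the kernel avoids swapping out daemon memory (requires root)
    sudo sysctl -w vm.swappiness=0

    # Persist the setting across reboots
    echo "vm.swappiness = 0" | sudo tee -a /etc/sysctl.conf
    ```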

  • DevP at Sep 16, 2013 at 9:07 am
    Oh, is it? OK, sure, I will take care of that.

    This is a fantastic group. Thanks, Vipul, and all the others.

    Cheers

Discussion Overview
group: scm-users
categories: hadoop
posted: Sep 14, '13 at 8:56p
active: Sep 16, '13 at 9:07a
posts: 12
users: 3
website: cloudera.com
irc: #hadoop

3 users in discussion: DevP (8 posts), Marco Shaw (2 posts), Vipul vikram (2 posts)
