FAQ
I have configured the master node as below:

file: /etc/hosts
added -> ipaddress_of_slave_node full_domain_name short_name

I also changed the hostname in the slave node's /etc/sysconfig/network file,
setting HOSTNAME to the full_domain_name mentioned above.

Ports 7180 & 7182 are open on the master node.

I am running CentOS 6.2 64-bit on virtual cloud servers and am trying to
install CDH4 via Cloudera Manager.
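For reference, a minimal sketch of the two files described above; the IP address and names here are made-up placeholders, and the sketch writes to /tmp rather than the real system paths:

```shell
# Made-up placeholder values; substitute your real slave IP and names.
cat > /tmp/hosts.example <<'EOF'
127.0.0.1   localhost.localdomain localhost
10.0.0.11   slave001.slave.com slave001
EOF

# /etc/sysconfig/network on the slave carries the matching FQDN:
cat > /tmp/network.example <<'EOF'
NETWORKING=yes
HOSTNAME=slave001.slave.com
EOF

# Sanity-check that the FQDN appears consistently in both files.
grep -c 'slave001.slave.com' /tmp/hosts.example /tmp/network.example
```

The point of the check is that the FQDN in /etc/hosts and the HOSTNAME value must agree exactly, or `hostname -f` will not return what the agent expects.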

Please help me out.

Mehal


  • Mark Schnegelberger at Nov 17, 2012 at 8:17 pm
    Hi Mehal,

    Could you paste the output of

# cat /etc/cloudera-scm-agent/config.ini | head -6

    If you configured this by hand, ensure that the agent is configured to
    heartbeat to the proper port (7182 by default) and not 7180.
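For illustration, the first lines of an agent config.ini pointed at the right port typically look like the following; the hostname is a made-up placeholder, and `server_host`/`server_port` are the keys to check:

```shell
# Illustrative config.ini fragment; master001.slave.com is a placeholder.
cat > /tmp/config.ini.example <<'EOF'
[General]
# Hostname of the Cloudera Manager server this agent reports to
server_host=master001.slave.com
# Agents heartbeat to 7182; 7180 is the web UI port
server_port=7182
EOF

head -6 /tmp/config.ini.example
```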

    Also, with /etc/hosts and /etc/sysconfig/network config, does

    # hostname -f

    return back the FQDN as expected?

    If all the above looks good, paste a snippet of
    /var/log/cloudera-scm-agent/cloudera-scm-agent.log from the node that
    doesn't heartbeat as intended.

    Regards,

    --
    *Mark Schnegelberger*
  • Mehal Patel at Nov 18, 2012 at 11:48 pm
    Hi Mark,

    The config.ini file has the correct details (port 7182 for the master and
    9000 on the host itself).
    I am also able to see GOOD health in Cloudera Manager for that slave node,
    but now I am getting the error below.

    java.net.ConnectException: connection timed out to http://slave001.slave.com:9000/heartbeat
        at com.ning.http.client.providers.netty.NettyConnectListener.operationComplete(NettyConnectListener.java:100)
        at org.jboss.netty.channel.DefaultChannelFuture.notifyListener(DefaultChannelFuture.java:381)
        at org.jboss.netty.channel.DefaultChannelFuture.notifyListeners(DefaultChannelFuture.java:372)
        at org.jboss.netty.channel.DefaultChannelFuture.setFailure(DefaultChannelFuture.java:334)
        at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processConnectTimeout(NioClientSocketPipelineSink.java:374)
        at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:283)
        at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.jboss.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
    Caused by: java.net.ConnectException: connection timed out
        at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processConnectTimeout(NioClientSocketPipelineSink.java:371)
        ... 6 more

    Any idea on this?



  • Dalia Hassan at Nov 19, 2012 at 8:47 am
    Have you configured your DNS server?

    Sent from my iPad


  • Mehal at Nov 19, 2012 at 8:53 am
    Hi Dalia,

    Probably yes, because I have a cluster of 6 slaves and 1 master node and I am able to see them in Cloudera Manager with GOOD health. The only issue is that the log file on the master node shows the error I mentioned below.

    Let me know if the above makes sense.

    Sent from my iPod


  • Dalia Hassan at Nov 19, 2012 at 1:58 pm
    Are all the services up, Mehal?

    Sent from my iPhone


  • Joey Echeverria at Nov 19, 2012 at 2:23 pm
    Can you make sure you don't have any software firewalls turned on?

    -Joey

    --
    Joey Echeverria
    Principal Solutions Architect
    Cloudera, Inc.
  • Mehal Patel at Nov 20, 2012 at 3:43 pm
    I do have firewalls turned on, but I have opened most of the ports in
    /etc/sysconfig/iptables.

    The thing is, my slave nodes as well as the master node are in the domain
    slave.com, as mentioned in the error.
    So for each slave I have set the hostname in the format
    slave001.slave.com, slave002.slave.com, etc. through the /etc/hosts and
    /etc/sysconfig/network files with the corresponding IP address field.

    I am able to access ports as ipaddress:port from a browser, but not as
    slave001.slave.com:port. Could that be why it is getting the timeout error?

    Appreciate any help on this.

    Mehal
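One way to narrow down whether this is purely a name-resolution problem is a quick check like the following sketch; the slave hostname is an assumption, so substitute a real node:

```shell
# Prints "resolves" if the name is known via /etc/hosts or DNS,
# otherwise points at name resolution as the culprit.
check_name() {
  if getent hosts "$1" >/dev/null; then
    echo "resolves"
  else
    echo "no resolution -- fix /etc/hosts or DNS"
  fi
}

check_name localhost               # known-good control
check_name slave001.slave.com      # placeholder: use your real slave FQDN
# Once the name resolves, confirm the agent port answers by name, e.g.:
#   nc -z -w 5 slave001.slave.com 9000 && echo "port 9000 reachable"
```

If the IP works in a browser but the name does not, the connection path is fine and only resolution is broken, which matches the timeout on http://slave001.slave.com:9000/heartbeat.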
  • Joey Echeverria at Nov 19, 2012 at 5:57 pm
    You can check that DNS is set up correctly with the following commands:

    dig slave001.slave.com
    dig -x <ip for slave001.slave.com>

    If one of those doesn't return the other, then DNS is not configured
    correctly. If you're using /etc/hosts, make sure that it's configured
    like this:

    127.0.0.1 localhost.localdomain
    <ip of slave1> slave001.slave.com slave001
    <ip of slave2> slave002.slave.com slave002
    <...>

    and make sure that the hostname command returns the FQDN:

    $ hostname
    slave001.slave.com

    and not just the short name.
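The forward/reverse check above can be automated roughly like this, shown against localhost as a known-good control; run it with a real slave FQDN instead. Note that getent also consults /etc/hosts, whereas dig queries DNS only:

```shell
# Forward-resolve a name, reverse-resolve the resulting IP, and print
# both so a mismatch is obvious.
name=localhost                     # substitute e.g. slave001.slave.com
ip=$(getent hosts "$name" | awk '{print $1; exit}')
back=$(getent hosts "$ip" | awk '{print $2; exit}')
echo "forward: $name -> $ip"
echo "reverse: $ip -> $back"
```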

    -Joey

    --
    Joey Echeverria
    Principal Solutions Architect
    Cloudera, Inc.
  • Mehal Patel at Nov 20, 2012 at 3:43 pm
    Hi Joey,

    Should the /etc/hosts file you mentioned have details of the master as well
    as all slave nodes on every node in the cluster, or only on the master node?

    And could you please elaborate on the dig commands you mentioned? Did you
    mean that both will return the same results? And do these commands have to
    be issued on the master node only, or can they be issued on the slave nodes too?

    Thanks,
    Mehal
  • Joey Echeverria at Nov 19, 2012 at 6:17 pm
    Yes, the master should be in /etc/hosts as well.

    The two dig commands won't return the exact same results; each should
    basically return the input of the other. Can you run them on one of
    your nodes and send the output?

    -Joey
    On Mon, Nov 19, 2012 at 1:13 PM, Mehal Patel wrote:
    Hi Joey,

    The below mentioned /etc/hosts should have details of master as well as all
    slaves nodes on all the nodes in a cluster or only on the master node ?

    And could you please brief upon dig command u mentioned? Did you mean that
    both will turn out same results ? And does this command have to be issued on
    master node only or can be issued on slave nodes also.

    Thanks,
    Mehal
    On Monday, November 19, 2012, Joey Echeverria wrote:

    You can check that DNS is set up correctly with the following commands:

    dig slave001.slave.com
    dig -x <ip for slave001.slave.com>

    If one of those doesn't return the other, than DNS is not configured
    correctly. If you're using /etc/hosts, make sure that it's configured
    thusly:

    127.0.0.1 localhost.localdomain
    <ip of slave1> slave001.slave.com slave001
    <ip of slave2> slave002.slave.com slave002
    <...>

    and make sure that the hostname command returns the FQDN:

    $ hostname
    slave001.slave.com

    and not just the short name.

    -Joey

    On Mon, Nov 19, 2012 at 11:46 AM, Mehal Patel <mehal01988@gmail.com>
    wrote:
    I do have firewalls turned on, but I have opened most of the ports in
    /etc/sysconfig/iptables.

    The thing is, my slave nodes as well as the master node are in the domain
    slave.com, as mentioned in the error. So for each slave I have set the
    hostname in the format slave001.slave.com, slave002.slave.com, etc. through
    the /etc/hosts and /etc/sysconfig/network files, with the corresponding IP
    address field.

    I am able to access ports as ipaddress:port from a browser, but not as
    slave001.slave.com:port. Could that be why it is getting the
    timeout error?

    Appreciate any help on this.

    Mehal
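    Since Mehal mentions opening ports in /etc/sysconfig/iptables, whether a
    given port is actually accepted can be grepped out of the saved ruleset.
    A sketch, assuming iptables-save-style rules (the sample rules in the test
    are illustrative, not taken from this cluster):

    ```shell
    #!/bin/sh
    # port_open RULES PORT: succeed when RULES (iptables-save output)
    # contains a tcp ACCEPT rule for the given destination port.
    port_open() {
      printf '%s\n' "$1" | grep -Eq -- "--dport $2( |$).*-j ACCEPT"
    }

    # On a live node:  port_open "$(iptables-save)" 7182 && echo "7182 open"
    ```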

    On Monday, November 19, 2012, Joey Echeverria wrote:

    Can you make sure you don't have any software firewalls turned on?

    -Joey

    On Mon, Nov 19, 2012 at 8:58 AM, Dalia Hassan <daliahassan0@gmail.com>
    wrote:
    Are all the services up, Mehal?

    Sent from my iPhone

    On 2012-11-19, at 10:55 AM, Mehal wrote:

    Hi Dalia,

    Probably yes, because I have a cluster of 6 slaves and 1 master node, and
    I am able to see them in Cloudera Manager with GOOD health. The only issue
    is that the log file on the master node shows the error I mentioned below.

    Let me know if the above makes sense.

    Sent from my iPod

    On 19-Nov-2012, at 0:47, Dalia Hassan wrote:

    Have you configured your dns server??



    --
    Joey Echeverria
    Principal Solutions Architect
    Cloudera, Inc.
  • Mehal Patel at Nov 20, 2012 at 3:43 pm
    Thanks for the clarification. I shall run the commands and send you the
    output.

    Regards,
    Mehal
  • Mehal Patel at Nov 19, 2012 at 4:38 pm
    Yes. I have 5 services running, namely hdfs, hive, mapreduce, oozie, and
    zookeeper. All are in good health. Moreover, the master and slave nodes are
    communicating. I confirmed this by doing a hard shutdown on one of the
    slave nodes, and it was reflected in Cloudera Manager. But I am still
    getting this error in the log file every now and then.

    Could you please let me know how I can check whether my reverse DNS is set
    up correctly?
    Mehal
    On Monday, November 19, 2012, Dalia Hassan wrote:

    Are all the services up, Mehal?

    On Nov 19, 2012, at 1:48 AM, Mehal Patel wrote:

    Hi Mark,

    config.ini file has correct details ( 7182 port for master and 9000 on
    host itself ).
    I am also able to see GOOD health in cloudera manager for that slave node
    but now i am getting this below error.

    java.net.ConnectException: connection timed out to
    http://slave001.slave.com:9000/heartbeat
        at com.ning.http.client.providers.netty.NettyConnectListener.operationComplete(NettyConnectListener.java:100)
        at org.jboss.netty.channel.DefaultChannelFuture.notifyListener(DefaultChannelFuture.java:381)
        at org.jboss.netty.channel.DefaultChannelFuture.notifyListeners(DefaultChannelFuture.java:372)
        at org.jboss.netty.channel.DefaultChannelFuture.setFailure(DefaultChannelFuture.java:334)
        at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processConnectTimeout(NioClientSocketPipelineSink.java:374)
        at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:283)
        at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.jboss.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
    Caused by: java.net.ConnectException: connection timed out
        at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processConnectTimeout(NioClientSocketPipelineSink.java:371)
        ... 6 more

    Any idea on this?

  • Mehal Patel at Nov 20, 2012 at 3:43 pm
    Hi Mark,

    The agent (slave node) is sending heartbeats on port 7182 only, as per the
    config.ini file.

    And the hostname -f command returns this:
    03246-1-1434656

    In the master node's log file
    (/var/log/cloudera-scm-server/cloudera-scm-server.log) I see this error:

    org.apache.avro.AvroRuntimeException: Unknown datum type:
    java.lang.IllegalArgumentException: Hostid is invalid (must be like a
    hostname): 03246-1-1434656

    And in the slave node's log file
    (/var/log/cloudera-scm-agent/cloudera-scm-agent.log) I get:

    [17/Nov/2012 04:26:33 +0000] 14486 MainThread agent ERROR
    Heartbeating to xxx.xxx.127.101:7182 failed.
    Traceback (most recent call last):

    I am not sure whether this is related to a DNS resolution error. Any idea?

    Thanks,
    Mehal
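    The server's complaint that 03246-1-1434656 "must be like a hostname" can
    be reproduced locally with a rough shape check on what hostname -f prints.
    A sketch: the regex below (dot-separated labels of letters, digits and
    hyphens, at least two labels) is an assumption, not Cloudera Manager's
    actual validation rule:

    ```shell
    #!/bin/sh
    # is_fqdn NAME: succeed only when NAME looks like a fully qualified
    # hostname -- dot-separated labels of letters, digits and hyphens
    # (no leading or trailing hyphen in a label), with at least two labels.
    is_fqdn() {
      printf '%s' "$1" |
        grep -Eq '^([A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?\.)+[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?$'
    }

    # e.g.  is_fqdn "$(hostname -f)" || echo "hostname -f is not an FQDN"
    ```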



Discussion Overview
group: scm-users
categories: hadoop
posted: Nov 17, '12 at 8:04a
active: Nov 20, '12 at 3:43p
posts: 14
users: 4
website: cloudera.com
irc: #hadoop
