How to configure ganglia monitoring on CM4
Hi everyone,
I'm trying to monitor HDFS with Ganglia, which isn't included in the free
edition of CM4. The question is, where do I modify the configuration settings
on the Hadoop nodes so they send metrics to the Ganglia monitoring server?
I can't find the page in the UI for editing the hadoop-metrics.properties
configuration.


  • Vinithra Varadharajan at Aug 6, 2012 at 6:11 pm
    Hi Koh,

    If you are using CDH4, you can configure Ganglia monitoring for HDFS via
    hadoop-metrics2.properties. CM4 currently provides a safety valve for this
    under the HDFS service: go to the Configuration tab of your HDFS service
    and search for "Hadoop Metrics2". Then add the Ganglia key-value entries
    for the file in the safety valve box. When you restart the roles, the
    hadoop-metrics2.properties file will be generated and distributed to the
    HDFS roles.

    -Vinithra
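
    [Editor's note: for reference, the safety-valve entries for a Ganglia
    3.1-or-newer setup typically look like the following sketch. The host name
    and port are placeholders; adjust them to your own gmond setup, and check
    the metrics2 documentation for your Hadoop version for the full set of
    sink options.]

    ```properties
    # Publish HDFS metrics to Ganglia every 10 seconds. GangliaSink31 speaks
    # the ganglia 3.1.0+ wire format; older gmonds need GangliaSink30.
    *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
    *.sink.ganglia.period=10
    namenode.sink.ganglia.servers=gmond-host.example.com:8649
    datanode.sink.ganglia.servers=gmond-host.example.com:8649
    secondarynamenode.sink.ganglia.servers=gmond-host.example.com:8649
    ```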
  • VicSanchez at Sep 24, 2012 at 7:51 am
    Hi guys!

    I looked for the hadoop-metrics2.properties safety valve, but no luck.

    I'm running CM4, and under the HDFS service tab I only get the following
    properties for the NN, DN and Secondary NN.

    Hadoop Metrics Class
      org.apache.hadoop.metrics.spi.NoEmitMetricsContext
      Implementation daemons will use to report some internal statistics. The
      default (NoEmitMetricsContext) will display metrics on /metrics on the
      status port. The GangliaContext and GangliaContext31 classes will report
      metrics to your specified Ganglia Monitoring Daemons (gmond). The ganglia
      wire format changed incompatibly at version 3.1.0. If you are running any
      version of ganglia 3.1.0 or newer, use the GangliaContext31 metric class;
      otherwise, use the GangliaContext metric class.

    Hadoop Metrics Output Directory
      /tmp/metrics
      If using FileContext, directory to write metrics to.

    Hadoop Metrics Ganglia Servers
      (default value is empty)
      If using GangliaContext, a comma-delimited list of host:port pairs
      pointing to 'gmond' servers you would like to publish metrics to. In
      practice, this set of 'gmond' should match the set of 'gmond' in your
      'gmetad' datasource list for the cluster.



    Can anyone point me to the right place to find that configuration file?
    I'm able to run Ganglia, but no HDFS metrics get pushed to it: no NN or DN
    metrics are available, only CPU, RAM and so on.

    Thanks in advance for your help!

    /Victor
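
    [Editor's note: the "Hadoop Metrics Class" property Victor describes
    belongs to the older metrics1 system, which is configured through
    hadoop-metrics.properties rather than the metrics2 safety valve. For that
    system, a Ganglia 3.1+ setup would look roughly like this sketch; the host
    and port are placeholders, and each metrics context (dfs, mapred, jvm, ...)
    is configured separately.]

    ```properties
    # metrics1-style ganglia config (hadoop-metrics.properties) for the dfs
    # context; GangliaContext31 matches the ganglia 3.1.0+ wire format
    # described in the property help text above
    dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
    dfs.period=10
    dfs.servers=gmond-host.example.com:8649
    ```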
  • Vinithra Varadharajan at Sep 24, 2012 at 6:54 pm
    Victor,

    It looks like you have set up a CDH3 cluster using CM; that is why you
    can't see the Hadoop Metrics2 safety valve. If you set up a CDH4 cluster,
    you will see this config.

    -Vinithra
  • Victor Sanchez at Sep 25, 2012 at 7:06 am
    I believe there is something wrong with the installation. I ran the host inspector from Cloudera Manager and these are the versions I get.

    Component                              Version
    MapReduce 2 (CDH4 only)                2.0.0+91
    HDFS (CDH4 only)                       2.0.0+91
    Hue Plugins                            2.0.0+59
    HBase                                  0.92.1+67
    Oozie                                  3.1.3+155
    Yarn (CDH4 only)                       2.0.0+91
    Zookeeper                              3.4.3+15
    Hue                                    2.0.0+59
    MapReduce 1 (CDH4 only)                0.20.2+1216
    HttpFS (CDH4 only)                     2.0.0+91
    Hadoop                                 2.0.0+91
    Hive                                   0.8.1+61
    Cloudera Manager Management Daemons    4.0.2
    Cloudera Manager Agent                 4.0.2


    As far as I understand I installed CDH4, but those options still don't show up.
    Here is a list of the installed packages.
    # rpm -qa | grep cloudera
    cloudera-manager-repository-4.0-1.noarch
    cloudera-manager-server-4.0.2-1.cm402.p0.26.x86_64
    cloudera-manager-server-db-4.0.2-1.cm402.p0.26.x86_64
    cloudera-manager-agent-4.0.2-1.cm402.p0.26.x86_64
    cloudera-manager-daemons-4.0.2-1.cm402.p0.26.x86_64

    # rpm -qa | grep cdh
    hue-filebrowser-2.0.0+59-1.cdh4.0.1.p0.1.el6.x86_64
    sqoop-1.4.1+28-1.cdh4.0.1.p0.1.el6.noarch
    hue-useradmin-2.0.0+59-1.cdh4.0.1.p0.1.el6.x86_64
    bigtop-tomcat-0.4+300-1.cdh4.0.1.p0.1.el6.noarch
    hadoop-2.0.0+91-1.cdh4.0.1.p0.1.el6.x86_64
    hadoop-mapreduce-2.0.0+91-1.cdh4.0.1.p0.1.el6.x86_64
    hive-0.8.1+61-1.cdh4.0.1.p0.1.el6.noarch
    hue-plugins-2.0.0+59-1.cdh4.0.1.p0.1.el6.x86_64
    hue-proxy-2.0.0+59-1.cdh4.0.1.p0.1.el6.x86_64
    bigtop-jsvc-0.4+300-1.cdh4.0.1.p0.1.el6.x86_64
    hadoop-hdfs-2.0.0+91-1.cdh4.0.1.p0.1.el6.x86_64
    hadoop-yarn-2.0.0+91-1.cdh4.0.1.p0.1.el6.x86_64
    hadoop-0.20-mapreduce-0.20.2+1216-1.cdh4.0.1.p0.1.el6.x86_64
    hbase-0.92.1+67-1.cdh4.0.1.p0.1.el6.noarch
    hue-help-2.0.0+59-1.cdh4.0.1.p0.1.el6.x86_64
    hue-jobbrowser-2.0.0+59-1.cdh4.0.1.p0.1.el6.x86_64
    hue-beeswax-2.0.0+59-1.cdh4.0.1.p0.1.el6.x86_64
    hue-shell-2.0.0+59-1.cdh4.0.1.p0.1.el6.x86_64
    zookeeper-3.4.3+15-1.cdh4.0.1.p0.1.el6.noarch
    oozie-client-3.1.3+155-1.cdh4.0.1.p0.1.el6.noarch
    pig-0.9.2+26-1.cdh4.0.1.p0.1.el6.noarch
    hue-common-2.0.0+59-1.cdh4.0.1.p0.1.el6.x86_64
    hue-about-2.0.0+59-1.cdh4.0.1.p0.1.el6.x86_64
    hue-jobsub-2.0.0+59-1.cdh4.0.1.p0.1.el6.x86_64
    hue-2.0.0+59-1.cdh4.0.1.p0.1.el6.x86_64
    bigtop-utils-0.4+300-1.cdh4.0.1.p0.1.el6.noarch
    hadoop-httpfs-2.0.0+91-1.cdh4.0.1.p0.1.el6.x86_64
    hadoop-client-2.0.0+91-1.cdh4.0.1.p0.1.el6.x86_64
    oozie-3.1.3+155-1.cdh4.0.1.p0.1.el6.noarch


    /V





    Victor Sanchez

    Database Architect

    Net Entertainment NE AB, Luntmakargatan 18, SE-111 37, Stockholm, SE
    T: +46 709 124 656, M: +46 709 124 656, F: +46 8 578 545 10
    [email protected] www.netent.com

    Better Games



  • Philip Langdale at Sep 26, 2012 at 4:44 pm
    Hi Victor,

    You've installed CDH4, but when you set up the cluster in CM you
    independently choose what version the cluster should be. This is reflected
    in the default name of the cluster, "Cluster 1 - CDH4" or similar. If you
    accidentally created it as a CDH3 cluster, that would explain your
    symptoms, but it would also prevent the services from running against the
    CDH4 packages, which is not something you've noted, so it might be
    something else. Which exact version of CM are you using?

    --phil


  • Victor Sanchez at Sep 27, 2012 at 8:06 am
    Hi Phil,

    I have been documenting what I did when setting up Hadoop from the beginning. I'm pretty sure I installed CM4 and deployed the cluster with CDH4. For the name I got "Cluster 1 - CDH4", which I renamed to "Dev_Cluster - CDH4".

    I believe I’m using CM 4.0.2

    Here is what I get when looking for CM packages installed on my NN.
    $ rpm -qa | grep cm
    cloudera-manager-server-4.0.2-1.cm402.p0.26.x86_64
    cloudera-manager-server-db-4.0.2-1.cm402.p0.26.x86_64
    lcms-libs-1.19-1.el6.x86_64
    cloudera-manager-agent-4.0.2-1.cm402.p0.26.x86_64
    cloudera-manager-daemons-4.0.2-1.cm402.p0.26.x86_64

    I have done everything using the cloudera manager installer, which I took from:

    $ wget http://archive.cloudera.com/cm4/installer/latest/cloudera-manager-installer.bin
    $ ls
    -rwxrw-r--. 1 cloud cloud 493K Jun 25 20:59 cloudera-manager-installer.bin*

    Everything else seems to be working fine, such as HDFS, Hive and HBase, so I'm not really sure why I don't get the additional configuration options. I'm even able to see the JMX metrics exposed at NN:50070/jmx, but still nothing when I try to configure ganglia and hadoop. Any hint on where else to check would be much appreciated.

    /Victor
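
    [Editor's note: the NN:50070/jmx servlet Victor mentions serves JSON,
    which is an easy way to confirm the NameNode is exporting the counters a
    Ganglia sink would publish. A minimal sketch of picking a value out of
    that JSON; the payload below is a hand-made illustrative fragment shaped
    like the response, not real NameNode output.]

    ```python
    import json

    # Hand-made fragment shaped like a /jmx response (values are made up)
    sample = """
    {"beans": [
      {"name": "Hadoop:service=NameNode,name=FSNamesystem",
       "CapacityTotal": 1000000, "CapacityUsed": 250000}
    ]}
    """

    beans = json.loads(sample)["beans"]
    # FSNamesystem carries the capacity/usage counters Ganglia would chart
    fsns = next(b for b in beans if b["name"].endswith("name=FSNamesystem"))
    print(fsns["CapacityUsed"])
    ```

    Against a live cluster the same parsing applies to the body fetched from
    http://namenode:50070/jmx.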

  • Philip Langdale at Sep 27, 2012 at 6:04 pm
    Ah, you're running 4.0.2. The metrics2 safety valve was added in 4.0.3,
    and as 4.0.4 is already out, you should upgrade to that. After that's
    done, you'll be able to use this feature.

    --phil


    On 27 September 2012 01:06, Victor Sanchez wrote:

    Hi Phil,****

    ** **

    I have been documenting what I did for setting up Hadoop from the
    beginning. I’m pretty sure I installed CM4 and deployed the cluster with
    CDH4. In the name I got Cluster 1- CDH4 which I renamed to Dev_Cluster –
    CDH4.****

    ** **

    I believe I’m using CM 4.0.2****

    ** **

    Here is what I get when looking for CM packages installed on my NN.****

    $ rpm -qa | grep cm****

    cloudera-manager-server-4.0.2-1.cm402.p0.26.x86_64****

    cloudera-manager-server-db-4.0.2-1.cm402.p0.26.x86_64****

    lcms-libs-1.19-1.el6.x86_64****

    cloudera-manager-agent-4.0.2-1.cm402.p0.26.x86_64****

    cloudera-manager-daemons-4.0.2-1.cm402.p0.26.x86_64****

    ** **

    I have done everything using the cloudera manager installer, which I took
    from:****

    ** **

    *$ wget **
    http://archive.cloudera.com/cm4/installer/latest/cloudera-manager-installer.bin
    ***

    *$ ls *

    *-rwxrw-r--. 1 cloud cloud 493K Jun 25 20:59
    cloudera-manager-installer.bin**

    ** **

    Everything else seems to be working fine sucha as HDFS, HIVE, HBASE. So
    not really sure why I don’t get the additional configuration options. I’m
    even able to see the JMX metrics exposed from NN:50070/JMX but still
    nothing when I try to configure ganglia and hadoop. Any hint on where else
    to check will be much appreciated.****

    ** **

    /Victor****

    ** **

    *From:* Philip Langdale
    *Sent:* den 26 september 2012 18:44
    *To:* Victor Sanchez
    *Cc:* Vinithra Varadharajan; [email protected]

    *Subject:* Re: How to configure ganglia monitoring on CM4****

    ** **

    Hi Victor,

    You've installed CDH4, but when you set up the cluster in CM, you
    independently choose what version the cluster should be. This will
    be reflected in the default name of the cluster, "Cluster 1 - CDH4" or
    similar. If you accidentally created it as a CDH3 cluster, it would
    explain your symptoms, but it would also prevent the services from running
    against CDH4 packages, which is not something you've noted, so it might be
    something else. Which exact version of CM are you using?

    --phil

    On 25 September 2012 00:05, Victor Sanchez wrote:

    I believe there is something wrong with the installation. I ran the host
    inspector from Cloudera Manager and these are the versions I get:

    Component                               Version
    MapReduce 2 (CDH4 only)                 2.0.0+91
    HDFS (CDH4 only)                        2.0.0+91
    Hue Plugins                             2.0.0+59
    HBase                                   0.92.1+67
    Oozie                                   3.1.3+155
    Yarn (CDH4 only)                        2.0.0+91
    Zookeeper                               3.4.3+15
    Hue                                     2.0.0+59
    MapReduce 1 (CDH4 only)                 0.20.2+1216
    HttpFS (CDH4 only)                      2.0.0+91
    Hadoop                                  2.0.0+91
    Hive                                    0.8.1+61
    Cloudera Manager Management Daemons     4.0.2
    Cloudera Manager Agent                  4.0.2

    As far as I understand, I installed CDH4, but those options still don't
    show up.

    Here is a list of installed packages:

    # rpm -qa | grep cloudera
    cloudera-manager-repository-4.0-1.noarch
    cloudera-manager-server-4.0.2-1.cm402.p0.26.x86_64
    cloudera-manager-server-db-4.0.2-1.cm402.p0.26.x86_64
    cloudera-manager-agent-4.0.2-1.cm402.p0.26.x86_64
    cloudera-manager-daemons-4.0.2-1.cm402.p0.26.x86_64

    # rpm -qa | grep cdh
    hue-filebrowser-2.0.0+59-1.cdh4.0.1.p0.1.el6.x86_64
    sqoop-1.4.1+28-1.cdh4.0.1.p0.1.el6.noarch
    hue-useradmin-2.0.0+59-1.cdh4.0.1.p0.1.el6.x86_64
    bigtop-tomcat-0.4+300-1.cdh4.0.1.p0.1.el6.noarch
    hadoop-2.0.0+91-1.cdh4.0.1.p0.1.el6.x86_64
    hadoop-mapreduce-2.0.0+91-1.cdh4.0.1.p0.1.el6.x86_64
    hive-0.8.1+61-1.cdh4.0.1.p0.1.el6.noarch
    hue-plugins-2.0.0+59-1.cdh4.0.1.p0.1.el6.x86_64
    hue-proxy-2.0.0+59-1.cdh4.0.1.p0.1.el6.x86_64
    bigtop-jsvc-0.4+300-1.cdh4.0.1.p0.1.el6.x86_64
    hadoop-hdfs-2.0.0+91-1.cdh4.0.1.p0.1.el6.x86_64
    hadoop-yarn-2.0.0+91-1.cdh4.0.1.p0.1.el6.x86_64
    hadoop-0.20-mapreduce-0.20.2+1216-1.cdh4.0.1.p0.1.el6.x86_64
    hbase-0.92.1+67-1.cdh4.0.1.p0.1.el6.noarch
    hue-help-2.0.0+59-1.cdh4.0.1.p0.1.el6.x86_64
    hue-jobbrowser-2.0.0+59-1.cdh4.0.1.p0.1.el6.x86_64
    hue-beeswax-2.0.0+59-1.cdh4.0.1.p0.1.el6.x86_64
    hue-shell-2.0.0+59-1.cdh4.0.1.p0.1.el6.x86_64
    zookeeper-3.4.3+15-1.cdh4.0.1.p0.1.el6.noarch
    oozie-client-3.1.3+155-1.cdh4.0.1.p0.1.el6.noarch
    pig-0.9.2+26-1.cdh4.0.1.p0.1.el6.noarch
    hue-common-2.0.0+59-1.cdh4.0.1.p0.1.el6.x86_64
    hue-about-2.0.0+59-1.cdh4.0.1.p0.1.el6.x86_64
    hue-jobsub-2.0.0+59-1.cdh4.0.1.p0.1.el6.x86_64
    hue-2.0.0+59-1.cdh4.0.1.p0.1.el6.x86_64
    bigtop-utils-0.4+300-1.cdh4.0.1.p0.1.el6.noarch
    hadoop-httpfs-2.0.0+91-1.cdh4.0.1.p0.1.el6.x86_64
    hadoop-client-2.0.0+91-1.cdh4.0.1.p0.1.el6.x86_64
    oozie-3.1.3+155-1.cdh4.0.1.p0.1.el6.noarch

    /V

    From: Vinithra Varadharajan
    Sent: 24 September 2012 20:54
    To: Victor Sanchez
    Cc: [email protected]
    Subject: Re: How to configure ganglia monitoring on CM4

    Victor,

    It looks like you have set up a CDH3 cluster using CM. That is why you
    can't see the Hadoop Metrics2 safety valve. If you set up a CDH4 cluster,
    you will see this config.

    -Vinithra

    On Mon, Sep 24, 2012 at 12:51 AM, VicSanchez wrote:

    Hi guys!

    I looked for the hadoop-metrics2.properties safety valve, but no luck.

    I'm running CM4, and under the HDFS service tab I only get the following
    properties for the NN, DN and Secondary NN.

    Hadoop Metrics Class
    org.apache.hadoop.metrics.spi.NoEmitMetricsContext
    Implementation daemons will use to report some internal statistics. The
    default (NoEmitMetricsContext) will display metrics on /metrics on the
    status port. The GangliaContext and GangliaContext31 classes will report
    metrics to your specified Ganglia Monitoring Daemons (gmond). The ganglia
    wire format changed incompatibly at version 3.1.0. If you are running any
    version of ganglia 3.1.0 or newer, use the GangliaContext31 metric class;
    otherwise, use the GangliaContext metric class.

    Hadoop Metrics Output Directory
    /tmp/metrics
    If using FileContext, the directory to write metrics to.

    Hadoop Metrics Ganglia Servers
    Default value is empty. Click to edit.
    If using GangliaContext, a comma-delimited list of host:port pairs
    pointing to 'gmond' servers you would like to publish metrics to. In
    practice, this set of 'gmond' should match the set of 'gmond' in your
    'gmetad' datasource list for the cluster.

    Can anyone point me to the right place to find that configuration file?
    I'm able to run ganglia, but no HDFS metrics get pushed to it; no NN or
    DN metrics are available, only CPU, RAM and all that stuff.

    Thanks in advance for your help!

    /Victor




    Victor Sanchez
    Database Architect

    Net Entertainment NE AB, Luntmakargatan 18, SE-111 37, Stockholm, SE
    T: +46 709 124 656, M: +46 709 124 656, F: +46 8 578 545 10
    [email protected] www.netent.com

    Better Games

    This email and the information it contains are confidential and may be
    legally privileged and intended solely for the use of the individual or
    entity to whom they are addressed. If you have received this email in error
    please notify me immediately. Please note that any views or opinions
    presented in this email are solely those of the author and do not
    necessarily represent those of the company. You should not copy it for any
    purpose, or disclose its contents to any other person. Internet
    communications are not secure and, therefore, Net Entertainment does not
    accept legal responsibility for the contents of this message as it has been
    transmitted over a public network. If you suspect the message may have been
    intercepted or amended please call me. Finally, the recipient should check
    this email and any attachments for the presence of viruses. The company
    accepts no liability for any damage caused by any virus transmitted by this
    email. Thank you. ****

  • VicSanchez at Oct 2, 2012 at 1:48 pm
    Hi Phil!

    That did the trick! I updated my CM 4.0.2 to CM 4.0.4 and now I'm able to
    see metrics coming into ganglia.

    Here is what I wrote in the safety valve for hadoop-metrics2.properties.
    Most of the examples out there are just about writing to a file, but if
    you have ganglia, you may find this setup useful.

    *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
    # default sampling period
    *.period=10

    namenode.sink.ganglia.servers=239.2.11.71:8649
    #datanode.sink.ganglia.servers=239.2.11.71:8649
    #secondarynamenode.sink.ganglia.servers=239.2.11.71:8649

    # These are for YARN - MapReduce v2
    #nodemanager.sink.ganglia.servers=239.2.11.71:8649
    #resourcemanager.sink.ganglia.servers=239.2.11.71:8649

    One other question: if I need some metrics from the jobtracker and
    tasktracker, where should I configure them? Now that I've updated to CM
    4.0.4, the metrics2 safety valve only appears under the HDFS service; the
    MapReduce service keeps displaying the "old" hadoop-metrics.properties, so
    I'm a bit confused about how to use it.

    Thanks for the help!

    /V
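    Once a sink like the one above is in place, you can confirm the metrics
    actually reached ganglia by reading gmond's own state: by default gmond
    serves its full metric tree as XML on TCP port 8649 when you connect
    (e.g. with netcat). A sketch that filters such a dump for HDFS metrics;
    the XML below is a hand-written sample of the gmond format, and the
    host, cluster and metric names are illustrative assumptions:

```python
import xml.etree.ElementTree as ET

# In practice, capture the dump from a gmond host first:
#   nc gmond-host 8649 > dump.xml
# Hand-written sample in gmond's XML shape (names and values illustrative):
sample = '''<GANGLIA_XML VERSION="3.1.0" SOURCE="gmond">
  <CLUSTER NAME="Dev_Cluster" LOCALTIME="0" OWNER="" LATLONG="" URL="">
    <HOST NAME="nn1" IP="10.0.0.1" REPORTED="0" TN="0" TMAX="20"
          DMAX="0" LOCATION="" GMOND_STARTED="0">
      <METRIC NAME="dfs.FSNamesystem.CapacityTotalGB" VAL="100"
              TYPE="float" UNITS="" TN="0" TMAX="60" DMAX="0"
              SLOPE="both" SOURCE="gmond"/>
      <METRIC NAME="cpu_idle" VAL="95.0" TYPE="float" UNITS="%"
              TN="0" TMAX="60" DMAX="0" SLOPE="both" SOURCE="gmond"/>
    </HOST>
  </CLUSTER>
</GANGLIA_XML>'''

root = ET.fromstring(sample)
# HDFS metrics from the metrics2 GangliaSink show up with a "dfs." prefix
# here, alongside gmond's own system metrics like cpu_idle.
hdfs_metrics = [m.get("NAME") for m in root.iter("METRIC")
                if m.get("NAME", "").startswith("dfs.")]
print(hdfs_metrics)
```

    If the dump only contains system metrics (CPU, RAM), the daemons are not
    publishing, which matches the symptom Victor described before the fix.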

  • Vinithra Varadharajan at Oct 2, 2012 at 5:39 pm
    Victor,
    While CDH4 HDFS has switched to the hadoop-metrics2 system, MapReduce
    continues to use hadoop-metrics.properties. For instance, you can get the
    JobTracker metrics from http://jtHost:21001/metrics. You can use the
    metrics configurations available in the MR service to hook up Ganglia to
    these metrics: in the Configurations tab of the MR service, search for
    "metrics".

    -Vinithra
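    For reference, the metrics1 system configured here uses per-context
    entries rather than metrics2 sinks. A sketch of equivalent
    hadoop-metrics.properties entries for the MRv1 daemons, mirroring
    Victor's metrics2 snippet (the gmond address is his; in CM these
    correspond to the "Hadoop Metrics Class" and "Hadoop Metrics Ganglia
    Servers" fields quoted earlier in the thread):

```
# metrics1 entries for MRv1 daemons (JobTracker/TaskTracker)
mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
mapred.period=10
mapred.servers=239.2.11.71:8649

# JVM metrics from the same daemons
jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
jvm.period=10
jvm.servers=239.2.11.71:8649
```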

Discussion Overview
group: cm-users
categories: hadoop
posted: Aug 6, '12 at 3:09a
active: Oct 2, '12 at 5:39p
posts: 10
users: 4
website: cloudera.com
irc: #hadoop
