Hi, I'm trying to upgrade a UAT cluster (13 nodes) from CM 4.1.3 to CM 4.5.2.
I ran the host upgrade wizard and got messages that confused me:

Inspector ran on all 13 hosts. Individual hosts resolved their own hostnames
correctly. No errors were found while looking for conflicting init scripts.
No errors were found while checking /etc/hosts. All hosts resolved localhost
to 127.0.0.1. All hosts checked resolved each other's hostnames correctly.
Host clocks are approximately in sync (within ten minutes). Host time zones
are consistent across the cluster. No users or groups are missing. No kernel
versions that are known to be bad are running. 0 hosts are running CDH3 and
13 hosts are running CDH4. There are mismatched versions across the system.
See the tables below for details on which hosts are running which versions
of components. All checked Cloudera Management Daemons versions are
consistent with the server. Some Cloudera Management Agents are installed
with a differing version from the server. Check package versions in the
tables below.

Here is the report:

*Group 1 (CDH4)*
Hosts: node01.lol.ru, node02.lol.ru, node09.lol.ru, node10.lol.ru, node11.lol.ru

Component                            Version      CDH Version
Impala                               Unavailable  Not installed or path incorrect
HDFS (CDH4 only)                     2.0.0+556    CDH4
Hue Plugins                          2.1.0+223    CDH4
MapReduce 2 (CDH4 only)              2.0.0+556    CDH4
HBase                                0.92.1+165   CDH4
Oozie                                3.2.0+134    CDH4
Yarn (CDH4 only)                     2.0.0+556    CDH4
Zookeeper                            3.4.3+32     CDH4
Hue                                  2.1.0+223    CDH4
MapReduce 1 (CDH4 only)              0.20.2+1270  CDH4
HttpFS (CDH4 only)                   2.0.0+556    CDH4
Hadoop                               2.0.0+556    CDH4
Hive                                 0.9.0+158    CDH4
Flume NG                             1.2.0+126    CDH4
Cloudera Manager Management Daemons  4.5.2        Not applicable
*Cloudera Manager Agent*             *4.1.3*      Not applicable

Let's go to host node01 and look there:

[[email protected] ~]$ rpm -qa | grep cloud
cloudera-manager-agent-4.5.2-1.cm452.p0.327.x86_64
cloudera-manager-daemons-4.5.2-1.cm452.p0.327.x86_64

Why does Cloudera Manager report that node01 has *agent-4.1.3* while the rpm
utility says it's *4.5.2*?


  • Darren Lo at May 7, 2013 at 8:00 pm
    Did you restart the agents after upgrading them?

    --
    Thanks,
    Darren
  • Serega Sheypak at May 7, 2013 at 8:41 pm
    Directly - no! How can I do that? I would like to avoid manual
    intervention on 10 different hosts.
  • Darren Lo at May 7, 2013 at 9:02 pm
    The host upgrade wizard should have done this for you. If this version
    mismatch causes you problems, you can probably use cssh to simultaneously
    connect to all 10 hosts and run: sudo service cloudera-scm-agent restart
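
    If cssh isn't handy, a minimal sketch of the same thing (assuming
    passwordless SSH as a sudo-capable user, and a hypothetical hosts.txt
    listing the affected nodes, one per line):

        # restart the CM agent on every host listed in hosts.txt (hypothetical file)
        while read -r h; do
          ssh "$h" 'sudo service cloudera-scm-agent restart'
        done < hosts.txt

    Any parallel-ssh tool (pdsh, pssh, etc.) works the same way.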


    --
    Thanks,
    Darren
  • Philip Langdale at May 7, 2013 at 11:12 pm
    I'd recommend re-running the host upgrade wizard. If it had originally run
    to completion, it would have upgraded the packages and restarted the
    agents. The incorrect version strongly implies the wizard did not run to
    completion, so we can't be sure the packages were upgraded.

    The button for this is on the Hosts page.

    --phil

  • Serega Sheypak at May 8, 2013 at 6:36 am
    Hi.
    The previous installation was 4.1.3 and RPM-based.
    Should the next one, the 4.1.3 -> 4.5.2 upgrade, be parcel-based (???)
    I've tried to re-run the host upgrade several times. It doesn't help.

    How can I determine whether my CDH is package- or parcel-based now?
  • Serega Sheypak at May 8, 2013 at 6:47 am
    OMG,
    I used puppet to restart cloudera-scm-agent on all nodes. I did only this
    and nothing more.
    BEFORE that I had tried to re-run the host upgrade wizard. It didn't help;
    you can see the result in my previous letter.

    Now Cloudera Manager reports that it's distributing parcels for CDH 4.5.2
    and Impala 1.0.x.
    What's going on...?
  • Serega Sheypak at May 8, 2013 at 6:49 am
    Now Host Inspector reports:

    *Group 1 (CDH4)* (it's the only group)
    Hosts: all my nodes are listed here!

    Component                            Version      CDH Version
    Flume NG                             1.3.0+96     CDH4
    MapReduce 1 (CDH4 only)              0.20.2+1359  CDH4
    HDFS (CDH4 only)                     2.0.0+960    CDH4
    HttpFS (CDH4 only)                   2.0.0+960    CDH4
    MapReduce 2 (CDH4 only)              2.0.0+960    CDH4
    Yarn (CDH4 only)                     2.0.0+960    CDH4
    Hadoop                               2.0.0+960    CDH4
    HBase                                0.94.2+218   CDH4
    HCatalog                             0.4.0+218    CDH4
    Hive                                 0.10.0+78    CDH4
    Mahout                               0.7+15       CDH4
    Oozie                                3.3.0+79     CDH4
    Pig                                  0.10.0+510   CDH4
    Sqoop                                1.4.2+60     CDH4
    Sqoop2 (CDH4 only)                   1.99.1+33    CDH4
    Whirr                                0.8.0+26     CDH4
    Zookeeper                            3.4.5+16     CDH4
    Impala                               1.0          Not applicable
    Cloudera Manager Management Daemons  4.5.2        Not applicable
    Cloudera Manager Agent               4.5.2        Not applicable

    Looks like everything is fine!
  • Herman Chen at May 8, 2013 at 6:56 am
    It looks like the CM upgrade to 4.5.2 was successful. To answer your
    earlier question, CM server/agent/daemons always use packages, never
    parcels.

    As for the CDH4 services that you have been running since CM 4.1.3, they
    should still be on packages if you have not explicitly migrated them over
    to parcels. The parcel distribution that you saw only distributes the
    bits; it will not impact your services until a parcel is actually
    activated, so don't worry.
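
    One quick way to check a host from the shell (a sketch; it assumes the
    default parcel prefix, /opt/cloudera/parcels):

        # an activated CDH parcel shows up as a CDH symlink to the active version
        ls -l /opt/cloudera/parcels
        # CDH packages, if any, are still visible to rpm
        rpm -qa | grep hadoop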

    Herman


  • Serega Sheypak at May 8, 2013 at 7:03 am
    1. I understand that the Cloudera management stuff is package-based.
    2. I have alternatives for CDH4: packages/parcels. I'm upgrading the UAT
    env. Our production env runs on parcels; I set it up later, and it was a
    parcel-based installation under CM 4.5.
    3. I want UAT CDH4 to be parcel-based too.
    The question: does my cluster work on parcels or packages after the
    upgrade?

    Here is my parcel conf page:
    [image: inline image 1]

    And the parcels page says that parcels are activated. I suppose that right
    now parcels are used instead of packages...?
    [image: inline image 2]


  • Serega Sheypak at May 8, 2013 at 8:41 am
    Here is another problem. Roles don't start after restarting the cluster.
    I suppose that Cloudera Manager tries to start parcel-based roles, but
    nobody stopped the package-based software components.

    12:36:47.815 INFO org.apache.hadoop.mapred.TaskTracker

    SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down TaskTracker at node02.lol.ru/10.66.48.104
    ************************************************************/

    12:36:52.997 INFO org.apache.hadoop.mapred.TaskTracker

    STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting TaskTracker
    STARTUP_MSG: host = node02.lol.ru/10.66.48.104
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 2.0.0-mr1-cdh4.2.1
    STARTUP_MSG: classpath = /var/run/cloudera-scm-agent/process/1013-mapreduce-TASKTRACKER:/usr/java/jdk1.6.0_37/lib/tools.jar:/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/lib/hadoop-0.20-mapreduce:/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/lib/hadoop-0.20-mapreduce/hadoop-core-2.0.0-mr1-cdh4.2.1.jar:/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/lib/hadoop-0.20-mapreduce/lib/activation-1.1.jar:/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/lib/hadoop-0.20-mapreduce/lib/ant-contrib-1.0b3.jar:/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/lib/hadoop-0.20-mapreduce/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/lib/hadoop-0.20-mapreduce/lib/aspectjrt-1.6.5.jar:/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/lib/hadoop-0.20-mapreduce/lib/aspectjtools-1.6.5.jar:/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/lib/hadoop-0.20-mapreduce/lib/avro-1.7.3.jar:/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/lib/hadoop-0.20-mapreduce/lib/avro-compiler-1.7.3.jar

    12:36:55.168 WARN org.apache.hadoop.mapred.TaskTracker

    TaskTracker's totalMemoryAllottedForTasks is -1 and reserved physical memory is not configured. TaskMemoryManager is disabled.

    12:36:55.213 INFO org.apache.hadoop.mapred.IndexCache

    IndexCache created with max memory = 10485760

    12:36:55.229 INFO org.apache.hadoop.http.HttpServer

    HttpServer.start() threw a non Bind IOException
    java.net.BindException: Port in use: 0.0.0.0:50060
      at org.apache.hadoop.http.HttpServer.openListener(HttpServer.java:729)
      at org.apache.hadoop.http.HttpServer.start(HttpServer.java:673)
      at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:1758)
      at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:4041)
    Caused by: java.net.BindException: Address already in use
      at sun.nio.ch.Net.bind(Native Method)
      at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
      at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
      at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
      at org.apache.hadoop.http.HttpServer.openListener(HttpServer.java:725)
      ... 3 more

    12:36:55.232 ERROR org.apache.hadoop.mapred.TaskTracker

    Can not start task tracker because java.net.BindException: Port in use: 0.0.0.0:50060
      at org.apache.hadoop.http.HttpServer.openListener(HttpServer.java:729)
      at org.apache.hadoop.http.HttpServer.start(HttpServer.java:673)
      at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:1758)
      at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:4041)
    Caused by: java.net.BindException: Address already in use
      at sun.nio.ch.Net.bind(Native Method)
      at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
      at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
      at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
      at org.apache.hadoop.http.HttpServer.openListener(HttpServer.java:725)
      ... 3 more

    12:36:55.236 INFO org.apache.hadoop.mapred.TaskTracker

    SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down TaskTracker at node02.lol.ru/10.66.48.104
    ************************************************************/


  • Herman Chen at May 8, 2013 at 5:12 pm
    Cloudera Manager should have stopped the roles, assuming they were managed
    by CM.

    I suggest that you first do a Stop Cluster, instead of Restart, and examine
    any processes still taking up those ports. Once you sort that out, make
    sure the parcels are activated and then Start Cluster.
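
    For example (a sketch; whether lsof or netstat is installed varies by
    distro):

        # find whatever is still listening on the TaskTracker HTTP port
        sudo lsof -i :50060
        # or, with net-tools:
        sudo netstat -tlnp | grep :50060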

    Herman


  • Serega Sheypak at May 14, 2013 at 9:46 am
    The problem has been solved.
    1. I restarted all scm-agents on all CM-managed nodes using Puppet
    Enterprise.
    2. I restarted the whole cluster using the CM UI button.
    3. I ran the Host Inspector. It indicated that everything is OK.
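
    A quick sanity check for anyone repeating this (a sketch; it assumes
    passwordless SSH and a hypothetical hosts.txt listing the managed nodes,
    one per line) is to confirm every host now carries the same agent package:

        # print the agent version rpm reports on each host
        while read -r h; do
          printf '%s: ' "$h"; ssh "$h" 'rpm -q cloudera-manager-agent'
        done < hosts.txt

    Every host should report 4.5.2, matching what CM shows.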

    Thanks.


    2013/5/8 Herman Chen <[email protected]>
    Cloudera Manager should have stopped the roles, assuming they were managed
    by CM.

    I suggest that you first do a Stop Cluster, instead of Restart, and
    examine any processes still taking up those ports. Once you sort that out,
    make sure the parcels are activated and then Start Cluster.

    Herman


    On Wed, May 8, 2013 at 1:41 AM, Serega Sheypak wrote:

    Here is another problem. Roles don't start after resrting cluster.
    I suppose that Cloudera Manager tries to start parcel based roles but
    nobody did stop package-based software components.

    12:36:47.815 INFO org.apache.hadoop.mapred.TaskTracker
    SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down TaskTracker at node02.lol.ru/10.66.48.104
    ************************************************************/

    12:36:52.997 INFO org.apache.hadoop.mapred.TaskTracker
    STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting TaskTracker
    STARTUP_MSG: host = node02.lol.ru/10.66.48.104
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 2.0.0-mr1-cdh4.2.1
    STARTUP_MSG: classpath = /var/run/cloudera-scm-agent/process/1013-mapreduce-TASKTRACKER:/usr/java/jdk1.6.0_37/lib/tools.jar:/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/lib/hadoop-0.20-mapreduce:/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/lib/hadoop-0.20-mapreduce/hadoop-core-2.0.0-mr1-cdh4.2.1.jar:/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/lib/hadoop-0.20-mapreduce/lib/activation-1.1.jar:/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/lib/hadoop-0.20-mapreduce/lib/ant-contrib-1.0b3.jar:/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/lib/hadoop-0.20-mapreduce/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/lib/hadoop-0.20-mapreduce/lib/aspectjrt-1.6.5.jar:/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/lib/hadoop-0.20-mapreduce/lib/aspectjtools-1.6.5.jar:/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/lib/hadoop-0.20-mapreduce/lib/avro-1.7.3.jar:/opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/lib/hadoop-0.20-mapreduce/lib/avro-compiler-1.7.3.jar

    12:36:55.168 WARN org.apache.hadoop.mapred.TaskTracker
    TaskTracker's totalMemoryAllottedForTasks is -1 and reserved physical memory is not configured. TaskMemoryManager is disabled.

    12:36:55.213 INFO org.apache.hadoop.mapred.IndexCache
    IndexCache created with max memory = 10485760

    12:36:55.229 INFO org.apache.hadoop.http.HttpServer
    HttpServer.start() threw a non Bind IOException
    java.net.BindException: Port in use: 0.0.0.0:50060
        at org.apache.hadoop.http.HttpServer.openListener(HttpServer.java:729)
        at org.apache.hadoop.http.HttpServer.start(HttpServer.java:673)
        at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:1758)
        at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:4041)
    Caused by: java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer.openListener(HttpServer.java:725)
        ... 3 more

    12:36:55.232 ERROR org.apache.hadoop.mapred.TaskTracker
    Can not start task tracker because java.net.BindException: Port in use: 0.0.0.0:50060
        at org.apache.hadoop.http.HttpServer.openListener(HttpServer.java:729)
        at org.apache.hadoop.http.HttpServer.start(HttpServer.java:673)
        at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:1758)
        at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:4041)
    Caused by: java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:124)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer.openListener(HttpServer.java:725)
        ... 3 more

    12:36:55.236 INFO org.apache.hadoop.mapred.TaskTracker
    SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down TaskTracker at node02.lol.ru/10.66.48.104
    ************************************************************/


    On Wednesday, May 8, 2013 at 10:56:33 UTC+4, Herman Chen wrote:
    It looks like the CM upgrade to 4.5.2 was successful. To answer your earlier
    question, CM server/agent/daemons always use packages, never parcels.

    As for the CDH4 services that you have been running since CM 4.1.3, they
    should still be on packages if you have not explicitly migrated them over
    to parcels. The parcel distribution that you saw only distributes the bits;
    it will not impact your services until a parcel is actually activated, so
    don't worry.

    Herman
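
(One way to confirm whether a parcel is actually active on a host is to look at the parcel root; a distributed parcel shows up as a versioned directory, and an activated one is additionally exposed through a generic symlink. A sketch, assuming the default parcel location:)

    ls -l /opt/cloudera/parcels/
    # e.g. CDH -> CDH-4.2.1-1.cdh4.2.1.p0.5 indicates an activated parcel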


    On Tue, May 7, 2013 at 11:49 PM, Serega Sheypak wrote:

    Now the Host Inspector reports:

    *Group 1 (CDH4)* (it's the only group)
    Hosts: all my nodes listed here!

    Component                            Version       CDH Version
    Flume NG                             1.3.0+96      CDH4
    MapReduce 1 (CDH4 only)              0.20.2+1359   CDH4
    HDFS (CDH4 only)                     2.0.0+960     CDH4
    HttpFS (CDH4 only)                   2.0.0+960     CDH4
    MapReduce 2 (CDH4 only)              2.0.0+960     CDH4
    Yarn (CDH4 only)                     2.0.0+960     CDH4
    Hadoop                               2.0.0+960     CDH4
    HBase                                0.94.2+218    CDH4
    HCatalog                             0.4.0+218     CDH4
    Hive                                 0.10.0+78     CDH4
    Mahout                               0.7+15        CDH4
    Oozie                                3.3.0+79      CDH4
    Pig                                  0.10.0+510    CDH4
    Sqoop                                1.4.2+60      CDH4
    Sqoop2 (CDH4 only)                   1.99.1+33     CDH4
    Whirr                                0.8.0+26      CDH4
    Zookeeper                            3.4.5+16      CDH4
    Impala                               1.0           Not applicable
    Cloudera Manager Management Daemons  4.5.2         Not applicable
    Cloudera Manager Agent               4.5.2         Not applicable
    Looks like everything is fine!


    2013/5/8 Serega Sheypak <[email protected]>
    OMG,
    I used Puppet to restart the cloudera-scm-agents on all nodes. I did
    only that and nothing more.
    BEFORE that, I had tried to re-run the host upgrade wizard. It didn't
    help; you can see the result in my previous letter.

    Now Cloudera Manager reports that it's distributing parcels for CDH
    4.5.2 and Impala 1.0.x.
    What's going on...?



    2013/5/8 Serega Sheypak <[email protected]>
    Hi.
    The previous installation was 4.1.3 and RPM-based.
    The next one, the 4.1.3 -> 4.5.2 upgrade, should be parcel-based (???)
    I've tried to re-run the host upgrade wizard several times. It doesn't help.

    How can I determine whether my CDH is package- or parcel-based now?
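
(A rough way to answer this on any node is to check both locations; a sketch, and both checks are heuristics:)

    rpm -qa | grep -i '^hadoop'      # non-empty => CDH installed from packages
    ls /opt/cloudera/parcels/        # CDH-* entries => parcels distributed here
    readlink -f "$(which hadoop)"    # a parcel path => the binary resolves to a parcel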



    2013/5/8 Philip Langdale <[email protected]>
    I'd recommend re-running the host upgrade wizard. If it had run to
    completion originally, it would have upgraded the packages and restarted
    the agents. The incorrect version strongly implies the wizard did not run
    to completion, so we can't be sure the packages were upgraded.

    The button for this is on the Hosts page.

    --phil

    On 7 May 2013 14:02, Darren Lo wrote:

    The host upgrade wizard should have done this for you. If this
    version mismatch causes you problems, you can probably use cssh to
    simultaneously connect to all 10 hosts and run: sudo service
    cloudera-scm-agent restart


    On Tue, May 7, 2013 at 1:41 PM, Serega Sheypak <
    [email protected]> wrote:
    Directly - no! How can I do that? I would like to avoid manual
    intervention on 10 different hosts.


    2013/5/8 Darren Lo <[email protected]>
    Did you restart the agents after upgrading them?


