location of core-site.xml
I have CDH 4.5.

Problem: hadoop fs -ls / returns the root directory of the local file
system.

In CM I navigate to Services -> hdfs1 -> Instances -> datanode (machine12)
     Processes -> Show (Config Files/Env) -> core-site.xml
       There I see fs.defaultFS is set correctly to hdfs://machine.company.com:8020

Yet when I am on that DN, hadoop fs -ls / returns the root directory of
the local file system.

I searched for core-site.xml:
ls -al /etc/hadoop/conf -> /etc/alternatives/hadoop-conf ->
/opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/etc/hadoop/conf.empty

I edited that core-site.xml, which was empty, and added the correct
fs.defaultFS.

Now hadoop fs -ls / returns the root level of HDFS.

1.) Is this the correct core-site.xml to use?
2.) How do I determine which core-site.xml, or any other config XML file,
is used, and where it is located?
3.) Shouldn't CM be editing the correct core-site.xml? Or maybe it is?

thanks.
John


  • Vikram Srivastava at Feb 11, 2014 at 6:48 am
    John,

    You don't need to manually edit core-site.xml. All configuration should
    be done via the Cloudera Manager UI only. I think you haven't deployed
    client configs for HDFS. You should go to your cluster's "Actions" menu (on
    the Home page) and run "Deploy Client Configuration". Then run hadoop fs -ls /
    again.
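
    A quick sanity check after the deploy finishes (a minimal sketch; the
    expected value is just the one from your message):

      grep -A1 fs.defaultFS /etc/hadoop/conf/core-site.xml
      # should now show a value of hdfs://machine.company.com:8020
      readlink -f /etc/hadoop/conf
      # should no longer resolve to the parcel's conf.empty directory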

    Vikram

  • John Meza at Feb 11, 2014 at 4:04 pm
    sorry, forgot to add scm-users.

    ---------- Forwarded message ----------
    From: John Meza <jmezazap@gmail.com>
    Date: Tue, Feb 11, 2014 at 8:03 AM
    Subject: Re: location of core-site.xml
    To: Vikram Srivastava <vikrams@cloudera.com>


    Or can you forward a link that describes how to determine the active config
    file location, and what CM does during the "Deploy Client Configuration"
    action?
    thanks
    John

    On Tue, Feb 11, 2014 at 7:58 AM, John Meza wrote:

    Vikram,
    thanks, that worked.
    I removed my edits from the core-site.xml I had modified; hadoop fs -ls /
    still worked.

    Q: how do I determine which config files are used?
    thanks
    john

  • Darren Lo at Feb 11, 2014 at 4:52 pm
    Hi John,

    Deploy client config updates things like /etc/hadoop/conf, /etc/hive/conf,
    /etc/hbase/conf, /etc/solr/conf on all hosts with roles for the relevant
    services. If a machine isn't getting its /etc updated, then you probably
    want to add a Gateway role to the relevant service on that host and
    re-deploy client configuration.

    It's usually best to use the cluster-level deploy client configuration
    command, since both HDFS and MapReduce (and YARN) use the same
    /etc/hadoop/conf directory and you want to make sure it's updated correctly.
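
    As a rough way to see whether a particular host actually received the
    update (a sketch; the directory names assume a parcel-based install like
    John's):

      ls -ld /etc/hadoop/conf.cloudera.*      # per-service copies written by "Deploy Client Configuration"
      ls -l /etc/hadoop/conf/core-site.xml    # the copy that command-line clients actually read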

    Thanks,
    Darren

  • John Meza at Feb 11, 2014 at 5:10 pm
    K. thanks for helping out.

  • Tao Xiao at Feb 12, 2014 at 6:25 am
    Hi Darren, Vikram:

    I checked what you said about the configuration files in my CDH cluster and
    got confused.

    At about 13:34 I changed the replication factor from 3 to 4 through
    Cloudera Manager's UI and deployed the client configuration. Then I listed
    all hdfs-site.xml files using the following command:

    [root@hadoop-6 ~]# locate hdfs-site.xml | xargs ls -al
    -rw-r--r-- 1 root root 1579 Feb 12 13:34
    /etc/hadoop/conf.cloudera.hdfs1/hdfs-site.xml
    -rw-r--r-- 1 root root 1464 Jan 27 14:25
    /etc/hadoop/conf.cloudera.mapreduce1/hdfs-site.xml
    -rw-r--r-- 1 root root 1357 Jan 27 16:50
    /etc/hbase/conf.cloudera.hbase1/hdfs-site.xml
    -rwxr-xr-x 1 root root 1023 Nov 21 10:07
    /opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/etc/hadoop/conf.empty/hdfs-site.xml
    -rwxr-xr-x 1 root root 1875 Nov 21 10:07
    /opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/etc/hadoop/conf.pseudo/hdfs-site.xml
    -rwxr-xr-x 1 root root 1875 Nov 21 10:07
    /opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/etc/hadoop/conf.pseudo.mr1/hdfs-site.xml
    -rwxr-xr-x 1 root root 1391 Nov 21 10:07
    /opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/lib/hadoop-0.20-mapreduce/example-confs/conf.pseudo/hdfs-site.xml
    -rwxr-xr-x 1 root root 6453 Nov 21 10:07
    /opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/lib/hadoop-0.20-mapreduce/example-confs/conf.secure/hdfs-site.xml
    -rw-r----- 1 hdfs hdfs 3521 Feb 12 09:32
    /var/run/cloudera-scm-agent/process/17-hdfs-NAMENODE/hdfs-site.xml
    -rw-r----- 1 hdfs hdfs 4237 Feb 12 09:32
    /var/run/cloudera-scm-agent/process/23-hdfs-DATANODE/hdfs-site.xml
    -rw-r----- 1 hbase hbase 1356 Feb 12 09:42
    /var/run/cloudera-scm-agent/process/25-hbase-REGIONSERVER/hdfs-site.xml
    -rw-r----- 1 hbase hbase 1356 Feb 12 09:45
    /var/run/cloudera-scm-agent/process/34-hbase-REGIONSERVER/hdfs-site.xml
    -rw-r----- 1 root root 1579 Feb 12 13:12
    /var/run/cloudera-scm-agent/process/43-deploy-client-config/hadoop-conf/hdfs-site.xml
    -rw-r----- 1 root root 1579 Feb 12 13:14
    /var/run/cloudera-scm-agent/process/48-deploy-client-config/hadoop-conf/hdfs-site.xml
    -rw-r----- 1 root root 1579 Feb 12 13:34
    /var/run/cloudera-scm-agent/process/53-deploy-client-config/hadoop-conf/hdfs-site.xml


    It can be seen that only two files
    (/etc/hadoop/conf.cloudera.hdfs1/hdfs-site.xml and
    /var/run/cloudera-scm-agent/process/53-deploy-client-config/hadoop-conf/hdfs-site.xml)
    were updated at 13:34, and the dfs.replication property in these two files
    is indeed set to 4.

    However, the configuration change I made did not actually take effect:
    I started HDFS, uploaded a file into HBase, and found that the replication
    factor for this file was still 3, not 4. So how do I make the change take
    effect?
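
    (For reference, roughly how this can be checked; the path below is just an
    example file:)

      hadoop fs -ls /tmp/testfile
      # the second column of the listing is the file's replication factor
      grep -A1 dfs.replication /etc/hadoop/conf/hdfs-site.xml
      # the value a command-line client would apply to newly written files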

    You mentioned that "Deploy client config updates things like
    /etc/hadoop/conf, /etc/hive/conf, /etc/hbase/conf, /etc/solr/conf on all
    hosts with roles for the relevant services". But
    /etc/hadoop/conf/hdfs-site.xml was not updated. Is that the reason why the
    replication factor was still 3?

    You also mentioned that "If a machine isn't getting its /etc updated, then
    you probably want to add a Gateway role to the relevant service on that
    host and re-deploy client configuration". Is that the reason why
    /etc/hadoop/conf/hdfs-site.xml was not updated? How do I add a Gateway role
    to the relevant service? I did see Gateway Configuration and its three
    sub-properties (Performance, Resource Management and Advanced) in the CM UI,
    but I don't know how to "add a Gateway role to the relevant service".

  • Darren Lo at Feb 12, 2014 at 5:04 pm
    Hi Tao,

    Did you run the cluster-level deploy client config, or the HDFS-level
    command? HDFS and MapReduce both use /etc/hadoop/conf, but MapReduce's
    configuration has higher alternatives priority by default
    (update-alternatives --display hadoop-conf). If you only deployed HDFS
    client config, then it will update the unused HDFS copy of the client
    config and your system will still use the stale MapReduce copy.
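
    For example (a sketch; the priorities and the currently selected directory
    vary by install, so these numbers are only illustrative):

      update-alternatives --display hadoop-conf
      # hadoop-conf - status is auto.
      #  link currently points to /etc/hadoop/conf.cloudera.mapreduce1
      # /etc/hadoop/conf.cloudera.hdfs1 - priority 90
      # /etc/hadoop/conf.cloudera.mapreduce1 - priority 92
      # the highest-priority entry is what /etc/hadoop/conf ends up pointing at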

    To avoid this kind of confusion, it's easiest to use the cluster-level
    deploy client config command.

    The latest beta release of CM 5 will let you know if your client
    configuration is stale, which also helps to avoid this confusion. In your
    scenario, it would have marked the MapReduce service as having stale client
    configuration.

    Thanks,
    Darren

  • Tao Xiao at Feb 13, 2014 at 1:55 am
    Hi Darren,

    I ran the HDFS-level Deploy Client Configuration command yesterday, not the
    cluster-level command. I ran the cluster-level command just now and can see
    that the following hdfs-site.xml files were updated correctly:

      /etc/hadoop/conf.cloudera.hdfs1/hdfs-site.xml
      /etc/hadoop/conf.cloudera.mapreduce1/hdfs-site.xml
      /etc/hbase/conf.cloudera.hbase1/hdfs-site.xml
      /var/run/cloudera-scm-agent/process/65-deploy-client-config/hadoop-conf/hdfs-site.xml
      /var/run/cloudera-scm-agent/process/67-deploy-client-config/hbase-conf/hdfs-site.xml
      /var/run/cloudera-scm-agent/process/76-deploy-client-config/hadoop-conf/hdfs-site.xml

    Also, I uploaded a file into HDFS and its replication factor is indeed 4.

    Thanks



  • Darren Lo at Feb 13, 2014 at 1:56 am
    Glad you got it working!
