On Tue, Feb 11, 2014 at 8:51 AM, Darren Lo wrote:
Hi John,
Deploy Client Configuration updates directories such as /etc/hadoop/conf,
/etc/hive/conf, /etc/hbase/conf, and /etc/solr/conf on all hosts that have
roles for the relevant services. If a machine isn't getting its /etc updated,
you probably want to add a Gateway role to the relevant service on that host
and re-deploy the client configuration.
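If you'd rather script that than click through the UI, something like the
following against the CM REST API should add a Gateway role. The host ID, role
name, service and cluster names, credentials, and API version below are
placeholders for illustration, so check them against your deployment:
curl -u admin:admin -X POST -H 'Content-Type: application/json' \
  -d '{"items":[{"name":"hdfs1-GATEWAY-machine12","type":"GATEWAY","hostRef":{"hostId":"<machine12-host-id>"}}]}' \
  'http://cm-host.example.com:7180/api/v5/clusters/cluster1/services/hdfs1/roles'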
It's usually best to use the cluster-level Deploy Client Configuration
command, since HDFS and MapReduce (and YARN) share the same /etc/hadoop/conf
directory and you want to make sure it's updated consistently.
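The cluster-level command is also exposed through the CM REST API if you want
to automate the re-deploy; again, the host, credentials, API version, and
cluster name here are placeholders:
curl -u admin:admin -X POST \
  'http://cm-host.example.com:7180/api/v5/clusters/cluster1/commands/deployClientConfig'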
Thanks,
Darren
On Tue, Feb 11, 2014 at 8:04 AM, John Meza wrote:
Sorry, forgot to add scm-users.
---------- Forwarded message ----------
From: John Meza <jmezazap@gmail.com>
Date: Tue, Feb 11, 2014 at 8:03 AM
Subject: Re: location of core-site.xml
To: Vikram Srivastava <vikrams@cloudera.com>
Or can you forward a link that describes how to determine the active
config file location, and what CM does during the "Deploy Client
Configuration" action?
thanks
John
On Tue, Feb 11, 2014 at 7:58 AM, John Meza wrote:
Vikram,
Thanks, that worked.
I removed my edits from the core-site.xml I had modified, and hadoop fs -ls /
still worked.
Q: how do I determine which config files are used?
thanks,
John
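One quick way to check from a shell, assuming the standard CDH client scripts
are on your PATH (a sketch, not the only way):
hadoop classpath | tr ':' '\n' | head -1   # first classpath entry is the conf dir the client reads
hdfs getconf -confKey fs.defaultFS         # the default filesystem resolved from that config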
On Mon, Feb 10, 2014 at 10:48 PM, Vikram Srivastava <vikrams@cloudera.com> wrote:
John,
You don't need to manually edit core-site.xml. All configuration
should be done via the Cloudera Manager UI. I think you haven't deployed
client configs for HDFS. Go to your cluster's "Actions" menu (on the
Home page) and run "Deploy Client Configuration". Then run hadoop fs -ls / again.
Vikram
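After the deploy finishes, a quick sanity check on the host is to confirm the
alternatives link no longer points at the parcel's conf.empty; the exact
CM-managed directory name varies by service (a sketch):
readlink -f /etc/hadoop/conf   # expect something like /etc/hadoop/conf.cloudera.hdfs1
hadoop fs -ls /                # should now list the HDFS root, not the local filesystem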
On Mon, Feb 10, 2014 at 10:44 PM, John Meza wrote:
I have CDH 4.5.
Problem: a hadoop fs -ls / returns the root directory of the local
file system.
In CM I navigate to Services -> hdfs1 -> Instances -> datanode (machine12) ->
Processes -> Show (Config files/Env) -> core-site.xml
I see fs.defaultFS is set correctly to hdfs://machine.company.com:8020
Yet when I am on that DN a hadoop fs -ls / returns the root directory
of the local file system.
I searched for core-site.xml and found:
ls -al /etc/hadoop/conf -> /etc/alternatives/hadoop-conf ->
/opt/cloudera/parcels/CDH-4.5.0-1.cdh4.5.0.p0.30/etc/hadoop/conf.empty
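For what it's worth, the same chain can be inspected through the alternatives
system itself (alternatives on RHEL/CentOS, update-alternatives on Debian/Ubuntu):
alternatives --display hadoop-conf   # lists every registered conf dir and the active one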
I edited that core-site.xml, which was empty, and added the correct
fs.defaultFS.
Now hadoop fs -ls / returns the root level of HDFS.
1.) Is this the correct core-site.xml that should be used?
2.) How do I determine which core-site.xml, or any other config XML
file, and its location, is used?
3.) Shouldn't CM be editing the correct core-site.xml? Or maybe it is?
thanks.
John