Cloudera Manager manages the Hadoop and HBase client configurations by
deploying them under /etc/hadoop/conf and /etc/hbase/conf through
alternatives, so those two links should point to the correct directory
after you run the "Deploy Client Configuration" command in CM. Note
that only hosts with a Gateway role are included in the client
configuration deployment.
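As a quick sanity check (a sketch, not CM output; the exact conf.cloudera.* target directory names vary by install), you can see where those links currently resolve:

```shell
# Show where the client-config symlinks currently resolve
# (typically to a conf.cloudera.* directory managed by alternatives).
for link in /etc/hadoop/conf /etc/hbase/conf; do
    if [ -e "$link" ]; then
        echo "$link -> $(readlink -f "$link")"
    else
        echo "$link not present on this host"
    fi
done
```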

If the hbase-site.xml under /etc/hbase/conf is empty, CM most likely
has not deployed the HBase client configs since you added the HBase
service, so you'd have to run the command manually from the Actions menu.

As for tweaking hadoop-env.sh to include the HBase classpath, CM
currently does not support customization of the generated
hadoop-env.sh, so you'd have to make those changes manually.
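For example, a minimal manual edit, assuming the stock `hbase classpath` helper is on the PATH, is to append something like this to the generated hadoop-env.sh (note that CM may regenerate the file on the next deploy, so the edit would have to be reapplied):

```shell
# Candidate lines to append to the generated hadoop-env.sh by hand.
# `hbase classpath` prints the jars the HBase client needs; the guard
# keeps the file harmless on hosts without the hbase launcher.
if command -v hbase >/dev/null 2>&1; then
    export HADOOP_CLASSPATH="$(hbase classpath)${HADOOP_CLASSPATH:+:$HADOOP_CLASSPATH}"
fi
```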

On Fri, Dec 28, 2012 at 8:31 AM, Mark Grover wrote:
Moving to scm-users@cloudera.org (bcc: cdh-user) since we are getting into
Cloudera Manager specifics.

Perhaps, there are some symlinks/alternatives that point to the appropriate
conf directory?

Cloudera Manager will manage the hbase-site.xml for you. There is most
likely a place in the Manager UI where you can change the configuration.
That's one of the things the manager is there for: managing configuration.

I will let the people of the Cloudera Manager list confirm.


On Fri, Dec 28, 2012 at 8:17 AM, Nicolas Maillard wrote:
Thank you, Mark, for a quick and precise answer.
Since I have installed through Cloudera Manager, I have a couple of
directories available in /etc/hadoop.

All of these have a hadoop-env.sh. Am I right in using the "conf"
directory, or should I use conf.cloudera.mapreduce1?

As well, in /etc/hbase/conf
the hbase-site.xml file is empty. Where else can the hbase-site.xml file be?

On Fri, Dec 28, 2012 at 5:02 PM, Mark Grover wrote:

Great question! And, the answer is at

On Fri, Dec 28, 2012 at 7:54 AM, Nicolas Maillard wrote:
Hi everyone

I have written my first HBase MapReduce task, essentially a test to
load a file from HDFS and fill an HBase table with it, much like ImportTsv.
I would now like to run my task, but I am confused as to how to make
Hadoop aware of HBase.
Of course I get NoClassDefFoundError right now when I try
hadoop jar myjar.jar class

What is the best practice for this issue?
Should I open up hadoop-env.sh and use HADOOP_CLASSPATH to add the
HBase libs and files? Would I have to do this on every slave?
Should I copy the libs and files I need into the Hadoop lib directory?
Should I set the classpath in a shell file before I run my command line?
Should I export the whole classpath in the bashrc?

All of the above seem awkward and would need to be replicated on the
slaves as well as the master. So is there insight on what the best practice is?
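For what it's worth, one common low-touch pattern (a sketch; the jar and class names are placeholders, not from this thread) is to set the classpath only in the submitting shell rather than editing files on every node:

```shell
# Wrapper that puts the HBase client jars on the classpath for a single
# job submission only, so nothing has to change on the cluster nodes.
# Usage (placeholder names): submit_with_hbase_cp myjar.jar com.example.MyJob
submit_with_hbase_cp() {
    HADOOP_CLASSPATH="$(hbase classpath)" hadoop jar "$@"
}
```

If the HBase jars are also bundled under lib/ inside the job jar, the map and reduce tasks should pick them up without any per-slave changes.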





Discussion Overview
group: scm-users
posted: Dec 28, '12 at 4:32p
active: Dec 28, '12 at 6:15p


