Moving to scm-users@cloudera.org (bcc: cdh-user) since we are getting into
Cloudera Manager specifics.

Perhaps there are some symlinks/alternatives that point to the appropriate
conf directory?

Cloudera Manager will manage the hbase-site.xml for you. There is most
likely a place in the Manager UI where you can change the configuration.
That's one of the things the manager is there for - managing
configuration. :-)

I will let the people on the Cloudera Manager list confirm.

Mark
On Fri, Dec 28, 2012 at 8:17 AM, Nicolas Maillard wrote:

Thank you, Mark, for a quick and precise answer.
Since I have installed through Cloudera Manager, I have a couple of
directories available in /etc/hadoop:
conf
conf.cloudera.hdfs1
conf.cloudera.mapreduce1

All of these have a hadoop-env.sh. Am I right in using the "conf"
directory, or should I use conf.cloudera.mapreduce1?

As well, in /etc/hbase/conf the hbase-site.xml file is empty. Where else
can the hbase-site.xml file be located?

regards
On Fri, Dec 28, 2012 at 5:02 PM, Mark Grover wrote:

Nicolas,
Great question! And the answer is at
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapred/package-summary.html#classpath
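In short, the recipe on that page is to let the hbase script compute the
HBase classpath and hand it to hadoop for that one invocation, so nothing
needs to change on the slaves. A minimal sketch, reusing the placeholder
jar and class names from the command quoted below:

    # Prepend the HBase jars and conf dir to the client-side classpath
    # for this run only. If the job driver uses TableMapReduceUtil, the
    # HBase jars are also shipped to the tasks via the distributed cache.
    HADOOP_CLASSPATH=$(hbase classpath) hadoop jar myjar.jar class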

On Fri, Dec 28, 2012 at 7:54 AM, Nicolas Maillard <
nicolas.maillard@fifty-five.com> wrote:
Hi everyone

I have written my first HBase MapReduce task, essentially a test to
load a file from HDFS and fill an HBase table with it, much like ImportTsv.
I would now like to run my task, but I am confused as to how to make
Hadoop aware of HBase.
Of course, I get NoClassDefFoundError errors right now when I try
hadoop jar myjar.jar class

What is the best practice for this issue?
Should I open up hadoop-env.sh and use HADOOP_CLASSPATH to add the
HBase libs and files? I would have to do this on every slave.
Should I copy the libs and files I need into the Hadoop lib directory?
Should I set the classpath in a shell file before I run my command line?
Should I export the whole classpath in .bashrc?

All of the above seem weird and would need to be replicated on the
slaves as well as the master. So is there insight on what the best
practice is?

regards




  • Herman Chen at Dec 28, 2012 at 5:15 pm
    Nicolas,

    Cloudera Manager helps manage Hadoop and HBase client configuration by
    deploying them under /etc/hadoop/conf and /etc/hbase/conf through
    alternatives, so those two links should point to the correct dir after
    you run the "Deploy Client Configuration" command in CM. Note that
    only hosts with a Gateway role are included in the client
    configurations deployment.

    If the hbase-site.xml under /etc/hbase/conf is empty, CM most likely
    has not deployed the HBase client configs after you added the HBase
    service, so you'd have to run the command manually under the Actions
    menu.

    As for tweaking hadoop-env.sh to include the HBase classpath, CM
    currently does not support customization of the generated
    hadoop-env.sh, so you'd have to make those changes manually.
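    A hedged sketch of that manual change, appended to the generated
    /etc/hadoop/conf/hadoop-env.sh (it has to be re-applied after every
    "Deploy Client Configuration" run, since CM regenerates the file):

        # Manually appended; CM will drop this on the next client config
        # deploy. Puts the HBase jars and conf dir on the Hadoop classpath.
        export HADOOP_CLASSPATH="$(hbase classpath):${HADOOP_CLASSPATH}"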

    Herman
  • bc Wong at Dec 28, 2012 at 6:15 pm


    On Fri, Dec 28, 2012 at 8:17 AM, Nicolas Maillard <
    nicolas.maillard@fifty-five.com> wrote:
    Thank you, Mark, for a quick and precise answer.
    Since I have installed through Cloudera Manager, I have a couple of
    directories available in /etc/hadoop:
    conf
    conf.cloudera.hdfs1
    conf.cloudera.mapreduce1

    All of these have a hadoop-env.sh. Am I right in using the "conf"
    directory, or should I use conf.cloudera.mapreduce1?

    If you inspect them, I'd bet that /etc/hadoop/conf points into
    alternatives, which points back to /etc/hadoop/conf.cloudera.mapreduce1.
    In general, client apps should use /etc/hadoop/conf. And as the admin,
    you (with CM) manage the /etc/alternatives entries.
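    You can check that chain by hand; the alternatives entry is typically
    named hadoop-conf on CDH, and the output below is illustrative for the
    mapreduce1 case:

        # Follow the symlink chain (output illustrative):
        $ readlink /etc/hadoop/conf
        /etc/alternatives/hadoop-conf
        $ readlink /etc/alternatives/hadoop-conf
        /etc/hadoop/conf.cloudera.mapreduce1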

    As well, in /etc/hbase/conf the hbase-site.xml file is empty. Where
    else can the hbase-site.xml file be located?

    Try deploying the HBase client config in CM. Or you can just deploy
    config at the cluster level. CM will auto-deploy the HBase config to
    any host that participates in the HBase service (Master, RS, or
    Gateway).

    Cheers,
    bc


