How to Setup Hbase in 10 minutes
I had a somewhat difficult time figuring out how to get hbase started.
In the end, it was pretty simple. Here are the steps (a condensed
command transcript follows the list):

1. Download hadoop from svn, untar it into a directory, say ~/hadooptrunk,
and compile it with ant.
2. Move the built hadoop-xx directory to where you want to run it,
say ~/hadoop
3. Set the hadoop tmp directory in hadoop-site.xml (the defaults for all
other variables should be fine)
4. Copy scripts from ~/hadooptrunk/src/contrib/hbase/bin to
~/hadoop/src/contrib/hbase/bin
5. Format the hadoop dfs with ~/hadoop/bin/hadoop namenode -format
6. Start the dfs with ~/hadoop/bin/start-dfs.sh (logs are
viewable in ~/hadoop/logs by default; mapreduce is not needed for hbase)
7. Go to the hbase directory ~/hadoop/src/contrib/hbase
8. Hbase default values are fine for now; start hbase with
~/hadoop/src/contrib/hbase/bin/start-hbase.sh (logs are viewable
in ~/hadoop/logs by default)
9. Enter hbase shell with ~/hadoop/src/contrib/hbase/bin/hbase shell
10. Have fun with Hbase
11. Stop the hbase servers with
~/hadoop/src/contrib/hbase/bin/stop-hbase.sh. Wait until the
servers are finished stopping.
12. Stop the hadoop dfs with ~/hadoop/bin/stop-dfs.sh
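
For reference, the whole walkthrough condenses to a transcript roughly
like the following (a sketch, not verbatim: hadoop-xx stands for
whatever directory name your build produces, and /home/you/hadoop-tmp
is a placeholder for your own tmp location):

# steps 1-2: build from trunk and move the result into place
cd ~/hadooptrunk && ant package
mv build/hadoop-xx ~/hadoop

# step 3: point hadoop.tmp.dir somewhere with space, leave the rest at defaults
cat > ~/hadoop/conf/hadoop-site.xml <<'EOF'
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/you/hadoop-tmp</value>
  </property>
</configuration>
EOF

# step 4: bring the hbase scripts along
mkdir -p ~/hadoop/src/contrib/hbase/bin
cp ~/hadooptrunk/src/contrib/hbase/bin/* ~/hadoop/src/contrib/hbase/bin

# steps 5-6: format and start the dfs (mapreduce is not needed for hbase)
~/hadoop/bin/hadoop namenode -format
~/hadoop/bin/start-dfs.sh

# steps 7-9: start hbase and open a shell
cd ~/hadoop/src/contrib/hbase
bin/start-hbase.sh
bin/hbase shell

# steps 11-12: shut down hbase first, then the dfs
bin/stop-hbase.sh
~/hadoop/bin/stop-dfs.sh

Once in the shell, help; should list the available commands (the shell
of this vintage uses a semicolon-terminated, HQL-style syntax).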

Hope this helps.

Dennis Kubes


  • Andrzej Bialecki at Oct 22, 2007 at 3:13 pm

    Dennis Kubes wrote:
    [... setup steps snipped ...]

    Did you try to run it with LocalFS / Cygwin, and if so, did you notice
    any peculiarities? I tried this once; at first the start-hbase.sh
    script wouldn't work (missing log files? it looked like some variables
    in paths were expanded the wrong way), and then when I started the
    master and a regionserver by hand, it complained about missing map
    files and all requests timed out ... I gave up after that and moved
    to HDFS.

    --
    Best regards,
    Andrzej Bialecki <><
     ___. ___ ___ ___ _ _   __________________________________
    [__ || __|__/|__||\/|  Information Retrieval, Semantic Web
    ___|||__||  \|  ||  |  Embedded Unix, System Integration
    http://www.sigram.com  Contact: info at sigram dot com
  • Holger Stenzhorn at Oct 30, 2007 at 12:36 pm
    Hi,

    I looked into the issue just now and found the solution to make it work.
    ...and I hope this fix will enter the Subversion repository quickly! :-)

    So, to make HBase run on Cygwin you only need to change the following lines
    in the file hbase-daemon.sh:

    export HADOOP_LOGFILE=hbase-$HADOOP_IDENT_STRING-$command-`hostname`.log
    and
    log=$HADOOP_LOG_DIR/hbase-$HADOOP_IDENT_STRING-$command-`hostname`.out

    to

    export HADOOP_LOGFILE=hbase-$HADOOP_IDENT_STRING-$command-$HOSTNAME.log
    and
    log=$HADOOP_LOG_DIR/hbase-$HADOOP_IDENT_STRING-$command-$HOSTNAME.out

    This fix is exactly the same as done for hadoop-daemon.sh (and introduced
    into the Subversion repository already).
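
    If you would rather script the change than edit the file by hand, a
    one-liner along these lines should apply both substitutions (a sketch;
    it assumes GNU sed, keeps a .bak copy of the original, and takes the
    path from the setup above):

    sed -i.bak 's/`hostname`/$HOSTNAME/g' ~/hadoop/src/contrib/hbase/bin/hbase-daemon.sh

    The single quotes matter: they keep the shell from running the
    backticked hostname and from expanding $HOSTNAME before sed sees them.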

    Cheers,
    Holger



    Andrzej Bialecki wrote:
    [... quoted message snipped ...]

    --
    View this message in context: http://www.nabble.com/How-to-Setup-Hbase-in-10-mintues-tf4668631.html#a13487605
    Sent from the Hadoop Users mailing list archive at Nabble.com.
  • Doug Cutting at Oct 30, 2007 at 4:58 pm

    Holger Stenzhorn wrote:
    This fix is exactly the same as done for hadoop-daemon.sh (and introduced
    into the Subversion repository already).
    Which raises the question: could HBase use hadoop-daemon.sh directly? If
    not, could hadoop-daemon.sh be modified to support HBase? Keeping two
    slightly different versions of something around makes maintenance painful.

    Doug
  • Jim Kellerman at Oct 30, 2007 at 5:26 pm
    They are indeed quite similar but do have some significant differences. Some
    of the commands take different arguments, and each script runs some commands
    that the other does not.

    If HBase were a part of hadoop proper instead of a contrib project (like
    record io, which was moved from contrib into the main hadoop tree), I would
    be more inclined to merge the scripts, since they would have more in common.

    For now, I think keeping them separate is probably the right thing to do.

    ---
    Jim Kellerman

    -----Original Message-----
    From: Doug Cutting
    Sent: Tuesday, October 30, 2007 9:53 AM
    To: hadoop-user@lucene.apache.org
    Subject: Re: How to Setup Hbase in 10 minutes

    [... quoted message snipped ...]
  • Michael Stack at Oct 23, 2007 at 7:06 am
    Hope you don't mind my saving the below into the wiki here:
    http://wiki.apache.org/lucene-hadoop/Hbase/10Minutes.
    St.Ack

    Dennis Kubes wrote:
    [... setup steps snipped ...]
  • Jim Kellerman at Oct 26, 2007 at 7:02 pm

    Andrzej Bialecki wrote:
    Did you try to run it with LocalFS / Cygwin, and if so, did you
    notice any peculiarities? I tried this once; at first the
    start-hbase.sh script wouldn't work (missing log files? it
    looked like some variables in paths were expanded the wrong
    way), and then when I started the master and a regionserver
    by hand, it complained about missing map files and all
    requests timed out ... I gave up after that and moved
    to HDFS.
    Andrzej,

    This does now work on trunk since HADOOP-2084 was committed.
    I have updated the 10minutes page
    http://wiki.apache.org/lucene-hadoop/Hbase/10Minutes

    so that it takes you through all the steps you need whether you
    want to run on the local file system or on top of HDFS.
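
    For anyone without the wiki handy: the local-filesystem variant boils
    down to pointing HBase's root directory at a file: URI instead of an
    hdfs: one. A minimal sketch, assuming the hbase.rootdir property and
    the contrib conf layout (check hbase-default.xml in your checkout for
    the exact property name and default); in
    ~/hadoop/src/contrib/hbase/conf/hbase-site.xml:

    <configuration>
      <property>
        <name>hbase.rootdir</name>
        <value>file:///home/you/hbase</value>
      </property>
    </configuration>

    With something like that in place, the dfs steps (5, 6 and 12) fall away.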

    ---
    Jim Kellerman, Senior Engineer; Powerset
    jim@powerset.com
