Grokbase Groups HBase user July 2011
FAQ
Hello,


I have a dataset that is several terabytes in size. I would like to query
this data with HBase using SQL. Would I need to set up MapReduce to use
HBase? Currently the data is stored in HDFS, and I am using `hdfs -cat` to
retrieve the data and pipe it into stdin.
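
The workflow described here can be sketched with the standard Hadoop CLI; the path and downstream filter below are hypothetical placeholders, and `hdfs -cat` is assumed to be shorthand for `hadoop fs -cat`:

```shell
# Stream the dataset out of HDFS and process it on stdin.
# /data/mydataset and my_filter are placeholder names.
hadoop fs -cat /data/mydataset/part-* | my_filter > results.txt
```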



--
--- Get your facts first, then you can distort them as you please.--
  • Doug Meil at Jul 12, 2011 at 12:10 pm
    Hi there-

    I think you might want to start with the HBase book, and specifically this
    entry...

    http://hbase.apache.org/book.html#faq.sql

    ... and then this one...

    http://hbase.apache.org/book.html#datamodel

    ... then this one.

    http://hbase.apache.org/book.html#mapreduce

    MapReduce is not required with HBase, but it is extremely useful.
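
    As a rough illustration, point reads and range scans run directly against a
    live cluster through the HBase shell, with no MapReduce job involved (the
    table name and row keys below are hypothetical):

    ```shell
    # Standard HBase shell commands; 'mytable' and its row keys
    # are placeholders for a real table.
    echo "get 'mytable', 'row-0001'
    scan 'mytable', {STARTROW => 'row-0001', STOPROW => 'row-0100', LIMIT => 10}" | hbase shell
    ```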


  • Rita at Jul 12, 2011 at 12:28 pm
    Thanks for the response.

    Can you give me some use cases for HBase with HDFS only? I am really
    hesitant to implement MapReduce, because we already use Torque for batch
    jobs.


  • Doug Meil at Jul 12, 2011 at 12:56 pm
    Hi there-

    I think you probably want to start with these links...

    http://hadoop.apache.org/hdfs/
    http://hadoop.apache.org/mapreduce/





Discussion Overview
group: user
categories: hbase, hadoop
posted: Jul 12, '11 at 10:07a
active: Jul 12, '11 at 12:56p
posts: 4
users: 2 (Doug Meil: 2 posts, Rita: 2 posts)
website: hbase.apache.org
