provide a 0.20-append tarball?
The latest CDH3 beta includes security changes that neither HBase 0.90 nor trunk currently incorporates. Of course we can help out with clear HBase issues, but what about security exceptions and the like? Do we draw a line? Where?

I've looked over the CDH3B3 installation documentation, but I have not installed it, nor do I presently use it.

If we draw a line, then as an ASF community we should have a fallback option somewhere in ASF-land for the user to try. Vanilla Hadoop is not sufficient for HBase. Therefore, I propose we make a Hadoop 0.20-append tarball available.

Best regards,

- Andy

Problems worthy of attack prove their worth by hitting back.
- Piet Hein (via Tom White)


  • Bill Graham at Dec 22, 2010 at 7:42 am
    Hi Andrew,

    Just to make sure I'm clear, are you saying that HBase 0.90.0 is
    incompatible with CDH3b3 due to the security changes?

    We're just getting going with HBase and have been running 0.90.0rc1 on
    an un-patched version of Hadoop in dev. We were planning on upgrading
    to CDH3b3 to get the sync patches.

    thanks,
    Bill
    On Tue, Dec 21, 2010 at 6:44 PM, Andrew Purtell wrote:


  • Andrew Purtell at Dec 22, 2010 at 10:17 am
    Bill,

    I believe using CDH3B*2* will get you what you want.

    ASF HBase 0.90 will not be compatible with secure versions of Hadoop, which includes CDH3B3.

    Best regards,

    - Andy

    Problems worthy of attack prove their worth by hitting back.
    - Piet Hein (via Tom White)




  • Friso van Vollenhoven at Dec 22, 2010 at 2:48 pm
    If you do not enable any of the security features in CDH3b3 or another flavor of secure Hadoop, it should just work, right?

    Friso
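For reference, "not enabling security" corresponds to leaving the authentication mode at its default. A minimal sketch of making that explicit in core-site.xml; the scratch path is purely illustrative, not from any actual install:

```shell
# Sketch (assumption, not from the thread): secure Hadoop builds fall back
# to plain, pre-security behavior when hadoop.security.authentication is
# "simple", which is the default. Written to a scratch dir for illustration.
conf=$(mktemp -d)
cat > "$conf/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>hadoop.security.authentication</name>
    <!-- "simple" = no Kerberos; this is the out-of-the-box default -->
    <value>simple</value>
  </property>
</configuration>
EOF
grep '<value>simple</value>' "$conf/core-site.xml"
```

With that setting (explicit or defaulted), no Kerberos credentials are required and pre-security clients can talk to the cluster.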




  • Todd Lipcon at Dec 22, 2010 at 5:05 pm
    Hi all,

    The only thing you need to do to run an 0.90 release is to sub out the
    hadoop-core jar with the one from your version of CDH. The issue isn't the
    security stuff (which we now shim on the hbase side, thanks to the Trend
    guys!) but rather a different protocol version number for the data transfer
    protocol.

    Of course, once a final 0.90 release is out, we'll have a CDH release off
    this tree which "just works".

    -Todd
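The jar swap described above can be sketched as a shell session. The directory layout and jar names below are illustrative assumptions, demonstrated in a scratch directory rather than a real install:

```shell
# Simulate an HBase install whose lib/ holds the stock hadoop-core jar,
# plus a cluster Hadoop with its own jar (CDH-style "+NNN" version name).
demo=$(mktemp -d)
mkdir -p "$demo/hbase-0.90.0/lib" "$demo/hadoop"
touch "$demo/hbase-0.90.0/lib/hadoop-core-0.20.2.jar"   # jar HBase ships with
touch "$demo/hadoop/hadoop-core-0.20.2+737.jar"         # the cluster's own jar

# The swap itself: drop the bundled hadoop-core jar and copy in the
# cluster's, so client and cluster agree on the data transfer protocol.
rm -f "$demo"/hbase-0.90.0/lib/hadoop-core-*.jar
cp "$demo"/hadoop/hadoop-core-*.jar "$demo/hbase-0.90.0/lib/"

ls "$demo/hbase-0.90.0/lib/"
```

On a real deployment the same two commands run against the actual HBase lib/ directory and the cluster's Hadoop install, followed by an HBase restart.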


    --
    Todd Lipcon
    Software Engineer, Cloudera
  • Stack at Dec 22, 2010 at 7:14 pm
    (I moved this topic to dev@hbase.apache.org from user).
    On Wed, Dec 22, 2010 at 10:12 AM, Gary Helmling wrote:
    What you have there currently seems accurate.  So I don't think it needs to
    mention HBASE-3194 directly.  Maybe add a note that we try to support Hadoop
    0.20.x variants incorporating security features as well (CDH3B3 and Y!
    Hadoop 0.20.S)?

    Done.
    From a user standpoint it does seem a bit complicated though and doesn't
    help the criticism that we have "too many moving parts".

    ...

    But that would be akin to naming a "preferred" Hadoop distribution for the
    project (we may all have our own preferences anyway), which doesn't seem
    like a place we're at yet.
    Well, the 'preferred' one is the Apache distribution. That's what we
    should bundle, etc.

    That said, I get that you were trying to talk more to our current
    predicament where our setup is more involved than it should be for new
    users because the 'preferred' Hadoop is a source only distribution and
    many will be running something other than the Apache distribution anyway. In
    general we need to do work -- documentation, introspecting shims, etc
    -- to make it so HBase is more 'universal' and can more easily be
    deployed atop any Hadoop whether Cloudera's CDH, newer versions of
    Hadoop (hadoop 0.21, 0.22, 0.20s), or vendor X's implementation of
    HDFS, Ceph, etc.

    I like the idea of linking off to Cloudera doc. -- or any other
    vendor's doc -- from ours for install (Todd?). Or, maybe better would
    be linking to a wiki page that vendors can edit as they wish?

    Let me ask Dhruba what he thinks about making a 0.20-append release
    (He's the release manager). Will also sound out the hadoop pmc since
    they'll have an opinion.

    St.Ack
  • Stack at Dec 22, 2010 at 11:41 pm

    On Wed, Dec 22, 2010 at 11:14 AM, Stack wrote:
    Let me ask Dhruba what he thinks about making a 0.20-append release
    (He's the release manager).  Will also sound out the hadoop pmc since
    they'll have an opinion.
    I asked Dhruba. He's fine w/ a release off tip of branch--0.20-append.

    I just wrote a message to general up on hadoop to gauge what hadoopers
    think of the idea.

    St.Ack
  • Ryan Rawson at Dec 23, 2010 at 10:49 pm
    Looks like the fight does not go well. A lot of HDFS developers are
    concerned that it would divert resources. I'm not sure whose
    resources.

    I hope my 13-15-month comment helped... I've heard "wait for the
    next version" before and I am not interested in it. If that had indeed
    worked, we'd have had stable, working sync/hlog recovery support a
    year ago.

    -ryan
  • Ted Dunning at Dec 23, 2010 at 10:54 pm
    If you have a PMC member who is willing to be release manager, what is the
    beef?
    On Thu, Dec 23, 2010 at 2:49 PM, Ryan Rawson wrote:

  • Andrew Purtell at Dec 24, 2010 at 12:53 am
    Were the majority of those comments actually from HDFS developers?

    I probably helped less, but I have larger concerns about the long-term viability of Hadoop, of which the situation with -append is only one symptom. Somebody needs to say something.


    Best regards,

    - Andy

    Problems worthy of attack prove their worth by hitting back.
    - Piet Hein (via Tom White)


  • Jonathan Gray at Dec 22, 2010 at 10:56 am
    For CDH-specific compatibility issues beyond general "does it work" questions, I think users should be pointed to Cloudera for support.

    For security issues, they probably won't find much help here, so they would need to go to Cloudera, though some questions might be answerable on the HDFS lists since security is in the Apache 0.22 branch.

    We also want to support HDFS 0.22 at some point so being aware of potential issues is at least somewhat relevant for the community at large even if not using CDH.


    The line that I believe should be drawn, and I think most have agreed, is that we should be compatible with and ship with an official Apache Hadoop release. Having something Apache to ship with was one of the motivations of the 20-append branch.

    So yeah, the plan is to do an official 20-append Apache release. Some effort needs to go into it, so when is an open question. I don't know if there's a release manager yet; I think Dhruba might have stated so when we made the branch, but I don't exactly recall. However, there is one very common theme in the 20-append open JIRAs :)

    https://issues.apache.org/jira/secure/IssueNavigator.jspa?reset=true&mode=hide&sorter/order=DESC&sorter/field=priority&resolution=-1&pid=12310942&fixfor=12315103

    There are currently 15 open issues listed. The blocker, at least, is already committed to 20-append and is just open because it's not in trunk yet, so the number may be off a bit.

    JG

  • Ryan Rawson at Dec 22, 2010 at 11:09 am
    There is nothing in the ASF that requires us to depend solely on ASF
    releases, as long as we are good with the licensing guidelines. But
    that is the legal situation, not the community situation, which is of
    course more complex.

    My stand on this is the one I have always taken: I need an HBase that
    works well, with no data loss.

    -ryan

  • Jonathan Gray at Dec 22, 2010 at 11:28 am
    Agreed, it's not an ASF requirement. Just stating my opinion and my sense of others' views.

    An additional motivation and benefit of the Apache 20-append repository and release is a de facto record of what patches are actually required to achieve the requirement you just laid out (works well with HBase and no data loss). CDH goes beyond that.

    What I'd recommend to someone today for production is a different story. There has not been sufficient testing of the 20-append branch at this point, so I would personally recommend CDH3b2 to most users.

    JG

  • Todd Lipcon at Dec 22, 2010 at 5:08 pm

    Yep, I wholeheartedly agree with Jonathan. If something is obviously
    CDH-specific, HBase devs should feel free to punt to the
    cdh-user@cloudera.org list. That's what we've got that list for :)

    But Jonathan's other point is also quite valid - 90+% of what's in CDH is
    going to be in 0.22. So spending a few minutes to look into an issue will
    save us time later, too.



    --
    Todd Lipcon
    Software Engineer, Cloudera
  • Stack at Dec 22, 2010 at 5:17 pm

    On Tue, Dec 21, 2010 at 6:44 PM, Andrew Purtell wrote:
    If we draw a line, then as an ASF community we should have a fallback option somewhere in ASF-land for the user to try. Vanilla Hadoop is not sufficient for HBase. Therefore, I propose we make a Hadoop 0.20-append tarball available.
    What are you thinking, Andrew? I was thinking we should push for a release
    on the 0.20-append branch. I'm not so sure how well that would go
    over in hadoop land. We could take a few soundings. Were you
    thinking something else?
    St.Ack
  • Andrew Purtell at Dec 22, 2010 at 6:34 pm

    From: Stack
    What you thinking Andrew?  I was thinking we should push for a
    release on the 0.20-append branch.
    Yes, this is what I was thinking.

    - Andy


Discussion Overview
group: dev@hbase.apache.org
categories: hbase, hadoop
posted: Dec 22, '10 at 2:45a
active: Dec 24, '10 at 12:53a
posts: 16
users: 8
website: hbase.apache.org


