Todd Lipcon at May 14, 2010 at 4:04 am
On Thu, May 13, 2010 at 8:56 PM, Raghava Mutharaju wrote:
Hello Todd,
Thank you for the reply. On the cluster I use here, Apache Hadoop is installed, so I have to use that; I am trying out HBase on my laptop first. Even if I install CDH2, it won't be useful, because on the cluster I have to work with Apache Hadoop. Since version 0.21 is still in development, shouldn't there be an HDFS-630 patch for the current stable release of Hadoop?
No, it was not considered for release in Hadoop 0.20.X because it breaks
wire compatibility, and though I've done a workaround to avoid issues
stemming from that, it would be unlikely to pass a backport vote.
-Todd
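To make the wire-compatibility point concrete, here is a minimal Java sketch. It is not the actual Hadoop source; the class and method names and signatures are assumptions chosen for illustration. The premise, following the HDFS-630 discussion, is that the fix lets the DFS client tell the namenode which datanodes to exclude when it asks for a new block, which changes the parameter list of an existing client-to-namenode RPC. Roughly speaking, Hadoop 0.20's Writable-based RPC matches a call by method name and parameter types, so an unpatched client talking to a patched namenode (or the reverse) would no longer agree on that signature.

// Illustrative sketch only -- not the real Hadoop code. Names and signatures
// here are assumptions made for the example.
public class WireCompatSketch {

    // Hypothetical shape of the block-allocation call before the patch.
    interface ClientProtocolV1 {
        String addBlock(String src, String clientName);
    }

    // Hypothetical shape after an HDFS-630-style change: the client also sends
    // the datanodes it wants excluded, so the serialized parameter list differs.
    interface ClientProtocolV2 {
        String addBlock(String src, String clientName, String[] excludedNodes);
    }

    public static void main(String[] args) {
        // An old client still issues the two-argument form; a server expecting the
        // three-argument form cannot match that call, and vice versa. A mismatch
        // like this is what "breaks wire compatibility" refers to above.
        System.out.println("addBlock(src, clientName) vs addBlock(src, clientName, excludedNodes[])");
    }
}

This is also why the change could be carried in a patched, tested distribution such as CDH2 while being unlikely to be accepted into a stock 0.20.x point release.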
On Thu, May 13, 2010 at 11:50 PM, Todd Lipcon wrote:
Hi Raghava,
Yes, that's a patch targeted at 0.20, but I'm not certain whether it applies cleanly to the vanilla 0.20 code or not. If you'd like a version of Hadoop that already has it applied and tested, I'd recommend using Cloudera's CDH2.
-Todd
On Thu, May 13, 2010 at 7:59 PM, Raghava Mutharaju <m.vijayaraghava@gmail.com> wrote:
Hello all,
I am trying to install HBase, and while going through the requirements (link below), it asked me to apply the HDFS-630 patch. The latest two patches are for Hadoop 0.21, but I am using version 0.20. For this version, would Todd Lipcon's patch at
https://issues.apache.org/jira/secure/attachment/12430230/hdfs-630-0.20.txt
be the right one to apply? The directory structures have changed from 0.20 to 0.21.
http://hadoop.apache.org/hbase/docs/current/api/overview-summary.html#requirements
Thank you.
Regards,
Raghava.
--
Todd Lipcon
Software Engineer, Cloudera