So it's definitely a case of HDFS not being able to recover the image.
Maybe this is better directed toward another list, but has anyone run into
this, or any suggestions for resolving it?
2011-04-26 17:15:56,898 INFO org.apache.hadoop.hdfs.server.common.Storage:
Recovering storage directory /var/lib/hadoop-0.20/cache/hadoop/dfs/name from
2011-04-26 17:15:56,905 INFO org.apache.hadoop.hdfs.server.common.Storage:
Number of files = 204
2011-04-26 17:15:57,020 INFO org.apache.hadoop.hdfs.server.common.Storage:
Number of files under construction = 0
2011-04-26 17:15:57,021 INFO org.apache.hadoop.hdfs.server.common.Storage:
Image file of size 26833 loaded in 0 seconds.
2011-04-26 17:15:57,257 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Invalid opcode, reached
end of edit log Number of transactions found 528
2011-04-26 17:15:57,258 INFO org.apache.hadoop.hdfs.server.common.Storage:
Edits file /var/lib/hadoop-0.20/cache/hadoop/dfs/name/current/edits of size
1049092 edits # 528 loaded in 0 seconds.
2011-04-26 17:15:57,265 ERROR org.apache.hadoop.hdfs.server.common.Storage:
Unable to save image for /var/lib/hadoop-0.20/cache/hadoop/dfs/name
java.io.IOException: saveLeases found path /hbase/base_tmp/.logs/
but no matching entry in namespace.
2011-04-26 17:15:57,273 WARN org.apache.hadoop.hdfs.server.common.Storage:
FSImage:processIOError: removing storage:
2011-04-26 17:15:57,274 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
FSImage in 1553 msecs
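Before attempting any repair on an image that fails to save like this, a cautious first step (my suggestion, not from the thread) is to snapshot the NameNode metadata directory so nothing is lost if a recovery attempt makes things worse. The backup path below is hypothetical; the name directory comes from the log above:

```shell
# Hedged sketch: back up the NameNode name directory before any repair.
# NAME_DIR is taken from the log above; BACKUP is a hypothetical location.
NAME_DIR=/var/lib/hadoop-0.20/cache/hadoop/dfs/name
BACKUP=/var/tmp/namenode-backup-$(date +%Y%m%d%H%M%S).tgz
if [ -d "$NAME_DIR" ]; then
  # -C keeps the archive paths relative to the parent directory
  tar czf "$BACKUP" -C "$(dirname "$NAME_DIR")" "$(basename "$NAME_DIR")"
  echo "backed up $NAME_DIR to $BACKUP"
else
  echo "name dir not found: $NAME_DIR (adjust NAME_DIR for your layout)"
fi
```

With a copy of `current/fsimage` and `current/edits` safely elsewhere, any later experiment on the edit log is reversible.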
On Tue, Apr 26, 2011 at 5:19 PM, Jonathan Bender
wrote:
Wow, this is more intense than I thought... as soon as I load HBase again,
my HDFS filesystem essentially reverts to an older snapshot. As in, I
don't see any of the changes I had made since that time in the hbase table.
I'm using CDH3 beta 4, which I believe stores its local HBase data in a
different directory--not entirely sure where, though.
I'm not entirely sure what happened to mess this up, but it seems pretty
On Tue, Apr 26, 2011 at 5:07 PM, Himanshu Vashishtha <
Could it be the /tmp/hbase-<userID> directory that is the culprit?
Just a wild guess, though.
On Tue, Apr 26, 2011 at 5:56 PM, Jean-Daniel Cryans wrote:
Unless HBase was running when you wiped that out (and even then), I
don't see how this could happen. Could you match those blocks to the
files using fsck and figure out when the files were created and whether
they were part of the old install?
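The fsck suggestion above can be sketched as follows. The output file path is hypothetical, and the exact report format varies by release, but `-files -blocks -locations` prints each file alongside its block IDs, so a missing block reported at startup can be grepped back to the file that owns it:

```shell
# Hedged sketch: map reported missing blocks under /hbase back to files.
if command -v hadoop >/dev/null 2>&1; then
  hadoop fsck /hbase -files -blocks -locations > /tmp/hbase-fsck.txt
  # Each file's line is followed by its block list; to find the owner of a
  # specific block, search for its ID, e.g.:
  #   grep -B2 'blk_<the-missing-id>' /tmp/hbase-fsck.txt
  grep -c MISSING /tmp/hbase-fsck.txt || echo "no MISSING markers found"
else
  echo "hadoop CLI not on PATH; run this on a cluster node"
fi
```

The file's modification time in the report should show whether it predates the reinstall.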
On Tue, Apr 26, 2011 at 4:53 PM, Jonathan Bender
wrote:
Hi all, I'm running into a strange error that I can't quite figure out.
After wiping my /hbase HDFS directory to do a fresh install, I am getting
"MISSING BLOCKS" in this /hbase directory, which causes HDFS to start up in
safe mode. This doesn't happen until I start my region servers, so I have a
feeling there is some kind of corrupted metadata being loaded from
these region servers.
Is there a graceful way to wipe the HBase directory clean? Any local
directories on the region servers / master / ZK server that I should be
wiping as well?
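For what it's worth, a typical clean-wipe sequence looks something like the sketch below. This is under CDH3-era assumptions, and every path should be verified against your own configuration (hbase.rootdir, zookeeper.znode.parent) before deleting anything:

```shell
# Hedged sketch of a full HBase wipe; verify each path before running.
# 1. Stop HBase on the master and every region server first.
# 2. Remove the HBase root directory in HDFS:
if command -v hadoop >/dev/null 2>&1; then
  hadoop fs -rmr /hbase      # 0.20-era syntax; newer releases: hdfs dfs -rm -r /hbase
else
  echo "hadoop CLI not on PATH; run this on a cluster node"
fi
# 3. In ZooKeeper's zkCli.sh, delete the /hbase znode so no stale region
#    assignments or root-region pointers survive the reinstall.
# 4. Clear any local scratch directories (e.g. the /tmp/hbase-<userID>
#    directory mentioned earlier in the thread) on each node.
```

Deleting only the HDFS directory while leaving the znodes behind is a common way to end up with region servers referencing files that no longer exist.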