HBase dev mailing list, April 2010
Hi,

I set my HBase table families to a relatively small MAX_FILESIZE value
of 10 MB (to get many regions fast), which triggers a

"CompactSplitThread:IOException: Could not complete write to file..."

after some time, with a lost region (lost until a restart of that RS). It
does not happen on every compaction/split though; I estimate about 1 in 20 cases.

I am loading small records at a rate of 100 to 600 per second into a 20-node
cluster (20 x 16 GB RAM, 4 cores). LZO compression. HBase 0.20.3.
dfs.datanode.socket.write.timeout=0, if that matters.
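
For illustration, the table setup looks roughly like this (a sketch against
the 0.20-era client API, not my exact code; "testtable" and "f1" are
placeholders, and LZO is enabled on the family separately):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateSmallRegionTable {
  public static void main(String[] args) throws Exception {
    HTableDescriptor desc = new HTableDescriptor("testtable");
    desc.setMaxFileSize(10 * 1024 * 1024);       // MAX_FILESIZE = 10 MB -> frequent splits
    desc.addFamily(new HColumnDescriptor("f1")); // single small family
    HBaseAdmin admin = new HBaseAdmin(new HBaseConfiguration());
    admin.createTable(desc);
  }
}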

Does anybody have an idea why this underlying HDFS error occurs (as
explained by Todd on the hadoop-common list)?

Thx,
Al

On 06.04.2010 17:43, Todd Lipcon wrote:
Hi Al,

Usually this indicates that the file was renamed or deleted while it was
still being created by the client. Unfortunately it's not the most
descriptive :)
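
You can reproduce the same failure outside HBase, roughly like this (a
sketch; the path is made up):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CompleteFileRace {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path p = new Path("/tmp/race-demo");

    FSDataOutputStream out = fs.create(p); // file is now under construction
    out.write(new byte[1024]);

    fs.delete(p, false);                   // someone else removes it mid-write

    out.close(); // completeFile() on the NameNode fails -> IOException
  }
}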

-Todd
On Tue, Apr 6, 2010 at 5:36 AM, Al Lias wrote:

Hi all,

this warning is written in FSNamesystem.java / completeFileInternal(). It
makes the calling code in NameNode.java throw an IOException.

FSNamesystem.java
...
if (fileBlocks == null) {
  // dir.getFileBlocks(src) returns null when src no longer exists in the namespace
  NameNode.stateChangeLog.warn(
      "DIR* NameSystem.completeFile: "
      + "failed to complete " + src
      + " because dir.getFileBlocks() is null "
      + " and pendingFile is "
      + ((pendingFile == null) ? "null"
          : ("from " + pendingFile.getClientMachine())));
...

What is the meaning of this warning? Any idea what could have gone wrong
in such a case?

(This popped up through HBase, but as this code lives in HDFS, I am asking
on this list.)
...
