Hello,
We occasionally see that, for some reason, files written by our Scribe
client stay open for write even after the writing process has long since
died. Is there anything we can do on the HDFS side to flush and close these
files without having to restart the NameNode?
Is this a problem in 0.20 that is fixed in 0.21?
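For reference, here is a minimal sketch of the kind of thing we were hoping
to run from the HDFS side, assuming the DistributedFileSystem.recoverLease(Path)
API from the append-capable branches and later releases (the class name and
the example path are my assumptions, not code we have in production):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

// Sketch: ask the NameNode to recover the lease on a file whose writer
// has died, so the file gets closed without a NameNode restart.
// Assumes DistributedFileSystem.recoverLease(Path), which I believe is
// available in the append-capable branches and later releases.
public class LeaseRecovery {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path stuckFile = new Path(args[0]); // hypothetical path to a stuck file
    FileSystem fs = FileSystem.get(stuckFile.toUri(), conf);
    if (fs instanceof DistributedFileSystem) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      // Starts lease recovery; returns true once the file is closed.
      // May need to be retried while block recovery is in progress.
      boolean closed = dfs.recoverLease(stuckFile);
      System.out.println(stuckFile + " closed: " + closed);
    }
  }
}

As I understand it, recoverLease only starts recovery; the file may not be
closed until its last block is recovered, so the call would have to be
polled until it returns true.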
-Ayon

Discussion Overview
group: hdfs-user @ hadoop
posted: Apr 20, '11 at 7:06a
active: Apr 20, '11 at 7:06a
posts: 1
users: 1 (Ayon Sinha)
website: hadoop.apache.org...
irc: #hadoop
