[ https://issues.apache.org/jira/browse/HDFS-138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Chansler resolved HDFS-138.
----------------------------------

Resolution: Duplicate

HDFS-457 is a close approximation.
data node process should not die if one dir goes bad
----------------------------------------------------

Key: HDFS-138
URL: https://issues.apache.org/jira/browse/HDFS-138
Project: Hadoop HDFS
Issue Type: Bug
Reporter: Allen Wittenauer

When multiple directories are configured for the data node process to store blocks, it currently exits when one of them is not writable. Instead, it should either ignore that directory completely or attempt to continue reading from it and then mark it unusable if reads fail.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
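
For readers finding this thread later: the behavior requested here is roughly what later HDFS releases expose through the dfs.datanode.failed.volumes.tolerated property, which lets a DataNode keep running after a configurable number of its data volumes fail. The fragment below is a minimal hdfs-site.xml sketch, assuming a Hadoop version that supports that property (it did not exist in the 0.20-era code this report was filed against); the data directory paths are illustrative.

    <property>
      <name>dfs.datanode.data.dir</name>
      <!-- Illustrative paths: blocks are spread across these local directories -->
      <value>/disk1/hdfs/data,/disk2/hdfs/data,/disk3/hdfs/data</value>
    </property>
    <property>
      <name>dfs.datanode.failed.volumes.tolerated</name>
      <!-- Number of volumes allowed to fail before the DataNode shuts down; default is 0 -->
      <value>1</value>
    </property>

With the default of 0, a single bad directory still takes the whole DataNode down, which is the behavior this issue complains about.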
