[ https://issues.apache.org/jira/browse/HDFS-70?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Harsh J resolved HDFS-70.
-------------------------

Resolution: Won't Fix

HADOOP-266 was resolved as Won't Fix, and the DataNode currently works correctly: it analyzes the exception class names it receives and determines whether it has to shut down.

Marking this one as Won't Fix as well, following HADOOP-266 :)

Data node should shutdown when a "critical" error is returned by the name node
------------------------------------------------------------------------------

Key: HDFS-70
URL: https://issues.apache.org/jira/browse/HDFS-70
Project: Hadoop HDFS
Issue Type: Bug
Reporter: Konstantin Shvachko
Assignee: Sameer Paranjpye
Priority: Minor

Currently the data node does not distinguish between critical and non-critical exceptions.
Any exception is treated as a signal to sleep and then try again; see
org.apache.hadoop.dfs.DataNode.run()
This happens because RPC always throws the same RemoteException.
In some cases (such as UnregisteredDatanodeException or IncorrectVersionException) the data
node should shut down rather than retry.
This logic naturally belongs in
org.apache.hadoop.dfs.DataNode.offerService()
but can only be implemented cleanly (without examining the RemoteException.className
field) after HADOOP-266 (2) is fixed.
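The class-name check described above (the workaround the DataNode ended up keeping) can be sketched roughly as follows. The class, field, and method names here are illustrative stand-ins for the mechanism, not the actual Hadoop API:

```java
import java.util.Set;

public class ServiceLoopSketch {
    // Exceptions whose arrival means retrying can never succeed, so the
    // service should shut down instead of sleeping and retrying.
    static final Set<String> CRITICAL = Set.of(
        "org.apache.hadoop.dfs.UnregisteredDatanodeException",
        "org.apache.hadoop.dfs.IncorrectVersionException");

    // Minimal stand-in for the single wrapper exception that RPC throws on
    // the client side; the real server-side class is carried as a string.
    static class RemoteExceptionStub extends Exception {
        final String className;
        RemoteExceptionStub(String className) { this.className = className; }
    }

    /** Returns true if the service should shut down, false if it should retry. */
    static boolean shouldShutdown(RemoteExceptionStub e) {
        return CRITICAL.contains(e.className);
    }

    public static void main(String[] args) {
        System.out.println(shouldShutdown(new RemoteExceptionStub(
            "org.apache.hadoop.dfs.UnregisteredDatanodeException"))); // true
        System.out.println(shouldShutdown(new RemoteExceptionStub(
            "java.io.IOException")));                                 // false
    }
}
```

The alternative the issue originally asked for was a typed exception hierarchy (via HADOOP-266) so the client could catch specific exception classes instead of comparing strings; since that was declined, the string comparison above is essentially what survived.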
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

Group: hdfs-dev @ hadoop
Posted: Jul 17, '11 at 6:27p (active Jul 17, '11 at 6:27p)
Posts: 1 · Users: 1
Website: hadoop.apache.org...
IRC: #hadoop

1 user in discussion: Harsh J (JIRA), 1 post
