Project: Hadoop HDFS
Issue Type: New Feature
Affects Versions: 0.21.0
Reporter: Steve Loughran
In clusters where the datanode disks are hot-swappable, you need to be able to swap out a disk on a live datanode without taking the datanode down. Decommissioning the whole node is overkill: on a system with 4 x 1 TB HDDs giving 3 TB of datanode storage, a decommission and restart will consume up to 6 TB of network bandwidth, whereas if a single disk were swapped there would be only 1 TB of data to recover over the network. More importantly, if that data could be moved to free space on the same machine, the recommissioning could take place at disk speeds, not network speeds.
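The bandwidth arithmetic above can be made concrete with a small sketch (the figures are the 4 x 1 TB example from the description; the variable names are illustrative, not Hadoop APIs):

```python
# Recovery-traffic comparison for a node with 4 x 1 TB disks holding
# 3 TB of block data. All figures in TB.
disk_tb = 1   # capacity of the single disk being swapped
used_tb = 3   # datanode storage in use on the whole node

# Full decommission + recommission: the node's blocks are re-replicated
# off the node and later copied back, so up to 2x the used data crosses
# the network.
full_decommission_tb = 2 * used_tb

# Swapping a single disk: only that disk's blocks need re-replication
# over the network.
single_disk_tb = disk_tb

# Moving the doomed disk's blocks to free space on the same machine:
# no network traffic at all, only local disk I/O.
local_move_network_tb = 0

print(full_decommission_tb, single_disk_tb, local_move_network_tb)  # 6 1 0
```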
# Maybe have a way of decommissioning a single disk on the DN; its blocks could be moved to free space on the node's other disks, or to other machines in the rack.
# There may not be time to use that option, in which case the disk would be pulled with no warning and a new disk inserted.
# The DN needs to detect that a disk has been replaced (or react to an ops request telling it so) and start using the new disk: pushing data back, rebuilding the balance.
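As a thought experiment, the "decommission a single disk" step could look like the sketch below. None of these names exist in Hadoop; they only illustrate the intended sequence: drain the doomed volume's blocks onto the other local disks where free space permits, and fall back to network re-replication for whatever does not fit.

```python
# Hypothetical single-volume drain, preferring local disks over the
# network. Volume and drain_volume are illustrative, not HDFS APIs.

class Volume:
    def __init__(self, name, capacity_tb, used_tb):
        self.name = name
        self.capacity_tb = capacity_tb
        self.used_tb = used_tb

    @property
    def free_tb(self):
        return self.capacity_tb - self.used_tb


def drain_volume(doomed, other_volumes):
    """Move the doomed volume's blocks, preferring local free space.

    Returns (locally_moved_tb, network_moved_tb)."""
    to_move = doomed.used_tb
    moved_locally = 0.0
    for vol in other_volumes:
        take = min(vol.free_tb, to_move)
        vol.used_tb += take
        moved_locally += take
        to_move -= take
        if to_move <= 0:
            break
    # Whatever does not fit locally must be re-replicated from other
    # datanodes, i.e. it costs network bandwidth.
    doomed.used_tb = 0
    return moved_locally, to_move


# 4 x 1 TB disks, each 75% full: the other three disks have exactly
# enough free space to absorb the doomed disk locally.
volumes = [Volume("/data/%d" % i, 1.0, 0.75) for i in range(4)]
local, network = drain_volume(volumes[0], volumes[1:])
print(local, network)  # 0.75 0.0
```

With fuller disks the same sketch degrades gracefully: the remainder that cannot be placed locally is exactly the traffic the cluster must re-replicate over the network.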
To complicate the process, assume there is a live TT on the node, running jobs against the data. The TT would probably need to be paused while the work takes place, and any ongoing work handled somehow. Halting the TT and restarting it after the replacement disk goes in is probably simplest.
The more disks you add to a node, the more pressing this scenario becomes.