System: each slave in the cluster has three HDFS data dirs, dataone, datatwo, and datathree
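
For reference, the dirs are listed comma-separated in dfs.data.dir; something like this on each slave (the paths here are stand-ins for our real mount points):

    $ grep -A1 dfs.data.dir conf/hdfs-site.xml
      <name>dfs.data.dir</name>
      <value>/dataone,/datatwo,/datathree</value>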

I recently had an issue where I lost a slave, which resulted in a large number of under-replicated blocks.
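
I was watching the count via fsck; its summary includes an under-replicated line:

    # fsck summary includes a "Under-replicated blocks: N" line
    hadoop fsck / | grep -i under-replicated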

Re-replication was quite slow to pick up, so I thought running the Hadoop balancer would help.

This seemed to exacerbate the situation, so I killed the balancer.
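
For the record, this is roughly how I ran and then stopped it (the threshold is just the value I happened to pick):

    # rebalance until each datanode is within 10% of average utilization
    hadoop balancer -threshold 10

    # stop it again (or just Ctrl-C the foreground process)
    bin/stop-balancer.sh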

Hadoop then proceeded to write all new data to dataone on each slave. It would fill dataone to 100%, then move on to the next slave in sequence; datatwo and datathree were completely ignored.

DFS showed <10% free and was dropping fast.
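
(That number is from dfsadmin, which reports cluster-wide and per-datanode capacity/usage:

    hadoop dfsadmin -report

)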

I ended up restarting the entire cluster (DFS and MapReduce) and things went back to normal (writing to all three data dirs).
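
The restart was just the stock scripts, run from HADOOP_HOME on the master (assuming the standard script layout):

    bin/stop-mapred.sh && bin/stop-dfs.sh
    bin/start-dfs.sh && bin/start-mapred.sh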

Has anyone experienced this, or does anyone have an idea why it would happen?

Thanks for the help.

Sent from my iPhone
