Are you seeing any exceptions because of the disk being at 99% capacity?

Hadoop should do something sane here and write new data to the disk with
more capacity. That said, it is ideal to keep the disks balanced. As far
as I know, there is no way to balance an individual DataNode's hard
drives (Hadoop round-robins across a node's data directories when
writing new blocks).
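
For what it's worth, a quick way to see how far apart the volumes are is to check the usage percentage of each data directory. This is a minimal sketch; /mnt and /mnt2 are the mounts from the original mail, so substitute whatever your dfs.data.dir actually lists:

```shell
#!/bin/sh
# Print "<dir> <use%>" for each data directory given as an argument.
check_usage() {
  for d in "$@"; do
    # df -P emits one portable-format line per filesystem;
    # column 5 is the "Use%" figure (strip the trailing %).
    pct=$(df -P "$d" | awk 'NR==2 {sub(/%/, "", $5); print $5}')
    echo "$d $pct"
  done
}

# On the machine from the thread this would be:
#   check_usage /mnt /mnt2
check_usage /
```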

Alex
On Mon, Jun 22, 2009 at 10:12 AM, Kris Jirapinyo wrote:

Hi all,
How does one handle a mount running out of space for HDFS? We have two
disks mounted on /mnt and /mnt2 respectively on one of the machines used
for HDFS, and /mnt is at 99% while /mnt2 is at 30%. Is there a way to
tell the machine to balance itself out? I know you can balance the
cluster using start-balancer.sh, but I don't think that will tell an
individual machine to balance itself out. Our "hack" right now would be
just to delete the data on /mnt; since we have 3x replication, we should
be OK. But I'd prefer not to do that. Any thoughts?
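
The two options in play here (the cluster-wide balancer and a manual per-node shuffle, a gentler variant of the delete hack) could be sketched roughly as below. This assumes a Hadoop 0.x layout; the data-directory paths are guesses, so verify them against your own dfs.data.dir setting before touching anything:

```shell
#!/bin/sh
# Option 1: cluster-wide balancing. This moves blocks between
# DataNodes, not between one node's disks. The threshold is the
# allowed deviation, in percent, from average cluster utilization.
#   bin/start-balancer.sh -threshold 10

# Option 2: manual shuffle on the affected node. With the DataNode
# stopped, block files (blk_* and their .meta companions) can be
# moved between data directories; the DataNode rescans its
# directories on startup.
#   bin/hadoop-daemon.sh stop datanode
move_blocks() {
  src="$1"; dst="$2"
  for f in "$src"/blk_*; do
    [ -e "$f" ] || continue   # skip if the glob matched nothing
    mv "$f" "$dst"/
  done
}
# Hypothetical paths -- check dfs.data.dir for the real ones:
#   move_blocks /mnt/hdfs/data/current /mnt2/hdfs/data/current
#   bin/hadoop-daemon.sh start datanode
```

Moving only some of the blk_* files (rather than all of them) would let you split the difference between the two mounts.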

Discussion Overview
group: common-user
category: hadoop
posted: Jun 22, '09 at 5:21p
active: Jun 22, '09 at 10:25p
posts: 8
users: 7
website: hadoop.apache.org...
irc: #hadoop
