I don't know for sure, but running the rebalancer might do this for you.

<http://hadoop.apache.org/common/docs/r0.20.0/hdfs_user_guide.html#Rebalancer>
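
If it helps, the balancer can be started from the command line; a minimal sketch for 0.20 (the 10 is just an example threshold, in percent of disk-usage spread between datanodes):

hadoop balancer -threshold 10

It can also be launched with bin/start-balancer.sh and stopped at any time with bin/stop-balancer.sh.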

Alex
On Thu, Aug 27, 2009 at 9:18 AM, Michael Thomas wrote:

dfs.replication is only used by the client at the time the files are written. Changing this setting will not automatically change the replication level on existing files. To do that, you need to use the Hadoop CLI:

hadoop fs -setrep -R 1 /
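
To confirm the change took effect, a couple of stock commands help: hadoop fsck / prints a summary that includes the over-replicated block count, and hadoop fs -ls shows each file's current replication factor in its second column. For example (the /user/andy path is only a placeholder):

hadoop fsck /
hadoop fs -ls /user/andy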

--Mike


Vladimir Klimontovich wrote:
This will happen automatically.
On Aug 27, 2009, at 6:04 PM, Andy Liu wrote:

I'm running a test Hadoop cluster, which had a dfs.replication value of 3. I'm now running out of disk space, so I've reduced dfs.replication to 1 and restarted my datanodes. Is there a way to free up the over-replicated blocks, or does this happen automatically at some point?

Thanks,
Andy
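
For reference, dfs.replication is a client-side setting read from conf/hdfs-site.xml in 0.20 (hadoop-site.xml in older releases); a minimal snippet, placed inside the configuration element, that sets the default to 1 looks something like:

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>

As Michael notes above, this only applies to files written after the change, so existing files keep the replication level they were created with until you change it explicitly.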
---
Vladimir Klimontovich,
skype: klimontovich
GoogleTalk/Jabber: klimontovich@gmail.com
Cell phone: +7926 890 2349
