<http://hadoop.apache.org/common/docs/r0.20.0/hdfs_user_guide.html#Rebalancer>
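Running the balancer itself is a one-liner; a minimal sketch, assuming the default 10% utilization threshold is acceptable:

  hadoop balancer

You can also launch it in the background with bin/start-balancer.sh and stop it with bin/stop-balancer.sh.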
Alex
On Thu, Aug 27, 2009 at 9:18 AM, Michael Thomas wrote:
dfs.replication is only used by the client at the time the files are
written. Changing this setting will not automatically change the
replication level of existing files. To do that, you need to use the
Hadoop CLI:
hadoop fs -setrep -R 1 /
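To watch the cleanup happen, fsck reports the excess replicas; for example, checking the whole namespace (substitute a narrower path if you only changed part of the tree):

  hadoop fsck /

The summary includes "Average block replication" and "Over-replicated blocks" counts, which should drop toward 1 and 0 respectively as the namenode schedules the excess replicas for deletion.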
--Mike
Vladimir Klimontovich wrote:
This will happen automatically.
Vladimir Klimontovich,
skype: klimontovich
GoogleTalk/Jabber: klimontovich@gmail.com
Cell phone: +7926 890 2349
On Aug 27, 2009, at 6:04 PM, Andy Liu wrote:
I'm running a test Hadoop cluster, which had a dfs.replication value of 3.
I'm now running out of disk space, so I've reduced dfs.replication to 1 and
restarted my datanodes. Is there a way to free up the over-replicated
blocks, or does this happen automatically at some point?
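For reference, a sketch of the property as I changed it, placed in conf/hdfs-site.xml here (file name assumed for the 0.20 config layout; older releases used hadoop-site.xml):

  <property>
    <name>dfs.replication</name>
    <!-- default replication factor used by clients at write time -->
    <value>1</value>
  </property>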
Thanks,
Andy