Tony Li Xu
Sep 19, 2013 at 2:38 pm

Hi Jagan:
Sorry, I clicked "Send" too fast.
In CM, you can set “dfs.replication”, which controls the default block
replication, i.e. the number of replicas to make when a file is created.
This default is used whenever a replication factor is not specified at
creation time, and it defaults to 3. If you change this value, the
replication factor of existing blocks/files won’t be affected. Also, after
you change “dfs.replication”, make sure you redeploy the client
configuration, since this is a client-side property.
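For what it's worth, here is a minimal sketch (my own illustration, not from CM; the path /tmp/example.txt is just a placeholder) of where the client-side dfs.replication value actually kicks in when a file is created through the HDFS Java API:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class DefaultReplicationDemo {
        public static void main(String[] args) throws Exception {
            // The client reads dfs.replication from its local configuration
            // (hdfs-site.xml), which is why the client configs must be redeployed.
            Configuration conf = new Configuration();
            int defaultReplication = conf.getInt("dfs.replication", 3);
            System.out.println("Client-side default replication: " + defaultReplication);

            FileSystem fs = FileSystem.get(conf);

            // A file created without an explicit replication factor picks up the
            // client-side default; files that already exist keep whatever
            // replication factor they were created with.
            FSDataOutputStream out = fs.create(new Path("/tmp/example.txt"));
            out.writeUTF("hello");
            out.close();
        }
    }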
Check out this post if you want to update the replication factor of existing files:
http://tonylixu.blogspot.ca/2013/09/hadoop-play-with-replication-factor.html
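The short version (a minimal sketch of my own, assuming the target path is /data and you want a replication factor of 1; neither is from the question) is to walk the existing files and ask the NameNode to change their replication factor:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.LocatedFileStatus;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.RemoteIterator;

    public class SetReplicationDemo {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Walk every file under /data and request a replication factor of 1.
            // The NameNode then schedules removal of the extra replicas in the
            // background, so the change is not instantaneous.
            RemoteIterator<LocatedFileStatus> files =
                fs.listFiles(new Path("/data"), true);
            while (files.hasNext()) {
                fs.setReplication(files.next().getPath(), (short) 1);
            }
        }
    }

The command-line equivalent is "hadoop fs -setrep -w 1 -R /data"; the -w flag simply waits until the replication change has completed.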
--
Tony
On Thu, Sep 19, 2013 at 10:37 AM, Tony Li Xu wrote:
Hi Jagan:
On Thu, Sep 19, 2013 at 3:16 AM, wrote:
Team,
I installed CDH4.2 using Cloudera Manager 4.5, and the cluster is now
running with a replication factor of 2. I need to lower the replication
factor by 1 using Cloudera Manager.
How will this impact the running cluster? I don't want to lose any
existing HDFS data.
In this scenario, how will the replication factor work? Please do the needful.
Thanks & Regards,
Jagan M
To unsubscribe from this group and stop receiving emails from it, send an
email to scm-users+unsubscribe@cloudera.org.