When a block is severely under replicated at creation time, a request for block replication should be scheduled immediately
---------------------------------------------------------------------------------------------------------------------------

Key: HADOOP-3292
URL: https://issues.apache.org/jira/browse/HADOOP-3292
Project: Hadoop Core
Issue Type: Improvement
Components: dfs
Reporter: Runping Qi



While writing a block to data nodes, if the dfs client detects a bad data node in the write pipeline, it will construct a new
pipeline that excludes the detected bad data node. This implies that when the client finishes writing the block, the number of
replicas for the block may be lower than the intended replication factor. If the ratio of the number of replicas to the intended
replication factor is lower than a certain threshold (say 0.68), then the client should send a request to the name node to
replicate that block immediately.
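
A minimal, self-contained sketch of the proposed client-side check, in plain Java. The 0.68 threshold is the example value from the
description above; the class and method names are illustrative only and are not part of the actual DFSClient API.

    // Illustrative sketch only: neither this class nor this method exists
    // in Hadoop; 0.68 is the example threshold from the issue description.
    public class ReplicationCheck {

        static final double MIN_REPLICA_RATIO = 0.68;

        /**
         * Returns true if the client should ask the name node to
         * re-replicate the block immediately after closing it.
         */
        static boolean needsImmediateReplication(int replicasWritten,
                                                 int targetReplication) {
            return (double) replicasWritten / targetReplication
                    < MIN_REPLICA_RATIO;
        }

        public static void main(String[] args) {
            // Replication factor 3, pipeline lost two data nodes: 1/3 < 0.68.
            System.out.println(needsImmediateReplication(1, 3)); // true
            // Pipeline lost one data node: 2/3 is about 0.67, still below 0.68.
            System.out.println(needsImmediateReplication(2, 3)); // true
            // Full pipeline survived: 3/3 = 1.0, no immediate request needed.
            System.out.println(needsImmediateReplication(3, 3)); // false
        }
    }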


--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

  • dhruba borthakur (JIRA) at Apr 22, 2008 at 12:34 am
    [ https://issues.apache.org/jira/browse/HADOOP-3292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12591138#action_12591138 ]

    dhruba borthakur commented on HADOOP-3292:
    ------------------------------------------

    When a client gets a new block from the namenode, it can tell the namenode the number of replicas of the previous block that it successfully wrote. If this number is smaller than the target replication factor for the file, the namenode can immediately schedule replication for it, as sketched below.
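
    A self-contained sketch of this idea: the block-allocation call carries the replica count of the previous block, and the
    name node queues any shortfall for immediate re-replication. All names here are hypothetical; the real
    ClientProtocol.addBlock() signature does not take a replica count.

        import java.util.ArrayDeque;
        import java.util.Queue;

        // Hypothetical model of the suggested protocol change; none of
        // these names match the real FSNamesystem/ClientProtocol code.
        public class BlockAllocatorSketch {

            // Blocks queued for immediate re-replication, oldest first.
            private final Queue<Long> neededReplications = new ArrayDeque<>();
            private long nextBlockId = 1;

            /**
             * Allocates the next block of a file. The extra arguments
             * report how many replicas of the previous block the client
             * actually wrote, so the name node can react without waiting
             * for the periodic block reports to expose the shortfall.
             */
            public long addBlock(long previousBlockId,
                                 int previousReplicaCount,
                                 int targetReplication) {
                if (previousBlockId > 0
                        && previousReplicaCount < targetReplication) {
                    neededReplications.add(previousBlockId);
                }
                return nextBlockId++;
            }

            public static void main(String[] args) {
                BlockAllocatorSketch nn = new BlockAllocatorSketch();
                long b1 = nn.addBlock(0, 0, 3);  // first block, nothing to report
                long b2 = nn.addBlock(b1, 1, 3); // b1 ended with 1 of 3 replicas
                // b1 is now queued for immediate re-replication.
                System.out.println("queued: " + nn.neededReplications);
            }
        }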
