lohit vijayarenu (JIRA)
May 5, 2008 at 6:52 pm
[ https://issues.apache.org/jira/browse/HADOOP-3058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12594299#action_12594299 ]
lohit vijayarenu commented on HADOOP-3058:
------------------------------------------
Yes, I also agree that this adds more operations. The metrics that are frequently updated are:
- filesTotal, which is updated whenever we add or delete files.
- blocksTotal, which is updated whenever we add or delete blocks.
I guess it should be fine in the above cases.
A few metrics regarding DFS capacity are replaced by updates to global variables. These were updated once per heartbeat, which should be fine.
Another set of operations is done by the ReplicationMonitor in ComputeDatanodeWork(), which should also be fine.
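The update pattern described above can be sketched as plain counters bumped on each add/delete, so that reporting the metric is a cheap read rather than a namespace scan. This is a minimal illustrative sketch, not the attached patch; the class and method names here are hypothetical.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of counter-style metrics: filesTotal and
// blocksTotal are incremented/decremented at the add/delete call
// sites, so reading the metric never walks the namespace.
class NamesystemMetricsSketch {
    private final AtomicLong filesTotal = new AtomicLong();
    private final AtomicLong blocksTotal = new AtomicLong();

    void fileAdded()    { filesTotal.incrementAndGet(); }
    void fileDeleted()  { filesTotal.decrementAndGet(); }
    void blockAdded()   { blocksTotal.incrementAndGet(); }
    void blockDeleted() { blocksTotal.decrementAndGet(); }

    long getFilesTotal()  { return filesTotal.get(); }
    long getBlocksTotal() { return blocksTotal.get(); }
}
```

The extra cost per file or block operation is a single atomic increment, which is why the heartbeat-driven and ReplicationMonitor-driven updates mentioned above should be fine.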
Hadoop DFS to report more replication metrics
---------------------------------------------
Key: HADOOP-3058
URL:
https://issues.apache.org/jira/browse/HADOOP-3058
Project: Hadoop Core
Issue Type: Improvement
Components: dfs, metrics
Reporter: Marco Nicosia
Assignee: lohit vijayarenu
Priority: Minor
Attachments: HADOOP-3058-2.patch, HADOOP-3058.patch
Currently, the namenode and each datanode report 'blocksreplicatedpersec.'
We'd like to be able to graph pending replications vs. the number of under-replicated blocks vs. replications per second, so that we can get a better idea of the replication activity within the DFS.
--
This message is automatically generated by JIRA.