FAQ
Hi all,
I am using Hadoop 0.20.2 (CDH3). The old method of setting the maximum number
of concurrent mappers/reducers per job in code no longer works. I saw a patch
for this, but its current status is "Won't Fix". Is there any update on this?
I am using the Fair Scheduler; should I switch to the Capacity Scheduler
instead?
https://issues.apache.org/jira/browse/HADOOP-5170

Thanks,
chenmin liang
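
For reference, the per-job cap that HADOOP-5170 asked for was never committed,
but newer Fair Scheduler builds can cap concurrent tasks per pool in the
allocation file instead of in job code. A minimal sketch, assuming a
hypothetical pool named "analytics" and that the Fair Scheduler version in use
supports the maxMaps/maxReduces elements:

    <?xml version="1.0"?>
    <!-- fair-scheduler.xml, pointed to by mapred.fairscheduler.allocation.file -->
    <allocations>
      <!-- "analytics" is a placeholder pool name; jobs are assigned to it
           via mapred.fairscheduler.pool (or the configured pool property). -->
      <pool name="analytics">
        <!-- At most 20 map tasks and 10 reduce tasks from this pool run
             concurrently, no matter how many jobs it contains. -->
        <maxMaps>20</maxMaps>
        <maxReduces>10</maxReduces>
        <maxRunningJobs>5</maxRunningJobs>
      </pool>
    </allocations>

Submitting a job into such a pool then bounds its concurrency without any
per-job code.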

  • Arun Murthy at Jul 22, 2011 at 4:50 pm
    Moving to mapreduce-dev@, bcc general@.

    Yes, as described in the bug, the CapacityScheduler supports high-RAM
    jobs, which is a better model for shared multi-tenant clusters. The
    hadoop-0.20.203 release from Apache has the most current and tested
    version of the CapacityScheduler.

    Arun

    Sent from my iPhone
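
To make the high-RAM model concrete: in the 0.20.203 CapacityScheduler, a job
that declares a larger per-task memory requirement is charged multiple slots
per task, which in turn bounds how many of its tasks run at once. A rough
sketch with placeholder values (cluster side in mapred-site.xml, per-job on
submission):

    <!-- mapred-site.xml: one map slot is worth 1024 MB, and a single
         task may claim at most 4096 MB -->
    <property>
      <name>mapred.cluster.map.memory.mb</name>
      <value>1024</value>
    </property>
    <property>
      <name>mapred.cluster.max.map.memory.mb</name>
      <value>4096</value>
    </property>

    <!-- job configuration: each map task asks for 2048 MB, so the
         scheduler charges it two map slots -->
    <property>
      <name>mapred.job.map.memory.mb</name>
      <value>2048</value>
    </property>

Under these settings each map task of the job occupies two slots, so only half
as many of its maps can run concurrently on a given TaskTracker.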
