FAQ
Hi Guys,

We are using CDH 4.0.1, and from our monitoring, the Fair Scheduler does not
seem to handle jobs with more than one lakh (100,000) mappers.

Is this a bug, or something we can fix with configuration tweaking? Please
guide me, guys.

My scenario:

We already have two pools, research and tech, each configured with an equal
weight=1. When a job with more than one lakh mappers is assigned to the tech
pool, the whole cluster ends up being used by the tech job, and at the same
time the research pool gets very few mappers.
Below one lakh mappers everything works well and the Fair Scheduler splits the
resources equally; above one lakh, the Fair Scheduler algorithm does not behave
as expected.
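
A minimal fair-scheduler.xml for this setup would look roughly like the sketch
below (pool names and weights as described above; this is a sketch, not our
exact file):

<?xml version="1.0"?>
<!-- MR1 Fair Scheduler allocation file: two pools with equal weight,
     as described above (a sketch, not the exact production file) -->
<allocations>
  <pool name="research">
    <weight>1.0</weight>
  </pool>
  <pool name="tech">
    <weight>1.0</weight>
  </pool>
</allocations>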

Please guide me on how to debug this.

-Dhanasekaran


Did I learn something today? If not, I wasted it.

--


  • Karthik Kambatla at Feb 8, 2013 at 9:47 pm
    Hi Dhanasekaran,

    Are you guys using MR1? Can you give us more details about your setup -
    the number of machines, the number of map/reduce slots, and maybe share
    your fair-scheduler configuration?
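
    For example, the scheduler-related entries in mapred-site.xml would look
    something like the sketch below (MR1 property names; the allocation-file
    path is just a placeholder, so your values will differ):

    <!-- Tell the JobTracker to use the Fair Scheduler and point it at the
         allocation (pools) file -->
    <property>
      <name>mapred.jobtracker.taskScheduler</name>
      <value>org.apache.hadoop.mapred.FairScheduler</value>
    </property>
    <property>
      <name>mapred.fairscheduler.allocation.file</name>
      <value>/etc/hadoop/conf/fair-scheduler.xml</value>
    </property>
    <!-- When true, larger jobs are given proportionally higher weight,
         which is worth checking with very large (100k+ mapper) jobs -->
    <property>
      <name>mapred.fairscheduler.sizebasedweight</name>
      <value>false</value>
    </property>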

    Thanks
    Karthik

