Hi Pallavi,

This doesn't sound right. Can you visit
http://jobtracker:50030/scheduler?advanced and maybe send a screenshot? And
also upload the allocations.xml file you're using?

It sounds like you've managed to set either userMaxJobsDefault or
maxRunningJobs for that user to 1.
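(For reference, both of those limits live in the Fair Scheduler's allocation file. A minimal sketch of an allocations.xml that would cap a user at one concurrent job, assuming the Hadoop 0.20-era Fair Scheduler format; the user name "pallavi" and the values are only illustrative:)

    <?xml version="1.0"?>
    <allocations>
      <!-- Global default: each user may run at most this many jobs at once -->
      <userMaxJobsDefault>1</userMaxJobsDefault>
      <!-- Per-user override; the name here is an illustrative placeholder -->
      <user name="pallavi">
        <maxRunningJobs>1</maxRunningJobs>
      </user>
    </allocations>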

-Todd
On Thu, Jan 14, 2010 at 9:05 PM, Pallavi Palleti wrote:

Hi all,

I am experimenting with the fair scheduler on a cluster of 10 machines. The
users are given the default values ("0") for the minMaps and minReduces fair
scheduler parameters. When I tried to run two jobs under the same username,
the fair scheduler gave 100% of the fair share to the first job (which needs
2 mappers), and the second job (which needs 10 mappers) stayed in waiting
mode even though the cluster was otherwise completely idle. Allowing these
jobs to run simultaneously would take only 10% of the total available
mappers. However, the second job is not allowed to run until the first job
is over. It would be great if someone could suggest some parameter tuning
that allows efficient utilization of the cluster. By efficient, I mean
allowing jobs to run when the cluster is idle rather than leaving them
waiting. I am not sure whether setting "minMaps, minReduces" for each user
would resolve the issue. Kindly clarify.

Thanks
Pallavi
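
(As a rough sketch of the kind of tuning being asked about, assuming the Hadoop 0.20-era Fair Scheduler allocation format, with illustrative names and values: the per-user running-job limit is what keeps the second job waiting, so raising it is the key change; minMaps/minReduces only guarantee a minimum share and are not required for a job to start.)

    <?xml version="1.0"?>
    <allocations>
      <!-- Allow more than one job per user to run concurrently -->
      <userMaxJobsDefault>5</userMaxJobsDefault>
      <!-- Optional guaranteed share for the pool; name and values are illustrative -->
      <pool name="pallavi">
        <minMaps>10</minMaps>
        <minReduces>2</minReduces>
      </pool>
    </allocations>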
