Yes Devaraj,
From the logs, it looks like it failed to create /jobtracker/jobsInfo.


code snippet:

if (!fs.exists(path)) {
  if (!fs.mkdirs(path, new FsPermission(JOB_STATUS_STORE_DIR_PERMISSION))) {
    throw new IOException(
        "CompletedJobStatusStore mkdirs failed to create " + path.toString());
  }
}

@Arun, can you check that you have the correct permissions, as Devaraj said?
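For what it's worth, /jobtracker/jobsInfo is, as far as I know, the default location of the JobTracker's completed-job status store, so besides fixing permissions you could also relocate or disable that store. A hedged mapred-site.xml sketch, assuming the 0.20/0.21-era property names (the newer mapreduce.jobtracker.persist.jobstatus.* names should also be accepted via the deprecation mapping visible in the log below):

<!-- Sketch only: either switch the completed-job status store off... -->
<property>
  <name>mapred.job.tracker.persist.jobstatus.active</name>
  <value>false</value>
</property>
<!-- ...or keep it and point it at a directory the Mumak user can create
     (/tmp/jobtracker/jobsInfo is just an example path) -->
<property>
  <name>mapred.job.tracker.persist.jobstatus.dir</name>
  <value>/tmp/jobtracker/jobsInfo</value>
</property>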


2011-09-22 15:53:57.598::INFO: Started SelectChannelConnector@0.0.0.0:50030
11/09/22 15:53:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
11/09/22 15:53:57 WARN conf.Configuration: mapred.task.cache.levels is deprecated. Instead, use mapreduce.jobtracker.taskcache.levels
11/09/22 15:53:57 WARN mapred.SimulatorJobTracker: Error starting tracker: java.io.IOException: CompletedJobStatusStore mkdirs failed to create /jobtracker/jobsInfo
at org.apache.hadoop.mapred.CompletedJobStatusStore.<init>(CompletedJobStatusStore.java:83)
at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:4684)
at org.apache.hadoop.mapred.SimulatorJobTracker.<init>(SimulatorJobTracker.java:81)
at org.apache.hadoop.mapred.SimulatorJobTracker.startTracker(SimulatorJobTracker.java:100)
at org.apache.hadoop.mapred.SimulatorEngine.init(SimulatorEngine.java:210)
at org.apache.hadoop.mapred.SimulatorEngine.init(SimulatorEngine.java:184)
at org.apache.hadoop.mapred.SimulatorEngine.run(SimulatorEngine.java:292)
at org.apache.hadoop.mapred.SimulatorEngine.run(SimulatorEngine.java:323)

I cc'ed the MapReduce user mailing list as well.

Regards,
Uma

----- Original Message -----
From: Devaraj K <devaraj.k@huawei.com>
Date: Thursday, September 22, 2011 6:01 pm
Subject: RE: Making Mumak work with capacity scheduler
To: common-user@hadoop.apache.org
Hi Arun,

I have gone through the logs. The Mumak simulator is trying to start the
JobTracker, and the JobTracker is failing to start because it is not able to
create the "/jobtracker/jobsinfo" directory.

I think the directory doesn't have enough permissions. Please check
the permissions, or look for any other reason why it is failing to create the
directory.


Devaraj K


-----Original Message-----
From: arun k
Sent: Thursday, September 22, 2011 3:57 PM
To: common-user@hadoop.apache.org
Subject: Re: Making Mumak work with capacity scheduler

Hi Uma !

You got me right!
Actually, without any patch, I modified mapred-site.xml and
capacity-scheduler.xml appropriately and copied the capacity scheduler jar accordingly.
I am able to see the queues in the JobTracker GUI, but both queues
show the same set of jobs executing.
I ran with the trace and topology files from test/data:
$bin/mumak.sh trace_file topology_file
Is it because I am not submitting jobs to a particular queue?
If so, how can I do it?
I got hadoop-0.22 from
http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.22/
and built all three components, but when I run
arun@arun-Presario-C500-RU914PA-ACJ:~/hadoop22/branch-0.22/mapreduce/src/contrib/mumak$
bin/mumak.sh src/test/data/19-jobs.trace.json.gz src/test/data/19-jobs.topology.json.gz
it gets stuck at some point. The log is here:
<http://pastebin.com/9SNUHLFy>
Thanks,
Arun





On Wed, Sep 21, 2011 at 2:03 PM, Uma Maheswara Rao G 72686 <
maheswara@huawei.com> wrote:
Hello Arun,
If you want to apply MAPREDUCE-1253 on the 0.21 version,
applying the patch directly using commands may not work because of
codebase changes.
So, take the patch and apply the lines to your code base manually.
I am not sure of any other way to do this.

Did I understand your intention wrongly?

Regards,
Uma


----- Original Message -----
From: ArunKumar <arunk786@gmail.com>
Date: Wednesday, September 21, 2011 1:52 pm
Subject: Re: Making Mumak work with capacity scheduler
To: hadoop-user@lucene.apache.org
Hi Uma!

Mumak is not part of the stable versions yet. It comes in from Hadoop-0.21 onwards.
Can you describe in detail "You may need to merge them logically (back port them)"?
I don't get it.

Arun


On Wed, Sep 21, 2011 at 12:07 PM, Uma Maheswara Rao G [via Lucene] <
ml-node+s472066n3354668h87@n3.nabble.com> wrote:
It looks like the patches are based on the 0.22 version, so you cannot
apply them directly.
You may need to merge them logically (back port them).

One more point to note here: the 0.21 version of Hadoop is not a
stable version. Presently the 0.20.x versions are stable.

Regards,
Uma
----- Original Message -----
From: ArunKumar <[hidden email]>
Date: Wednesday, September 21, 2011 12:01 pm
Subject: Re: Making Mumak work with capacity scheduler
To: [hidden email]
Hi Uma !

I am applying the patch to Mumak in the hadoop-0.21 version.


Arun

On Wed, Sep 21, 2011 at 11:55 AM, Uma Maheswara Rao G [via Lucene] <[hidden email]> wrote:
Hello Arun,

On which code base are you trying to apply the patch?
The code base should match for the patch to apply.

Regards,
Uma

----- Original Message -----
From: ArunKumar <[hidden email]>
Date: Wednesday, September 21, 2011 11:33 am
Subject: Making Mumak work with capacity scheduler
To: [hidden email]
Hi !

I have set up Mumak and am able to run it in a terminal and in
Eclipse. I have modified mapred-site.xml and capacity-scheduler.xml as
necessary. I tried to apply the patch MAPREDUCE-1253-20100804.patch as
follows:
{HADOOP_HOME}contrib/mumak$ patch -p0 < patch_file_location
but I get the error
"3 out of 3 HUNK failed."

Thanks,
Arun




  • Arun k at Sep 23, 2011 at 4:42 am
    Hi !

    I have changed the permissions for the Hadoop extract and the /jobstory and
    /history/done directories recursively:
    $chmod -R 777 branch-0.22
    $chmod -R logs
    $chmod -R jobracker
    but I still get the same problem.
    The permissions are like this: <http://pastebin.com/sw3UPM8t>
    The log is here: <http://pastebin.com/CztUPywB>.
    I am able to run it as sudo.

    Arun
  • Arun k at Sep 23, 2011 at 6:28 am
    Hi guys!

    I have run Mumak as sudo and it works fine.
    I am now trying to run the job trace in test/data with the capacity scheduler.
    I have done the following:
    1> Built contrib/capacity-scheduler
    2> Copied the hadoop-*-capacity jar from build/contrib/capacity_scheduler to lib/
    3> Added mapred.jobtracker.taskScheduler and mapred.queue.names in
    mapred-site.xml
    4> In conf/capacity-scheduler.xml, set the property values for the 2 queues
    (see the sketch below):
    mapred.capacity-scheduler.queue.default.capacity 20
    mapred.capacity-scheduler.queue.myqueue2.capacity 80
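    Since the log below reports "No capacity specified for queue default", it looks
    as if these values are not being picked up by the scheduler. For reference, here
    is a hedged sketch of the full property form the two files would normally use
    (queue names and values taken from the steps above; whether mumak.sh actually puts
    conf/ on its classpath is an assumption worth checking):

    <!-- mapred-site.xml (sketch) -->
    <property>
      <name>mapred.jobtracker.taskScheduler</name>
      <value>org.apache.hadoop.mapred.CapacityTaskScheduler</value>
    </property>
    <property>
      <name>mapred.queue.names</name>
      <value>default,myqueue2</value>
    </property>

    <!-- conf/capacity-scheduler.xml (sketch) -->
    <property>
      <name>mapred.capacity-scheduler.queue.default.capacity</name>
      <value>20</value>
    </property>
    <property>
      <name>mapred.capacity-scheduler.queue.myqueue2.capacity</name>
      <value>80</value>
    </property>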

    When I run mumak.sh, I see the following in the console:
    11/09/23 11:51:19 INFO mapred.QueueHierarchyBuilder: No capacity specified
    for queue default
    11/09/23 11:51:19 INFO mapred.QueueHierarchyBuilder: Created a jobQueue
    default and added it as a child to
    11/09/23 11:51:19 INFO mapred.QueueHierarchyBuilder: No capacity specified
    for queue myqueue2
    11/09/23 11:51:19 INFO mapred.QueueHierarchyBuilder: Created a jobQueue
    myqueue2 and added it as a child to
    11/09/23 11:51:19 INFO mapred.AbstractQueue: Total capacity to be
    distributed among the others are 100.0
    11/09/23 11:51:19 INFO mapred.AbstractQueue: Capacity share for un
    configured queue default is 50.0
    11/09/23 11:51:19 INFO mapred.AbstractQueue: Capacity share for un
    configured queue myqueue2 is 50.0
    11/09/23 11:51:19 INFO mapred.CapacityTaskScheduler: Capacity scheduler
    started successfully

    Two questions:

    1> In the web GUI of the JobTracker I see both the queues, but "CAPACITIES ARE
    REFLECTED"
    2> All the jobs by default are submitted to the "default" queue. How can I submit
    jobs to various queues in Mumak?

    Regards,
    Arun
  • Arun k at Sep 23, 2011 at 6:30 am
    Sorry,

    1Q: In the web GUI of the JobTracker I see both the queues, but "CAPACITIES ARE NOT
    REFLECTED".
    2Q: All the jobs by default are submitted to the "default" queue. How can I submit
    jobs to various queues in Mumak?
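    (For reference: on a real, non-simulated cluster the target queue comes from the
    job's own configuration, typically mapred.job.queue.name, e.g. passed as
    -Dmapred.job.queue.name=... at submission time. In Mumak the jobs are replayed
    from the Rumen trace, so the queue recorded per job in the trace is presumably
    what decides where each job lands; that part is an assumption, not verified.
    A minimal sketch of the per-job setting, using the queue name from the earlier mail:

    <!-- Sketch: per-job queue selection in the job configuration -->
    <property>
      <name>mapred.job.queue.name</name>
      <value>myqueue2</value>
    </property>)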


    Regards,
    Arun
