You can use the RunningJob.killTask(TaskAttemptID taskId, boolean shouldFail)
API to kill the task.
Clients can get hold of a RunningJob via the JobClient and then use the
RunningJob to kill the task, etc.
Refer to the API doc: http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/mapred/Ru
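
Something along these lines should work (a minimal sketch against the old
mapred API; the job and attempt IDs here are made-up placeholders):

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;
import org.apache.hadoop.mapred.TaskAttemptID;

public class TaskKiller {
    public static void main(String[] args) throws Exception {
        JobClient client = new JobClient(new JobConf());

        // Look up the running job by its ID (example IDs are invented).
        RunningJob job = client.getJob(JobID.forName("job_201108030000_0001"));

        // The task attempt to kill.
        TaskAttemptID attempt =
            TaskAttemptID.forName("attempt_201108030000_0001_m_000003_0");

        // shouldFail = false kills the attempt without counting it against
        // the task's allowed failures; pass true to mark it failed instead.
        job.killTask(attempt, false);
    }
}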
From: Aleksandr Elbakyan
Sent: Thursday, August 04, 2011 5:10 AM
Subject: Re: Kill Task Programmatically
You can just throw a runtime exception. In that case it will fail :)
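
For example (a rough sketch with the old mapred API; isBadHost() is a
hypothetical stand-in for whatever machine-specific check you have):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class FailFastMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, Text> {

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, Text> output, Reporter reporter)
            throws IOException {
        // isBadHost() stands in for the check that tells you this
        // machine cannot complete the task.
        if (isBadHost()) {
            // An unchecked exception fails this attempt immediately;
            // the framework reschedules it, normally on another node,
            // up to the configured attempt limit.
            throw new RuntimeException("Host cannot run this task; failing fast");
        }
        output.collect(value, value);
    }

    private boolean isBadHost() {
        return false; // placeholder for the real check
    }
}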
--- On Wed, 8/3/11, Adam Shook wrote:
From: Adam Shook <firstname.lastname@example.org>
Subject: Kill Task Programmatically
To: "email@example.com" <firstname.lastname@example.org>
Date: Wednesday, August 3, 2011, 3:33 PM
Is there any way I can programmatically kill or fail a task, preferably from
inside a Mapper or Reducer?
I have a use case where, at any point during a map or reduce task, I can tell
the task won't succeed based solely on the machine it is running on. It is
rare, but I would prefer to kill the task and have Hadoop start it on a
different machine as usual, instead of waiting for the 10-minute default
timeout.
I suppose speculative execution could take care of it, but I would rather not
rely on it if I can kill the task myself.