Jason hadoop
at Apr 27, 2009 at 4:21 pm
You will need to figure out why your task crashed. Check the task logs;
there may be some messages there that give you a hint as to what is going
on.
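As a rough pointer (the exact location depends on your HADOOP_LOG_DIR and
Hadoop version), the per-attempt logs usually sit on the tasktracker node
that ran the attempt, under:

    <HADOOP_LOG_DIR>/userlogs/<task-attempt-id>/stdout
    <HADOOP_LOG_DIR>/userlogs/<task-attempt-id>/stderr
    <HADOOP_LOG_DIR>/userlogs/<task-attempt-id>/syslog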
You can enable keeping of failed task files and then run the task
standalone in the IsolationRunner. Chapter 7 of my book (alpha available)
provides details on this; the hope is that the failure repeats in the
controlled environment.
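As a sketch (the property is keep.failed.task.files; the exact path under
mapred.local.dir varies by version and configuration), you would set, in
the job configuration:

    <property>
      <name>keep.failed.task.files</name>
      <value>true</value>
    </property>

then, on the node where the attempt failed, change into the retained task
working directory and rerun the attempt under the IsolationRunner:

    $ cd <mapred.local.dir>/taskTracker/jobcache/<job-id>/<attempt-id>/work
    $ bin/hadoop org.apache.hadoop.mapred.IsolationRunner ../job.xml

Running the attempt this way puts the failure in a single JVM you control,
so you can attach a debugger or watch the crash directly.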
You could unlimit the core dump size via hadoop-env.sh (*ulimit -c
unlimited*), but that will require that the failed task files be kept, as
the core will be in the task working directory.
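A minimal sketch of that change, assuming the stock conf/hadoop-env.sh is
sourced by the tasktracker on each worker node:

    # conf/hadoop-env.sh
    # let crashed task JVMs write core files into the task working directory
    ulimit -c unlimited

The core file only survives if keep.failed.task.files (above) is set, since
the task working directory is otherwise cleaned up after the failure.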
On Mon, Apr 27, 2009 at 1:30 AM, Rakhi Khatwani wrote:
Thanks Jason,
Is there any way we can avoid this exception?
Thanks,
Raakhi
On Mon, Apr 27, 2009 at 1:20 PM, jason hadoop <jason.hadoop@gmail.com>
wrote:
The JVM had a hard failure and crashed.
On Sun, Apr 26, 2009 at 11:34 PM, Rakhi Khatwani wrote:
Hi,
In one of the map tasks, I get the following exception:
java.io.IOException: Task process exit with nonzero status of 255.
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:424)
what could be the reason?
Thanks,
Raakhi
--
Alpha Chapters of my book on Hadoop are available
http://www.apress.com/book/view/9781430219422