memory at configuration time. When running on EC2 large instances, we can
run two mappers per node in parallel.
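For reference, the relevant settings look roughly like this in our Hadoop
site configuration (the heap size shown is illustrative, not our exact
value):

    <property>
      <name>mapred.tasktracker.map.tasks.maximum</name>
      <value>2</value>
    </property>
    <property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx1024m</value>
    </property>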
The problem is that each mapper only works correctly the first time it is
configured/initialized. If a job runs a mapper a second time, because there
are multiple input files or because one large input file is split into
multiple chunks, we hit the "Task process exit with nonzero status of 134"
error from time to time (exit status 134 is 128 + 6, i.e. the child JVM
dying on SIGABRT). The problem only occurs on the 64-bit JVM; when we move
to a 32-bit platform, the issue goes away. In both cases we use the most
recent Sun Java 1.6 on CentOS.
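To make the pattern concrete, here is a stripped-down sketch of our mapper
(the class name, config key, and buffer size are illustrative, not our real
code):

    import java.io.IOException;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class LargeBufferMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, Text> {

      // Large working buffer allocated once per task JVM.
      private byte[] buffer;

      @Override
      public void configure(JobConf job) {
        // All memory is grabbed here, at configuration time. The first
        // configure() in a task JVM works fine; it is when the mapper is
        // run a second time that the 134 exits show up.
        int bufferMb = job.getInt("our.buffer.mb", 512);  // hypothetical key
        buffer = new byte[bufferMb * 1024 * 1024];
      }

      public void map(LongWritable key, Text value,
                      OutputCollector<Text, Text> output, Reporter reporter)
          throws IOException {
        // Per-record work elided.
      }
    }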
Does anybody know what's wrong?
--
View this message in context: http://www.nabble.com/Weird-%27java.io.IOException%3A-Task-process-exit-with-nonzero-status-of-134%27-problem-tp25107532p25107532.html
Sent from the Hadoop core-dev mailing list archive at Nabble.com.