Weird 'java.io.IOException: Task process exit with nonzero status of 134' problem
We are running a Hadoop map-only job where each mapper takes 1-2 GB of
memory at configuration time. When running on an EC2 large instance, we can
run two mappers per node in parallel. The problem is that each mapper only
works well the first time it is configured/initialized. If a job attempts to
run a mapper a second time, because there are multiple input files or one
large input file is split into multiple chunks, we run into this 134 error
from time to time. The problem occurs only on the 64-bit JVM; when we move
to a 32-bit platform, the issue is gone. In both cases we use the most
recent Sun Java 1.6 on CentOS.

Does anybody know what's wrong?
--
View this message in context: http://www.nabble.com/Weird-%27java.io.IOException%3A-Task-process-exit-with-nonzero-status-of-134%27-problem-tp25107532p25107532.html
Sent from the Hadoop core-dev mailing list archive at Nabble.com.


  • Todd Lipcon at Aug 24, 2009 at 5:20 pm
    Hi Tony,
    Exit status 134 corresponds to 128 + SIGABRT, which usually means the JVM
    crashed hard. So you're looking at either a JVM bug or simply an
    OutOfMemory situation. There are two possibilities that might explain why
    you see the issue on the 64-bit JVM and not the 32-bit one:

    1) There could be a bug present in the 64-bit JVM but not in the 32-bit
    one. Are you running the exact same Java release, or is your 32-bit JVM
    possibly newer?

    2) The 64-bit JVM needs a larger heap than the 32-bit JVM for the same
    program, because object references carry extra overhead in a 64-bit heap.
    There is a "Compressed Object Pointers" option, recently introduced, that
    can reduce this overhead, but it is not enabled by default yet.

    You should be able to look at the stderr output of the failed tasks to
    deduce what's going on.
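
    For example (a rough sketch only; the jar name, driver class, and -Xmx
    value below are placeholders), JVM options can be passed to the task
    children via mapred.child.java.opts, assuming the job driver goes through
    ToolRunner/GenericOptionsParser so that -D is honored:

        # Placeholder jar/class names; size the heap to what your mappers actually need.
        hadoop jar my-job.jar com.example.MyMapOnlyJob \
          -D mapred.child.java.opts="-Xmx1536m -XX:+UseCompressedOops" \
          input/ output/

        # Each attempt's stderr ends up under the task tracker's log directory,
        # e.g. ${HADOOP_LOG_DIR}/userlogs/<attempt_id>/stderr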

    -Todd
  • Indoos at Aug 29, 2009 at 3:04 am
    Hi,
    Todd, right on target!!

    Tony, heap usage can be at least 30% higher on a 64-bit JVM than on a
    32-bit one.
    Increasing the swap size might help you get past the out-of-memory error,
    but it does impact processing speed.

    Todd is referring to the -XX:+UseCompressedOops option. You might have to
    check whether your JVM version supports it; otherwise you may have to
    upgrade. There is some more information about it at
    http://blog.juma.me.uk/2008/10/14/32-bit-or-64-bit-jvm-how-about-a-hybrid/
    Another useful option may be garbage collection tuning along with the
    compressed oops option.
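
    A quick way to check (illustrative commands; Sun added compressed oops to
    Java 6 around update 14, so older 6.0 builds will reject the flag):

        # If the flag is unsupported, the JVM prints "Unrecognized VM option"
        # and exits; otherwise it just prints the version banner.
        java -XX:+UseCompressedOops -version

        # Possible GC-related additions to mapred.child.java.opts; the exact
        # collector and settings should be tuned against your own job:
        #   -XX:+UseCompressedOops -XX:+UseParallelGC -verbose:gc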

    -Sanjay


