FAQ
Is the native build failing on ARM (where gcc doesn't support -m32) a
known issue, and is there a workaround or fix pending?

$ ant -Dcompile.native=true
...
[exec] make  all-am
[exec] make[1]: Entering directory
`/home/trobinson/dev/hadoop-common/build/native/Linux-arm-32'
[exec] /bin/bash ./libtool  --tag=CC   --mode=compile gcc
-DHAVE_CONFIG_H -I. -I/home/trobinson/dev/hadoop-common/src/native
-I/usr/lib/jvm/java-6-openjdk/include
-I/usr/lib/jvm/java-6-openjdk/include/linux
-I/home/trobinson/dev/hadoop-common/src/native/src
-Isrc/org/apache/hadoop/io/compress/zlib
-Isrc/org/apache/hadoop/security -Isrc/org/apache/hadoop/io/nativeio/
-g -Wall -fPIC -O2 -m32 -g -O2 -MT ZlibCompressor.lo -MD -MP -MF
.deps/ZlibCompressor.Tpo -c -o ZlibCompressor.lo `test -f
'src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c' || echo
'/home/trobinson/dev/hadoop-common/src/native/'`src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
[exec] libtool: compile:  gcc -DHAVE_CONFIG_H -I.
-I/home/trobinson/dev/hadoop-common/src/native
-I/usr/lib/jvm/java-6-openjdk/include
-I/usr/lib/jvm/java-6-openjdk/include/linux
-I/home/trobinson/dev/hadoop-common/src/native/src
-Isrc/org/apache/hadoop/io/compress/zlib
-Isrc/org/apache/hadoop/security -Isrc/org/apache/hadoop/io/nativeio/
-g -Wall -fPIC -O2 -m32 -g -O2 -MT ZlibCompressor.lo -MD -MP -MF
.deps/ZlibCompressor.Tpo -c
/home/trobinson/dev/hadoop-common/src/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
-fPIC -DPIC -o .libs/ZlibCompressor.o
[exec] make[1]: Leaving directory
`/home/trobinson/dev/hadoop-common/build/native/Linux-arm-32'
[exec] cc1: error: unrecognized command line option "-m32"
[exec] make[1]: *** [ZlibCompressor.lo] Error 1
[exec] make: *** [all] Error 2
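
A quick way to check whether a given gcc accepts -m32 at all (illustrative
only; on an ARM gcc this should fail with the same "unrecognized command
line option" error shown above):

$ gcc -m32 -E -x c /dev/null -o /dev/null && echo "-m32 supported" || echo "-m32 not supported"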

The closest issue I can find is
https://issues.apache.org/jira/browse/HADOOP-6258 (Native compilation
assumes gcc), as well as other issues regarding where and how to
specify -m32/64. However, there doesn't seem to be a specific issue
covering build failure on systems using gcc where the gcc target does
not support -m32/64 (such as ARM).

I've attached a patch that disables specifying -m$(JVM_DATA_MODEL)
when $host_cpu starts with "arm". (For instance, host_cpu = armv7l for
my system.) To any maintainers on this list, please let me know if
you'd like me to open a new issue and/or attach this patch to an
issue.
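
For reference, this is the gist of what the patch does (a sketch only, not
the literal diff; the exact file and variable names in the real patch may
differ):

# In the native build's configure logic: only pass the JVM data-model
# flag when the target's gcc actually supports it.
case "$host_cpu" in
arm*)
  # e.g. host_cpu = armv7l: ARM gcc rejects -m32/-m64, so add nothing
  ;;
*)
  CFLAGS="$CFLAGS -m${JVM_DATA_MODEL}"
  LDFLAGS="$LDFLAGS -m${JVM_DATA_MODEL}"
  ;;
esac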

Thanks,
Trevor


  • Aaron T. Myers at May 11, 2011 at 12:30 am
    Hi Trevor,

    Thanks a lot for identifying a fix for this issue. Please do file a JIRA
    under the HADOOP project at issues.apache.org. This mailing list doesn't
    deliver attachments, so we can't see the patch you provided. Please attach
    the patch to the JIRA that you file.

    --
    Aaron T. Myers
    Software Engineer, Cloudera


  • Allen Wittenauer at May 11, 2011 at 12:34 am

    On May 10, 2011, at 5:13 PM, Trevor Robinson wrote:

    > Is the native build failing on ARM (where gcc doesn't support -m32) a
    > known issue, and is there a workaround or fix pending?

    That's interesting. I didn't realize there was a gcc that didn't support -m. This seems like an odd thing not to support, but whatever. :)

    > The closest issue I can find is
    > https://issues.apache.org/jira/browse/HADOOP-6258 (Native compilation
    > assumes gcc), as well as other issues regarding where and how to
    > specify -m32/64. However, there doesn't seem to be a specific issue
    > covering build failure on systems using gcc where the gcc target does
    > not support -m32/64 (such as ARM).

    I've got a homegrown patch that basically removes a lot of the GNU-ness from the configure to support Sun's compiler, but I don't think I had to remove -m.... so even that won't help you.

    > I've attached a patch that disables specifying -m$(JVM_DATA_MODEL)
    > when $host_cpu starts with "arm". (For instance, host_cpu = armv7l for
    > my system.) To any maintainers on this list, please let me know if
    > you'd like me to open a new issue and/or attach this patch to an
    > issue.

    Yes, please file a JIRA in HADOOP and attach the patch.

    Thanks!
  • Trevor Robinson at May 11, 2011 at 4:53 pm

    On Tue, May 10, 2011 at 7:34 PM, Allen Wittenauer wrote:
    > That's interesting.  I didn't realize there was a gcc that didn't support -m.  This seems like an odd thing not to support, but whatever. :)

    Agreed. I'm sure ARM will support 64-bit someday, so the "right" fix
    would be to change gcc to make -m32 a no-op for ARM and other
    32-bit-only targets, but I'm assuming that approach would be...
    onerous. ;-)

    > I've got a homegrown patch that basically removes a lot of the GNU-ness from the configure to support Sun's compiler, but I don't think I had to remove -m.... so even that won't help you.

    That sounds valuable, but I take homegrown to mean not suitable for
    general inclusion? My patch is not nearly so ambitious.

    > Yes, please file a JIRA in HADOOP and attach the patch.

    Done: HADOOP-7276. I linked it to your GNU-ness issue (though I'm
    guessing JIRA probably told you that already).

    FYI, I just noticed that Hadoop "one button cluster install" is a goal
    for the Ubuntu ARM Server release for 11.10
    (https://blueprints.launchpad.net/ubuntu/+spec/server-o-arm-server/),
    so expect a few more patches. ;-)

    -Trevor
  • Eli Collins at May 19, 2011 at 6:40 pm
    Hey Trevor,

    Thanks for contributing. Supporting ARM on Hadoop will require a
    number of different changes, right? E.g., given that Hadoop currently
    depends on some Sun-specific classes and requires a Sun-compatible
    JVM, you'll have to work around this dependency somehow; there's not a
    Sun JVM for ARM, right?

    If there's a handful of additional changes, then let's make an umbrella
    JIRA for Hadoop ARM support and make the issues you've already filed
    sub-tasks. You can ping me off-line on how to do that if you want.
    Supporting non-x86 processors and non-gcc compilers is an additional
    maintenance burden on the project, so it would be helpful to have an
    end-game figured out so these patches don't bitrot in the meantime.

    Thanks,
    Eli
  • Trevor Robinson at May 20, 2011 at 11:28 pm
    Hi Eli,
    On Thu, May 19, 2011 at 1:39 PM, Eli Collins wrote:
    > Thanks for contributing. Supporting ARM on Hadoop will require a
    > number of different changes, right? E.g., given that Hadoop currently
    > depends on some Sun-specific classes and requires a Sun-compatible
    > JVM, you'll have to work around this dependency somehow; there's not a
    > Sun JVM for ARM, right?

    Actually, there is a Sun JVM for ARM, and it works quite well:

    http://www.oracle.com/technetwork/java/embedded/downloads/index.html

    Currently, it's just a JRE, so you have to use another JDK for javac,
    etc., but I'm optimistic that we'll see a Sun Java SE JDK for ARM
    servers one of these days, given all the ARM server activity from
    Calxeda [http://www.theregister.co.uk/2011/03/14/calxeda_arm_server/],
    Marvell, and nVidia
    [http://www.channelregister.co.uk/2011/01/05/nvidia_arm_pc_server_chip/].

    With the patches I submitted, Hadoop builds completely and nearly all
    of the Commons and HDFS unit tests pass with OpenJDK on ARM. (Some of
    the Map/Reduce unit tests have some crashes due to a bug in the
    OpenJDK build I'm using.) I need to re-run the unit tests with the Sun
    JRE and see if they pass; other tests/benchmarks have run much faster
    and more reliably with the Sun JRE, so I anticipate better results.
    I've run tests like TestDFSIO with the Sun JRE and have had no
    problems.

    > If there's a handful of additional changes, then let's make an umbrella
    > JIRA for Hadoop ARM support and make the issues you've already filed
    > sub-tasks. You can ping me off-line on how to do that if you want.
    > Supporting non-x86 processors and non-gcc compilers is an additional
    > maintenance burden on the project, so it would be helpful to have an
    > end-game figured out so these patches don't bitrot in the meantime.

    I really don't anticipate any additional changes at this point. No
    Java or C++ code changes have been necessary; it's simply removing
    -m32 from CFLAGS/LDFLAGS and adding ARM to the list of processors in
    apsupport.m4 (which contains lots of other unsupported processors
    anyway). And just to be clear, pretty much everyone uses gcc for
    compilation on ARM, so supporting another compiler is unnecessary for
    this.
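
    Purely to illustrate the shape of the apsupport.m4 change (not the
    literal diff; the variable assignment below is hypothetical and should
    mirror whatever the neighboring branches in that file do), it amounts to
    adding an arm entry to the existing case statement over $host_cpu:

    case "$host_cpu" in
      ...
      arm*)
        CPU_ARCH="arm"
        ;;
      ...
    esac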

    I certainly don't want to increase maintenance burden at this point,
    especially given that data center-grade ARM servers are still in the
    prototype stage. OTOH, these changes seem pretty trivial to me, and
    allow other developers (particularly those evaluating ARM and those
    involved in the Ubuntu ARM Server 11.10 release this fall:
    https://blueprints.launchpad.net/ubuntu/+spec/server-o-arm-server) to
    get Hadoop up and running without having to patch the build.

    I'll follow up offline though, so I can better understand any concerns
    you may still have.

    Thanks,
    Trevor
  • Eli Collins at May 23, 2011 at 3:39 am
    Hey Trevor,

    Thanks for all the info. I took a quick look at HADOOP-7276 and
    HDFS-1920; I haven't gotten a chance for a full review yet, but they
    don't look like they'll be a burden, and if they get Hadoop running on
    ARM, that's great!

    Thanks,
    Eli
  • Bharath Mundlapudi at May 23, 2011 at 3:47 pm
    Adding ARM processor support to Hadoop is great. Reducing power consumption on Hadoop grids is a plus.


    Hi Trevor,

    You mentioned that "other tests/benchmarks have run much faster". That's good to know. Can you please tell us in which areas you are seeing improvements with ARM compared to other platforms? Is this public information?


    -Bharath



  • Trevor Robinson at May 23, 2011 at 5:10 pm
    Hi Bharath,

    Sorry if I was unclear: I meant that tests/benchmarks run much faster
    on the Sun JRE than with OpenJDK, not necessarily faster than on other
    processors. Of course, this is not surprising given that the Sun JRE
    has a full JIT compiler, and OpenJDK for ARM just has the Zero C++
    interpreter.

    There are other, higher-performance plug-in VMs for OpenJDK (CACAO,
    JamVM, Shark), but they currently have stability issues in some Linux
    distributions. I believe that CACAO and JamVM are code-copying JITs,
    so they place a lot of constraints on the compiler used to build them.
    Shark uses the LLVM JIT, which has some serious bugs on non-x86
    processors (including ARM and PPC); I believe these have not been
    fixed yet because effort is focused on building a new JIT engine (the
    "MC JIT") that shares more of the static compilation code:
    http://blog.llvm.org/2010/04/intro-to-llvm-mc-project.html

    All that said, once we have server-grade ARM hardware in the lab, I'll
    certainly be looking for and sharing any performance advantages I can
    find. Given the low-power, scale-out focus of these processors, it's
    unlikely that we'll see higher single-thread performance than a
    power-hungry x86, but we certainly expect better performance per
    watt and per hardware cost.

    Regards,
    Trevor

