Hi, there.

I've run into an odd situation, and I'm wondering if there's a way around
it; I'm trying to use Jackson for some JSON serialization in my program,
and I wrote/unit-tested it to work with Jackson 1.9. Then, in integration
testing, I started to see some weird version incompatibilities and
AbstractMethodErrors. Indeed, some digging revealed that our Hadoop
installation (CDH3b3, incidentally) has the Jackson 1.5.2 JARs in its
$HADOOP_HOME/lib directory which, as I understand it, forms the basis of
the remote JVM classpath.
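For what it's worth, an easy way to confirm what the task JVM actually resolves is to log which JAR ObjectMapper was loaded from. The sketch below is only illustrative (the class name and output wording are made up); the two println lines can just as well be dropped into a Mapper's setup() to probe the remote side:

import org.codehaus.jackson.map.ObjectMapper;

// Illustrative probe: report which JAR the running JVM loaded Jackson from.
public class JacksonVersionProbe {
    public static void main(String[] args) {
        // CodeSource points at the JAR the class actually came from, e.g.
        // $HADOOP_HOME/lib/jackson-mapper-asl-1.5.2.jar vs. the job's fat JAR.
        System.out.println("ObjectMapper loaded from: "
                + ObjectMapper.class.getProtectionDomain().getCodeSource().getLocation());
        // Implementation-Version from the JAR manifest, if the JAR declares one.
        System.out.println("Jackson version: "
                + ObjectMapper.class.getPackage().getImplementationVersion());
    }
}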

So, for now I've rewritten our code to use the 1.5.2 libraries, but it's
ugly and hacky in some places due to Jackson 1.5.2 not having a sensible
TypeFactory or anything like that. I'm wondering, though, if there's a way
to make the remote JVM use *our* versions of the Jackson libraries
(packaged in the fat JAR) instead of the ones that come with Hadoop.
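To give a concrete (if simplified) idea of the kind of 1.5.2-compatible rewrite involved, the sketch below uses an anonymous TypeReference subclass, which 1.5.x already has as far as I can tell, instead of the newer TypeFactory helpers; class and method names here are just illustrative:

import java.util.List;
import java.util.Map;
import org.codehaus.jackson.map.ObjectMapper;
import org.codehaus.jackson.type.TypeReference;

public class LegacyJacksonSketch {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Deserialize a generic structure without 1.9-style TypeFactory helpers:
    // the anonymous TypeReference subclass preserves the full generic signature.
    static Map<String, List<Integer>> parse(String json) throws Exception {
        return MAPPER.readValue(json,
                new TypeReference<Map<String, List<Integer>>>() {});
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parse("{\"a\": [1, 2, 3]}"));  // prints {a=[1, 2, 3]}
    }
}

This only covers the case where the target type is known at compile time; the types that have to be assembled at runtime are the ones that get ugly on 1.5.2.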

And no, in deployment we will not be able to control the cluster ourselves
and rip out the old JARs or replace them with updated ones.

  • David Rosenstrauch at Dec 14, 2011 at 4:05 pm

    On 12/14/2011 08:20 AM, John Armstrong wrote:
    <snip>

    I ran into the same (known) issue. (See:
    https://issues.apache.org/jira/browse/MAPREDUCE-1700)

    Doesn't look like there's a solution yet.

    DR
  • John Armstrong at Dec 14, 2011 at 5:36 pm

    On Wed, 14 Dec 2011 11:04:37 -0500, David Rosenstrauch wrote:
    I ran into the same (known) issue. (See:
    https://issues.apache.org/jira/browse/MAPREDUCE-1700)

    Doesn't look like there's a solution yet.

    Thanks; good to know that I'm actually doing the best I can by writing
    everything to be compatible with 1.5.2.
