FAQ
I've been scratching my head over this one all afternoon. Has anyone seen this exception before?


The query where it's failing is similar to this:


=> (?<- (stdout)
         [?person ?age]
         ([["alice"] ["bob"]] ?person)
         (age ?person ?age))


RESULTS
-----------------------
alice 28
bob 33
-----------------------


The actual code is this:


(defn filter-bots [uscc-src wl-src]
  (let [fields (get-out-fields wl-src)] ;; fields contains "?uscc"
    (<- fields
        (uscc-src ?uscc)
        (wl-src :>> fields)
        (:distinct false))))


I've tried numerous variations of it already. And to add to the mystery, the failing query passes in a local test.




2012-04-10 17:02:00,130 WARN org.apache.hadoop.conf.Configuration: /data3/mapred/local/taskTracker/paullam/jobcache/job_201202271958_419049/job.xml:a attempt to override final parameter: fs.trash.interval; Ignoring.
2012-04-10 17:02:00,401 WARN org.apache.hadoop.conf.Configuration: /data3/mapred/local/taskTracker/paullam/jobcache/job_201202271958_419049/job.xml:a attempt to override final parameter: fs.trash.interval; Ignoring.
2012-04-10 17:02:00,439 WARN org.apache.hadoop.io.compress.snappy.LoadSnappy: Snappy native library is available
2012-04-10 17:02:59,442 WARN org.apache.hadoop.mapred.Child: Error running child
cascading.flow.FlowException: internal error during reducer execution
  at cascading.flow.FlowReducer.reduce(FlowReducer.java:82)
  at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:469)
  at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:417)
  at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:396)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
  at org.apache.hadoop.mapred.Child.main(Child.java:264)
Caused by: java.lang.RuntimeException: Invalid token: :
  at clojure.lang.Util.runtimeException(Util.java:156)
  at clojure.lang.LispReader.interpretToken(LispReader.java:321)
  at clojure.lang.LispReader.read(LispReader.java:206)
  at clojure.lang.RT.readString(RT.java:1681)
  at clojure.core$read_string.invoke(core.clj:3359)
  at carbonite.serializer$clj_read.invoke(serializer.clj:40)
  at clojure.lang.Var.invoke(Var.java:401)
  at carbonite.ClojureReaderSerializer.readObjectData(Unknown Source)
  at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:566)
  at carbonite.serializer$read_map$fn__38.invoke(serializer.clj:86)
  at carbonite.serializer$read_map.invoke(serializer.clj:80)
  at clojure.lang.Var.invoke(Var.java:405)
  at carbonite.ClojureMapSerializer.readObjectData(Unknown Source)
  at com.esotericsoftware.kryo.Serializer.readObject(Serializer.java:61)
  at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:589)
  at com.esotericsoftware.kryo.ObjectBuffer.readObject(ObjectBuffer.java:213)
  at cascading.kryo.KryoDeserializer.deserialize(KryoDeserializer.java:34)
  at cascading.tuple.hadoop.SerializationElementReader.read(SerializationElementReader.java:70)
  at cascading.tuple.TupleInputStream.readType(TupleInputStream.java:174)
  at cascading.tuple.TupleInputStream.getNextElement(TupleInputStream.java:101)
  at cascading.tuple.TupleInputStream.readTuple(TupleInputStream.java:84)
  at cascading.tuple.TupleInputStream.readTuple(TupleInputStream.java:73)
  at cascading.tuple.TupleInputStream.readIndexTuple(TupleInputStream.java:127)
  at cascading.tuple.hadoop.IndexTupleDeserializer.deserialize(IndexTupleDeserializer.java:40)
  at cascading.tuple.hadoop.IndexTupleDeserializer.deserialize(IndexTupleDeserializer.java:28)
  at org.apache.hadoop.mapred.Task$ValuesIterator.readNextValue(Task.java:1161)
  at org.apache.hadoop.mapred.Task$ValuesIterator.next(Task.java:1101)
  at org.apache.hadoop.mapred.ReduceTask$ReduceValuesIterator.moveToNext(ReduceTask.java:246)
  at org.apache.hadoop.mapred.ReduceTask$ReduceValuesIterator.next(ReduceTask.java:242)
  at cascading.pipe.cogroup.CoGroupClosure.build(CoGroupClosure.java:108)
  at cascading.pipe.cogroup.CoGroupClosure.reset(CoGroupClosure.java:99)
  at cascading.pipe.Group.iterateReduceValues(Group.java:993)
  at cascading.flow.stack.GroupReducerStackElement.operateGroup(GroupReducerStackElement.java:70)
  at cascading.flow.stack.GroupReducerStackElement.collect(GroupReducerStackElement.java:58)
  at cascading.flow.stack.FlowReducerStack.reduce(FlowReducerStack.java:169)
  at cascading.flow.FlowReducer.reduce(FlowReducer.java:75)
  ... 7 more


  • Soren Macbeth at Apr 10, 2012 at 4:24 pm
    Can't say for sure, but it looks like something in one of your fields has a ':' character, and Kryo is perhaps getting confused trying to interpret it as a Clojure keyword?

    --
    http://about.me/soren
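    A minimal sketch of the failure mode being described here (a hypothetical repro, not the actual data; it assumes only the `read-string` call that carbonite's `serializer$clj_read` frame in the stack trace points at):

    ```clojure
    ;; The stack trace shows carbonite calling clojure.core/read-string
    ;; during deserialization, so any serialized token the Clojure reader
    ;; rejects fails only at read-back time, in the reducer.
    (read-string ":abc") ;; => :abc (a valid keyword)

    ;; A bare ":" is not a valid token, which matches the
    ;; "Invalid token: :" in the stack trace:
    (try
      (read-string ":")
      (catch RuntimeException e (.getMessage e)))
    ;; => "Invalid token: :"
    ```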

  • Paul Lam at Apr 10, 2012 at 4:46 pm
    I tried naming the in and out vars explicitly, but still no good.


  • Soren Macbeth at Apr 10, 2012 at 5:01 pm
    I didn't mean the names of the vars; I meant the actual value of one of the fields, specifically one of the fields being used in the group-by.

  • Paul Lam at Apr 10, 2012 at 7:29 pm
    You're right! There's one record with a weird format "95kie: 9522e3b1e3".
    I'll investigate tomorrow how a string like that (in a protobuf scheme, so
    not text parsed) caused this problem. But the query is working now!

    Thank you



  • Sam Ritchie at Apr 10, 2012 at 8:29 pm
    This is really interesting. Paul, if you could get me a test case with a
    single tuple, something like

    (??- [[instance]])

    we can figure out if this is a Kryo bug and get it taken care of. Nice
    work, guys!
    --
    Sam Ritchie, Twitter Inc
    703.662.1337
    @sritchie09

    (Too brief? Here's why! http://emailcharter.org)
  • Paul Lam at Apr 13, 2012 at 11:30 am
    Looks as though it wasn't that the group-by field had a weird string. It's
    actually one of the other fields having a bad map value, something like
    {: abc 123, :efg 234} when it should be {:abc 123, :efg 234}. This field
    was parsed from a string into a map during the mapping stage. Somehow it
    made it into the tuple fine, and only blew up at the join, when the value
    is read for the first time.

    So in summary, bad data structures seem to be able to creep through the
    mapping stage until a later pipe where they're read.
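
    One way to guard against this (a hedged sketch, not from the thread):
    validate parsed maps in the mapper so malformed records are dropped or
    flagged before they reach a reduce-side join. The helper name
    safe-parse-map is hypothetical:

    ```clojure
    ;; Hypothetical helper: parse a string into a map, returning nil for
    ;; anything that doesn't read cleanly as a keyword-keyed map. Dropping
    ;; (or logging) these in the mapper avoids the reduce-side
    ;; "Invalid token" blowup when the value is deserialized later.
    (defn safe-parse-map [s]
      (try
        (let [m (binding [*read-eval* false] (read-string s))]
          (when (and (map? m) (every? keyword? (keys m)))
            m))
        (catch RuntimeException _ nil)))

    (safe-parse-map "{:abc 123, :efg 234}")  ;; => {:abc 123, :efg 234}
    (safe-parse-map "{: abc 123, :efg 234}") ;; => nil (reader rejects the bare colon)
    ```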


    On Tuesday, 10 April 2012 21:28:44 UTC+1, Sam Ritchie wrote:

    This is really interesting; Paul, if you could get me a test case with a
    single tuple, something like

    (??- [[instance]])

    we can figure out if this is a Kryo bug and get it taken care of. Nice
    work, guys!
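
    Along those lines, a single-tuple test might look like the following
    sketch (the value is a stand-in modeled on the suspect record mentioned
    later in the thread; requires Cascalog on the classpath). One caveat from
    the thread itself: a local-mode run may still pass, since local flows can
    skip the Kryo round trip that reduce-side joins force on a cluster:

    ```clojure
    ;; Hedged sketch of a minimal single-tuple repro query.
    (use 'cascalog.api)

    (def suspect {:field "95kie: 9522e3b1e3"}) ;; stand-in for the bad record

    (def repro
      (<- [?v]
          ([[suspect]] ?v)
          (:distinct false)))

    ;; ??- executes the query and returns the result tuples in-memory.
    (??- repro)
    ```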
    On Tue, Apr 10, 2012 at 12:29 PM, Paul Lam wrote:

    You're right! There's one record with a weird format "95kie: 9522e3b1e3".
    I'll investigate tomorrow how a string like that (in a protobuf scheme, so
    not text parsed) caused this problem. But the query is working now!

    Thank you


    On Tuesday, 10 April 2012 18:00:56 UTC+1, Soren Macbeth wrote:

    I didn't mean the names of the vars, I meant the actual value of one of
    the fields, specifically one of the fields being used in the group-by.

    --
    http://about.me/soren

    On Tuesday, April 10, 2012 at 9:46 AM, Paul Lam wrote:

    I tried to name the in and out vars explicitly but still no good.


    On Tuesday, 10 April 2012 17:23:53 UTC+1, Soren Macbeth wrote:

    Can't say for sure, but it looks like something in one of your fields
    has a ':' character and Kryo is getting confused trying to interpret it
    as a Clojure keyword, perhaps?

    --
    http://about.me/soren
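
    This hunch is easy to check at a REPL: carbonite's reader-based fallback
    (the clj-read frame in the stack trace above) calls
    clojure.core/read-string on the serialized form, and the Clojure reader
    throws exactly this error on a bare colon. A minimal illustration, not
    taken from the thread:

    ```clojure
    ;; The Clojure reader treats a lone ':' as an invalid token -- the same
    ;; RuntimeException that surfaces in the reducer stack trace.
    (read-string "{:abc 123}")      ;; => {:abc 123}

    (try
      (read-string "{: abc 123}")   ;; bare colon => RuntimeException
      (catch RuntimeException e
        (.getMessage e)))           ;; => "Invalid token: :"
    ```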

    On Tuesday, April 10, 2012 at 9:12 AM, Paul Lam wrote:

  • Jamie Rothfeder at Dec 6, 2013 at 4:24 pm
    Hey All,

    Did anything ever come of this? We're seeing the same error on Cascalog
    2.6.0 and have just started digging in to figure out what is happening.

    Thanks,
    Jamie
    On Friday, April 13, 2012 7:30:02 AM UTC-4, Paul Lam wrote:


Discussion Overview
group: cascalog-user
categories: clojure, hadoop
posted: Apr 10, '12 at 4:13p
active: Dec 6, '13 at 4:24p
posts: 8
users: 4
website: clojure.org
irc: #clojure
