Like the pig.udf.profile one, maybe..
D
On Thu, Apr 4, 2013 at 6:25 AM, Lauren Blau wrote:
I'm running a simple script to add a sequence_number to a relation, sort
the result, and store it to a file:
a0 = load '<filename>' using PigStorage('\t','-schema');
a1 = rank a0;
a2 = foreach a1 generate col1 .. col16 , rank_a0 as sequence_number;
a3 = order a2 by sequence_number;
store a3 into 'outputfile' using PigStorage('\t','-schema');
I get the following error:
org.apache.hadoop.mapreduce.counters.LimitExceededException: Too many counters: 241 max=240
    at org.apache.hadoop.mapreduce.counters.Limits.checkCounters(Limits.java:61)
    at org.apache.hadoop.mapreduce.counters.Limits.incrCounters(Limits.java:68)
    at org.apache.hadoop.mapreduce.counters.AbstractCounterGroup.readFields(AbstractCounterGroup.java:174)
    at org.apache.hadoop.mapred.Counters$Group.readFields(Counters.java:278)
    at org.apache.hadoop.mapreduce.counters.AbstractCounters.readFields(AbstractCounters.java:303)
    at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:280)
    at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:75)
    at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:951)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:835)
We aren't able to raise our counter limit any higher (policy), and I don't
understand why such a simple script should need so many counters in the
first place.
Running Apache Pig version 0.11.1-SNAPSHOT (r: unknown),
compiled Mar 22 2013, 10:19:19.
Can someone help?
Thanks,
Lauren
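For anyone who lands here with the same trace: the cap being hit is Hadoop's per-job counter limit, not anything Pig-specific. A minimal sketch of where that limit is configured, assuming a Hadoop 2.x cluster where mapreduce.job.counters.max is the governing property (the value 500 below is purely illustrative), and assuming cluster policy allowed raising it — which, per the message above, it does not here:

```xml
<!-- mapred-site.xml (cluster-side change; hypothetical value, needs admin/policy approval) -->
<property>
  <name>mapreduce.job.counters.max</name>
  <value>500</value>
</property>
```

Note the limit is enforced on the server side as well as the client, so overriding it only in the job's own configuration may not be sufficient.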