Can you also provide numbers for Reduce Shuffle Bytes? And Combine Input
and Output records? How many map slots do you have on the cluster? How many
spills on the Map and Reduce side?
If you are confident that shuffle time is the bottleneck, you could try
tuning a couple of parameters:
   1. mapred.inmem.merge.threshold - the number of map outputs to be merged
at once on the reduce side. Set it to 0 so the in-memory merge is triggered
by memory consumption (mapred.job.shuffle.merge.percent) instead.
   2. mapred.job.reduce.input.buffer.percent - the fraction of the reducer's
heap used to retain map outputs during the reduce phase. You mentioned the
reduce is not memory intensive, in which case you can try increasing this to
0.70 or 0.80.
   3. Make sure the combiners are actually doing work (aggregation). If they
are not, you could turn the combiner off.
You can also play with io.sort.mb and io.sort.factor, which really depend
on how much memory you have allocated to each task (mapred.child.java.opts).
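As a sketch, these properties can be overridden per-script from Pig itself; the values below are illustrative starting points to experiment with, not recommendations:

```pig
-- Illustrative property overrides at the top of a Pig script.
-- Values are placeholders to tune against your cluster, not recommendations.
set mapred.inmem.merge.threshold 0;
set mapred.job.reduce.input.buffer.percent 0.70;
set io.sort.mb 256;
set io.sort.factor 64;
set mapred.child.java.opts '-Xmx1024m';
```

The same overrides can also be passed on the command line with -D flags, as long as they appear before the script name.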
Tuning depends on a lot of factors; you might have to dig deeper into the
job counters and logs to see where the time is going.
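For the counter numbers asked about above (Reduce Shuffle Bytes, Combine Input/Output Records, spilled records), one way to pull them on a 1.x-era cluster is the job CLI; the job ID and history path below are placeholders:

```shell
# Print status and counters for a job known to the JobTracker
# (job ID is a placeholder).
hadoop job -status job_201203130001_0001

# Or dump the full job history, including counters, from the job's
# output directory after it finishes (path is a placeholder).
hadoop job -history all /user/austin/output
```

The per-phase counters are also visible on the JobTracker web UI under each job's counter table.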
On Tue, Mar 13, 2012 at 5:24 AM, Austin Chungath wrote:
I am running a pig query on around 500 GB input data.
The current block size is 128 MB and split size is the default 128 MB.
I have also specified 16 reducers and around 3800 mappers are running.
Now I observe that the shuffle phase is taking a long time to complete,
approximately 25 minutes per job.
Can anyone suggest how I can bring down the shuffling time? Is there any
property that I can tweak to improve performance?
Thanks & Regards,