I have built a distributed index using the source code of
hadoop/contrib/index, but I found that when the input files become large
(for example, a single 16 GB file), an OOM exception is thrown. The cause
is that in the combiner, writer.addIndexesNoOptimize() uses a lot of
memory, which leads to the OOM; it is a Lucene OOM rather than a MapReduce
OOM. I would like to create a new method, similar to MapReduce's "spill",
to solve this problem. How can I do that? Sorry for my poor English.
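
The spill idea being asked about can be sketched in plain Java. This is a hypothetical illustration only (the class name `SpillBuffer` and its methods are invented for this sketch; it is not Hadoop's actual spill code nor the contrib/index combiner): keep a bounded in-memory buffer, and whenever it fills, flush ("spill") its contents to a temporary file on disk so memory stays constant regardless of total input size.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Hypothetical spill-style buffer: items accumulate in memory and are
// written out to a temp file whenever the in-memory count reaches a
// threshold, bounding memory use regardless of total input size.
public class SpillBuffer {
    private final int threshold;                       // max items held in memory
    private final List<String> memory = new ArrayList<>();
    private final List<Path> spillFiles = new ArrayList<>();

    public SpillBuffer(int threshold) {
        this.threshold = threshold;
    }

    public void add(String item) throws IOException {
        memory.add(item);
        if (memory.size() >= threshold) {
            spill();
        }
    }

    // Flush the in-memory buffer to a temp file and clear it.
    private void spill() throws IOException {
        Path f = Files.createTempFile("spill", ".txt");
        Files.write(f, memory);
        spillFiles.add(f);
        memory.clear();
    }

    // Merge all spill files plus any remaining in-memory items, in order.
    public List<String> merge() throws IOException {
        List<String> out = new ArrayList<>();
        for (Path f : spillFiles) {
            out.addAll(Files.readAllLines(f));
        }
        out.addAll(memory);
        return out;
    }

    public int spillCount() {
        return spillFiles.size();
    }
}
```

For the Lucene side specifically, IndexWriter already has a knob in this spirit: setRAMBufferSizeMB() flushes buffered documents to disk segments once the given RAM budget is reached, which may help bound the combiner's memory before the addIndexesNoOptimize() merge.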


Thanks

--
View this message in context: http://lucene.472066.n3.nabble.com/mapreduce-combiner-tp3612513p3612513.html
Sent from the Hadoop lucene-dev mailing list archive at Nabble.com.

Discussion Overview
group: common-dev
categories: hadoop
posted: Dec 26, '11 at 7:53p
active: Dec 26, '11 at 7:53p
posts: 1
users: 1
website: hadoop.apache.org...
irc: #hadoop

1 user in discussion

27g: 1 post
