On Wed, Mar 21, 2012 at 12:17 PM, Sam Ritchie wrote:
Me neither, though it's certainly irritating to see what looks like a leak
from Hadoop (or one of our abstraction layers, who knows). Might be worth
profiling.
--
Sam Ritchie
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
On Wednesday, March 21, 2012 at 11:28 AM, Andrew Xue wrote:
I see this error a lot on my local machine as well, usually
involving agg functions. Bumping up -Xmx doesn't seem to help either
(I've gone up to 2048). This is only a problem in local mode, though, and
mostly just gets in the way of tests (which doesn't make much sense
either, since the test data is usually less than 10 tuples), but I've
never seen this problem on an actual production cluster.
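For a sense of scale, the kind of test where this blows up is roughly the following (made-up names, not the real code):

(use 'cascalog.api)

;; toy aggregation: sum the values, grouped by name
(defbufferop sum-vals [tuples]
  [(reduce + (map first tuples))])

;; well under 10 tuples
(def tiny-data [["a" 1] ["a" 2] ["b" 3]])

(?<- (stdout) [?name ?total]
     (tiny-data ?name ?v)
     (sum-vals ?v :> ?total))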
On Mar 20, 9:10 am, Sam Ritchie wrote:
Yeah, Hadoop requires quite a bit more heap space than most JVM programs. Not really
sure why this is. There may be a way to provide a Leiningen default for ALL
projects, but you'll have to go to the lein list for that :)
--
Sam Ritchie
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
On Tuesday, March 20, 2012 at 9:00 AM, Federico Brubacher wrote:
Thanks Sam and Nathan,
The :jvm-opts worked for that specific project; the weird thing is that other
projects weren't having this problem.
Thanks
Federico
On Tue, Mar 20, 2012 at 12:29 PM, Sam Ritchie wrote:
Hey Federico,
Can you paste a code example where you're getting old results? One thing I
do to combat the heap space issue is add the following to my project.clj:
:jvm-opts ["-Xmx768m" "-server"]
This bumps up the heap space and triggers the JVM to run in server mode
(necessary to allow Clojure to utilize multiple cores).
Let me know if this fixes those warnings! (Bump it up higher if you've got
more memory, maybe "-Xmx1024m".)
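For reference, here's roughly where that line sits in project.clj (the project name and dependency versions below are just placeholders):

(defproject my-cascalog-project "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.3.0"]
                 [cascalog "1.8.6"]]
  ;; bump the heap and run the JVM in server mode
  :jvm-opts ["-Xmx768m" "-server"])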
Cheers,
Sam
--
Sam Ritchie
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
On Friday, March 16, 2012 at 6:52 AM, Federico Brubacher wrote:
Hi all,
Something strange is happening in my environment.
When I run regular, simple queries without any custom operations, everything
goes well. But when I put a custom operation in the mix (defmapop,
defbufferop, whatever) I start seeing old stuff, maybe from old queries
I ran, and always the same stuff, over and over, until I get a "Java heap
space" error. Seems like Hadoop is holding on to some old cached things...
Does anyone know what could be happening? I've already tried deleting .m2
and the lib directory for my project. I'm using Cascading 1.8.6.
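For reference, the kind of thing I mean is roughly this (a made-up example, not my actual query):

(use 'cascalog.api)

;; trivial custom op, just to show the shape of what I'm running
(defmapop add-one [x]
  (inc x))

(def src [["a" 1] ["b" 2]])

;; the plain query is fine; once the custom op is in the mix,
;; the stale results and heap errors start showing up
(?<- (stdout) [?name ?y]
     (src ?name ?x)
     (add-one ?x :> ?y))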
Weird
Best
Federico
--
Federico Brubacher
@fbru02