On Feb 19, 2010 at 5:04 am:
We used to do this all the time at Attributor. Now, if I can remember how we did it:
If the libraries are constant you can just install them on your nodes, to save pushing them through the distributed cache, and then set up the library path to point at them.
The key issue, if you push them through the distributed cache, is ensuring that the directory the library gets dropped in is actually on the java.library.path.
You can also give explicit paths to System.load
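To illustrate the difference: System.loadLibrary searches the directories on java.library.path for the library by short name, while System.load takes an explicit absolute file path and skips that search. A minimal sketch (the library name here is hypothetical, not from the thread):

```java
public class LoadDemo {
    // The JVM's native library search path; System.loadLibrary consults it,
    // System.load bypasses it and takes an absolute path instead.
    static String searchPath() {
        return System.getProperty("java.library.path");
    }

    // Attempts a search-path load; returns false if the library isn't found.
    static boolean tryLoadLibrary(String name) {
        try {
            System.loadLibrary(name); // resolved against java.library.path
            return true;
        } catch (UnsatisfiedLinkError e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("search path = " + searchPath());
        // "mynative_xyz" is a placeholder name, so this prints false.
        System.out.println("loaded = " + tryLoadLibrary("mynative_xyz"));
        // For an explicit path you would instead call, e.g.:
        // System.load("/opt/native/libmynative.so");  // placeholder path
    }
}
```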
Passing -Djava.library.path in the child options, mapred.child.java.opts (if I have the param correct), should work also.
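In the 0.20-era configuration that would look something like the fragment below; the /opt/native path and heap size are placeholders, not values from the thread:

```xml
<!-- mapred-site.xml (or set per-job); /opt/native is a placeholder path -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m -Djava.library.path=/opt/native</value>
</property>
```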
On Thu, Feb 18, 2010 at 6:49 PM, Utkarsh Agarwal wrote:
My .so file has other .so dependencies, so would I have to add them all to the DistributedCache? Also, I tried setting LD_LIBRARY_PATH, and it
doesn't work. Setting java.library.path is not sufficient; LD_LIBRARY_PATH has to be set for the child processes as well.
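The reason java.library.path alone falls short here is that the dynamic linker, not the JVM, resolves a library's transitive .so dependencies, and it consults LD_LIBRARY_PATH. One way to get that variable into the task JVMs is the mapred.child.env property (present in 0.20-era Hadoop; the path below is a placeholder, and the exact value syntax is from memory):

```xml
<!-- mapred-site.xml; comma-separated NAME=value pairs for task children -->
<property>
  <name>mapred.child.env</name>
  <value>LD_LIBRARY_PATH=/opt/native:$LD_LIBRARY_PATH</value>
</property>
```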
On Thu, Feb 18, 2010 at 3:14 PM, Allen Wittenauer wrote:
On 2/16/10 5:29 PM, "Jason Rutherglen" wrote:
How would this work?
On Fri, Feb 12, 2010 at 10:45 AM, Allen Wittenauer wrote:
... or just use distributed cache.
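The distributed-cache route, sketched against the old 0.20 API (the HDFS path and library name are placeholders): the #fragment names a symlink created in each task's working directory, and java.library.path then has to include that directory ("." at task time) so the JVM can find the library.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;

public class CacheSetup {
    // Ships libfoo.so (placeholder) to every task via the distributed cache.
    public static void addNativeLib(Configuration conf) throws Exception {
        // "#libfoo.so" is the symlink name created in the task's working dir.
        DistributedCache.addCacheFile(
            new URI("hdfs:///libs/libfoo.so#libfoo.so"), conf);
        DistributedCache.createSymlink(conf);
        // Point the child JVMs at the task working directory.
        conf.set("mapred.child.java.opts", "-Djava.library.path=.");
    }
}
```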
On 2/12/10 10:02 AM, "Alex Kozlov" wrote:
All native libraries should be on each of the cluster nodes. You can
set the "java.library.path" property to point to your libraries (or just put
them in the default system dirs).
On Fri, Feb 12, 2010 at 9:12 AM, Utkarsh Agarwal wrote:
Can anybody point me to how to use JNI calls in a MapReduce program? My .so
files have other dependencies also; is there a way to set
LD_LIBRARY_PATH for the child processes? Should all the native stuff be on every node?
Pro Hadoop, a book to guide you from beginner to hadoop mastery, http://www.amazon.com/dp/1430219424?tag=jewlerymall
www.prohadoopbook.com a community for Hadoop Professionals