Vinod KV |
at Jun 11, 2010 at 3:58 am
On Wednesday 26 May 2010 04:38 PM, Saurabh Agarwal wrote:
Hi,
I am toying around with the Hadoop configuration.
I am trying to replace HDFS with a common NFS mount, and I only have map
tasks, so intermediate outputs need not be communicated.
So what I want to know is: is there a way to make the temp directory local
to the nodes and place the job conf object and jar in an NFS mount so all
the nodes can access them?
Saurabh Agarwal
In principle you can do it, because MapReduce uses the FileSystem APIs
everywhere, but you may run into some quirks.
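For what it's worth, here is a minimal sketch of that kind of setup,
assuming Hadoop 0.20-era property names; the NFS mount (/mnt/nfs/hadoop)
and node-local scratch directory (/local/hadoop) are hypothetical paths:

    <!-- core-site.xml: default filesystem on the shared NFS mount,
         temp directory on node-local disk -->
    <property>
      <name>fs.default.name</name>
      <value>file:///mnt/nfs/hadoop</value>
    </property>
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/local/hadoop/tmp</value>
    </property>

    <!-- mapred-site.xml: intermediate map output stays on node-local disk -->
    <property>
      <name>mapred.local.dir</name>
      <value>/local/hadoop/mapred</value>
    </property>

    <!-- shared directory where the JobTracker places each job's conf
         (job.xml) and jar, so every node can read them off the NFS mount -->
    <property>
      <name>mapred.system.dir</name>
      <value>/mnt/nfs/hadoop/mapred/system</value>
    </property>

The idea is that only the per-job files every node must see live on NFS,
while the heavy per-task scratch I/O stays on local disks.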
That said, it is a very bad idea and highly discouraged to run MapReduce
on NFS: as soon as the number of nodes, and thus tasks, scales up, NFS
will become a bottleneck and tasks/jobs will start failing in ways that
are hard to debug.
+Vinod