FAQ
Hi,
I am experimenting with the Hadoop configuration. I am trying to replace HDFS
with a shared NFS mount; I only have map tasks, so intermediate outputs do not
need to be communicated between nodes. Is there a way to keep the temp
directory local to the nodes while placing the job conf object and the jar on
an NFS mount, so that all the nodes can access them?
Saurabh Agarwal
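For context, the 0.20-era configuration this question implies would look roughly like the sketch below (untested; the /mnt/nfs and /local paths are hypothetical examples, and a file:/// URI in fs.default.name is how a shared POSIX mount is exposed as the default FileSystem):

```xml
<!-- core-site.xml: point the default FileSystem at the shared NFS mount
     (sketch only; /mnt/nfs/hadoop is a hypothetical mount point) -->
<property>
  <name>fs.default.name</name>
  <value>file:///mnt/nfs/hadoop</value>
</property>

<!-- mapred-site.xml: keep per-node scratch space on local disk, while the
     job's jar and conf are staged under the shared mount via the
     system directory -->
<property>
  <name>mapred.local.dir</name>
  <value>/local/hadoop/mapred</value>
</property>
<property>
  <name>mapred.system.dir</name>
  <value>/mnt/nfs/hadoop/mapred/system</value>
</property>
```

With this split, mapred.local.dir holds intermediate and temporary task data on each node's local disk, while mapred.system.dir (which lives on the default FileSystem, here the NFS mount) is where the framework places the job jar and serialized job conf for all nodes to read.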


  • Vinod KV at Jun 11, 2010 at 3:58 am

    On Wednesday 26 May 2010 04:38 PM, Saurabh Agarwal wrote the message
    quoted above.

    In principle you can do this, since MapReduce uses the FileSystem APIs
    everywhere, but you may run into some quirks.

    OTOH, running MapReduce on NFS is a very bad idea and is highly
    discouraged: as soon as the number of nodes, and thus tasks, scales up,
    NFS becomes a bottleneck, and tasks/jobs start failing with
    hard-to-debug errors.

    +Vinod


Discussion Overview
group: common-dev @ hadoop.apache.org
categories: hadoop
posted: May 26, '10 at 11:08a
active: Jun 11, '10 at 3:58a
posts: 2
users: 2
website: hadoop.apache.org...
irc: #hadoop

2 users in discussion

Vinod KV: 1 post
Saurabh Agarwal: 1 post
