FAQ
Hello!

I have the following queries related to Hadoop:

-> Once I place my data in HDFS, it gets chunked into blocks and replicated
automatically across the datanodes, right? Hadoop takes care of all of
that.
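The chunking and replication being described can be observed directly with the fsck tool. This is just a sketch, assuming shell access to a node of a running cluster; the path is a placeholder:

```shell
# List each file under the given path together with its blocks,
# replica count, and the datanodes holding each replica.
hadoop fsck /user/sugandha -files -blocks -locations
```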

-> Now, suppose there is a third party that is not participating in the
Hadoop cluster, i.e. its machine is not one of the cluster's nodes, but it
has some data on its local filesystem. Can I place this data into HDFS?
How?
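The operation being asked about can be sketched with the standard filesystem shell, assuming the non-cluster machine has a Hadoop client installation pointed at the cluster's namenode. The hostname `namenode-host`, the port, and the paths below are placeholders, not values from this thread:

```shell
# On the non-cluster machine: the -fs generic option directs the client
# at the cluster's namenode, and -put copies a local file into HDFS.
hadoop fs -fs hdfs://namenode-host:9000 -put /home/user/data.txt /user/sugandha/data.txt

# Confirm the file is now in HDFS.
hadoop fs -fs hdfs://namenode-host:9000 -ls /user/sugandha/
```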

-> Later, that third party asks for a file, a directory, or any other data
that was previously dumped into HDFS without its knowledge, i.e. it wants
to retrieve the data. The data should then be placed back on its local
filesystem, in some specific directory. How can I do this?
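The retrieval side can be sketched the same way, again assuming a configured Hadoop client on the third party's machine; `namenode-host` and the paths are placeholders:

```shell
# Copy a file (or an entire directory) out of HDFS into a local directory.
hadoop fs -fs hdfs://namenode-host:9000 -get /user/sugandha/data.txt /home/user/restore/
```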

-> Will I have to use MapReduce, or something else, to make this work?

-> Also, if I write MapReduce code for this complete activity, how will I
fetch the files that are stored in HDFS as chunked blocks,
combine (reassemble) them into complete files, and place them on the local
filesystem of a node that is not part of the Hadoop cluster setup?
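For what it's worth, the HDFS client already reassembles blocks transparently on read: a plain `-get` or `-cat` streams the blocks from the datanodes in order and writes out one complete file, with no MapReduce job involved. A sketch, with hostname and paths as placeholders:

```shell
# Reading through the HDFS client fetches each block in sequence and
# reassembles the complete file on the local side.
hadoop fs -fs hdfs://namenode-host:9000 -cat /user/sugandha/data.txt > /tmp/data.txt
```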

Eagerly awaiting a reply!

Thanking You,
Sugandha!



--
Regards!
Sugandha



Discussion Overview

group: common-dev @
categories: hadoop
posted: Jun 5, '09 at 7:31a
active: Jun 5, '09 at 7:33a
posts: 2
users: 1
website: hadoop.apache.org...
irc: #hadoop

1 user in discussion

Sugandha Naolekar: 2 posts
