Hi there, I want to ask if the HDFS API supports reading just a specific block of a file (assuming, of course, the file exceeds the default block size). For example, is it possible to read/fetch just the first or the third block of a specific file in HDFS? Does the API support that?


  • Aman at Dec 17, 2010 at 10:06 pm
    I have never used it myself but I know for sure that HDFS supports this.

    Petrucci Andreas wrote:

    Hi there, I want to ask if the HDFS API supports reading just a specific
    block of a file (assuming, of course, the file exceeds the default block
    size). For example, is it possible to read/fetch just the first or the
    third block of a specific file in HDFS? Does the API support that?
  • Harsh J at Dec 18, 2010 at 12:47 pm
    AFAIK, blocks are defined by their offset and length alone, correct?
    You can get those details for a given source via
    DFSClient.getBlockLocations() and then perhaps construct a manual
    FileSplit object with the offset and length details to read out a
    single block (I'm not sure if this would handle records properly).

    2010/12/16 Petrucci Andreas <petrucci_2005@hotmail.com>:
    Hi there, I want to ask if the HDFS API supports reading just a specific block of a file (assuming, of course, the file exceeds the default block size). For example, is it possible to read/fetch just the first or the third block of a specific file in HDFS? Does the API support that?


    --
    Harsh J
    www.harshj.com
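
    The approach Harsh describes (fetch the block offsets and lengths, then seek and read just that byte range) can be sketched with Hadoop's public FileSystem API. This is a minimal sketch, not from the thread: the class name, helper methods, and file path are hypothetical, and FileSystem.getFileBlockLocations() is used as the public counterpart of the internal DFSClient.getBlockLocations(). As Harsh notes, this reads raw bytes at block boundaries and makes no attempt to align with record boundaries.

    ```java
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReadSingleBlock {

        // Offset of the i-th block, given a fixed block size.
        static long blockOffset(long blockSize, int index) {
            return blockSize * index;
        }

        // Length of the i-th block: full blockSize except for the
        // (possibly shorter) last block of the file.
        static long blockLength(long fileLen, long blockSize, int index) {
            return Math.min(blockSize, fileLen - blockSize * index);
        }

        // Read exactly the bytes of one block of an HDFS file.
        public static byte[] readBlock(FileSystem fs, Path file, int blockIndex)
                throws IOException {
            FileStatus status = fs.getFileStatus(file);
            // One BlockLocation per block, each carrying its offset and length.
            BlockLocation[] blocks =
                    fs.getFileBlockLocations(status, 0, status.getLen());
            BlockLocation block = blocks[blockIndex];
            byte[] buf = new byte[(int) block.getLength()];
            try (FSDataInputStream in = fs.open(file)) {
                in.seek(block.getOffset()); // jump to the block boundary
                in.readFully(buf);          // read exactly one block's bytes
            }
            return buf;
        }

        public static void main(String[] args) throws IOException {
            FileSystem fs = FileSystem.get(new Configuration());
            // Hypothetical path; index 2 is the third block of the file.
            byte[] third = readBlock(fs, new Path("/data/big.log"), 2);
            System.out.println("Read " + third.length + " bytes");
        }
    }
    ```

    Seeking to block.getOffset() and reading block.getLength() bytes is equivalent to reading one block, because HDFS blocks are contiguous, non-overlapping byte ranges of the file.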

Discussion Overview
group: common-user
categories: hadoop
posted: Dec 16, '10 at 5:31p
active: Dec 18, '10 at 12:47p
posts: 3
users: 3
website: hadoop.apache.org...
irc: #hadoop

3 users in discussion

Aman: 1 post
Petrucci Andreas: 1 post
Harsh J: 1 post


site design / logo © 2022 Grokbase