Hi,
I have a use case to upload gzipped text files, ranging from 10-30 GB each,
to HDFS.
We have decided on the SequenceFile format for storage on HDFS.
I have some doubts/questions regarding it:

i) What would be the optimal size for a sequence file, given that the input
text files range from 10-30 GB? Can a sequence file be the same size as the
input text file?

ii) Is there an existing tool that converts a gzipped text file to a
sequence file? If not, we were thinking of rolling our own along the lines
of the sketch below.
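
This is a minimal sketch of what we had in mind, assuming a plain
SequenceFile.Writer is the right tool for this; the class name and the
choice of a line-number key with <LongWritable, Text> records are our own,
not an existing utility:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.zip.GZIPInputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class GzipToSequenceFile {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path input = new Path(args[0]);   // gzipped text file, already on HDFS
    Path output = new Path(args[1]);  // target sequence file

    // Block-compressed SequenceFile of <line number, line> records.
    SequenceFile.Writer writer = SequenceFile.createWriter(
        fs, conf, output, LongWritable.class, Text.class,
        SequenceFile.CompressionType.BLOCK);

    // Decompress the gzip stream on the fly and write record by record.
    BufferedReader reader = new BufferedReader(new InputStreamReader(
        new GZIPInputStream(fs.open(input)), "UTF-8"));

    LongWritable key = new LongWritable();
    Text value = new Text();
    long lineNo = 0;
    String line;
    while ((line = reader.readLine()) != null) {
      key.set(lineNo++);
      value.set(line);
      writer.append(key, value);
    }
    reader.close();
    writer.close();
  }
}

Our hope is that block compression on the output roughly makes up for
losing the gzip compression of the input, but we have not measured this.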

iii) What would be a good way to manage metadata for these files? We
currently have about 30-40 different schemas for the text files. We thought
of 2 options:
- uploading the metadata as a text file on HDFS alongside the data, so
users can view it with hadoop fs -cat <file>.
- adding the metadata to the sequence file header. In this case, we could
not find a documented way to fetch the metadata back out of a sequence
file, and we need to give our downstream users a way to see the metadata
of the data they are reading (see the sketch after this list).
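
For reference, below is roughly the round trip we imagine, assuming that
SequenceFile.Metadata passed to SequenceFile.createWriter() and read back
with SequenceFile.Reader#getMetadata() is the intended mechanism; the
schema.* keys and their values are made-up placeholders:

import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.SequenceFile.Metadata;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.DefaultCodec;

public class SeqFileMetadata {
  // Write a sequence file with schema info stored in its header.
  public static void writeWithMetadata(FileSystem fs, Configuration conf,
      Path path) throws Exception {
    Metadata metadata = new Metadata();
    // schema.* keys/values are placeholders for whatever we settle on.
    metadata.set(new Text("schema.name"), new Text("clickstream_v2"));
    metadata.set(new Text("schema.fields"), new Text("ts,user_id,url"));

    SequenceFile.Writer writer = SequenceFile.createWriter(fs, conf, path,
        LongWritable.class, Text.class, SequenceFile.CompressionType.BLOCK,
        new DefaultCodec(), null, metadata);
    writer.append(new LongWritable(0), new Text("sample record"));
    writer.close();
  }

  // Read the header metadata back without scanning any data records.
  public static void printMetadata(FileSystem fs, Configuration conf,
      Path path) throws Exception {
    SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
    for (Map.Entry<Text, Text> e :
        reader.getMetadata().getMetadata().entrySet()) {
      System.out.println(e.getKey() + " = " + e.getValue());
    }
    reader.close();
  }
}

If this is the right usage, a thin command-line wrapper around
printMetadata() would give downstream users a way to inspect the header
without writing code. Is this the recommended approach, or is there a
better one?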

Thanks a lot!
-JJ
