I'm interested in hearing how you get data into and out of HDFS. Are you
using tools like Flume? Are you using fuse_dfs? Are you putting files on
HDFS with "hadoop dfs -put ..."?
And how does your method scale? Can you move terabytes of data per day, or
are we talking gigabytes?