On Oct 20, 2009 at 12:06 pm:
On Oct 19, 2009, at 11:13 PM, Huang Qian wrote:
How can I create 10 files on each datanode? I think I can only create the files, not choose where they go. Is there any method to assign a file to a particular datanode?
No, and you probably don't want to. At such a small scale, I can't
think of a benefit to having the cluster *exactly* balanced. Hadoop
is designed to scale large, and it's simply too inefficient to put
such a mechanism in place.
If you want *approximately* the same number of blocks on each node,
you can use the rebalancer.
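For reference, the rebalancer mentioned above is invoked from the command line; a minimal sketch for a 0.20-era cluster (the threshold value here is illustrative, and the command must run as a user with HDFS access):

```shell
# Run the HDFS balancer until no datanode's disk usage deviates from the
# cluster average by more than 5 percentage points. Note this balances
# bytes stored per node, not an exact block count per node.
hadoop balancer -threshold 5
```

The balancer moves blocks between datanodes in the background and can be stopped at any time without harming the file system.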
PS - if you really think this is mission-critical (and if you do, I'd
advise re-checking your assumptions), you may want to look at the
custom block placement plugins in the upcoming 0.21.0 release.
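As a sketch of what that plugin mechanism looks like: in 0.21.0 the namenode's placement logic is pluggable, and (assuming the configuration key below, which should be verified against the 0.21.0 release notes) a custom policy class would be wired in through hdfs-site.xml:

```xml
<!-- Hypothetical example: MyPlacementPolicy is a user-written class
     extending the 0.21 BlockPlacementPolicy abstract class; the
     property name is an assumption to verify against your release. -->
<property>
  <name>dfs.block.replicator.classname</name>
  <value>org.example.MyPlacementPolicy</value>
</property>
```

This only changes where new replicas are placed; it does not move blocks that already exist.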
2009/10/19 Jason Venner <firstname.lastname@example.org>
If you set your replication count to one and create the files from each
datanode itself, you will achieve the pattern you are trying for.
By default, when a file is created on a machine hosting a datanode, that
datanode will receive 1 replica of the file, and will be responsible for
sending the file data to the next replica, if any.
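Concretely, the approach above could look like the following, run locally on each of the 10 datanodes (file names and paths are illustrative; this assumes 0.20-era shell syntax):

```shell
# Upload a file with a single replica; because the command runs on a
# machine hosting a datanode, the local node keeps the only copy.
hadoop fs -D dfs.replication=1 -put part-0001 /data/part-0001

# Alternatively, lower the replication factor of an already-written file:
hadoop fs -setrep 1 /data/part-0001
```

With replication set to 1, of course, losing any one datanode means losing the blocks stored on it, so this pattern trades away HDFS's usual fault tolerance.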
On Thu, Oct 15, 2009 at 1:39 PM, Huang Qian <email@example.com> wrote:
Hi everyone. I am working on a project with Hadoop and have run into a
problem. How can I deploy 100 files, each consisting of a single block
(by setting the block size and controlling the file size), onto 10
datanodes so that each datanode holds exactly 10 blocks? I know the file
system places blocks automatically, but I want to make sure the files I
assign are deployed in even proportion. Can I do this with the Hadoop
tools or the API?
Institute of Remote Sensing and GIS, Peking University
Phone: (86-10) 5276-3109
Mobile: (86) 1590-126-8883
Pro Hadoop, a book to guide you from beginner to hadoop mastery:
http://www.amazon.com/dp/1430219424?tag=jewlerymall
www.prohadoopbook.com a community for Hadoop Professionals