Hi,

I have a 10 node cluster (IBM blade servers, 48GB RAM, 2x500GB Disk, 16 HT
cores).

I've uploaded 10 files to HDFS. Each file is 10GB. I used the streaming jar
with 'wc -l' as mapper and 'cat' as reducer.

I use 64MB block size and the default replication (3).
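
The job was launched roughly like this (the streaming jar path and the HDFS
paths are illustrative):

    hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-*streaming*.jar \
        -input /user/gyorgy/input \
        -output /user/gyorgy/wc-out \
        -mapper 'wc -l' \
        -reducer cat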

The wc on the 100 GB took about 220 seconds, which translates to roughly 3.5
Gbit/s of processing throughput. One disk can do sequential reads at about
1 Gbit/s, so I would expect something around 20 Gbit/s (minus some overhead),
but I'm getting only 3.5.

Is my expectation valid?

I checked the jobtracker and it seems all nodes are working, each reading
the right blocks. I have not played with the number of mappers and reducers
yet. It seems the number of mappers is the same as the number of blocks and
the number of reducers is 20 (there are 20 disks). This looks OK to me.

We also did an experiment with TestDFSIO, with similar results. The aggregate
read I/O speed is around 3.5 Gbit/s. It is just too far from my
expectation :(
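
The TestDFSIO runs were roughly the following (flag names vary slightly across
Hadoop versions; -fileSize is in MB, so the sizes mirror the 10 x 10 GB layout):

    hadoop jar $HADOOP_HOME/hadoop-*test*.jar TestDFSIO -write -nrFiles 10 -fileSize 10000
    hadoop jar $HADOOP_HOME/hadoop-*test*.jar TestDFSIO -read -nrFiles 10 -fileSize 10000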

Please help!

Thank you,
Gyorgy


  • Praveen Peddi at May 30, 2011 at 12:54 pm
    That's because you are assuming the processing time for mappers and reducers to be zero. Counting words is processor intensive, and it's likely that a lot of those 220 seconds are spent in processing, not just in reading the file.
  • Brian Bockelman at May 30, 2011 at 3:20 pm

    On May 30, 2011, at 7:27 AM, Gyuribácsi wrote:

    Is my expectation valid?
    Probably not, at least not out of the box. Things to tune:
    1) Number of threads per disk
    2) The HDFS block size (yours seems relatively small)
    3) Network bottlenecks (can be seen using Ganglia)
    4) Related to (3), the number of replicas.
    5) Selection of the Linux I/O scheduler. The default CFQ scheduler is inappropriate for batch workloads (see the sketch below).
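
    For (2) and (5), something along these lines is a reasonable starting point
    (device name and paths are illustrative; run as root where needed):

        # check and switch the I/O scheduler on each data disk
        cat /sys/block/sdb/queue/scheduler
        echo deadline > /sys/block/sdb/queue/scheduler

        # re-upload with a larger HDFS block size, e.g. 256 MB (the property is
        # dfs.block.size in older releases, dfs.blocksize in newer ones; value in bytes)
        hadoop fs -D dfs.block.size=268435456 -put bigfile.dat /data/bigfile.dat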

    Finally, if you don't have enough host-level monitoring to indicate the current bottleneck (CPU, memory, network, or I/O?), you likely won't ever be able to solve this riddle.

    Brian
  • Boris Aleksandrovsky at May 30, 2011 at 3:23 pm
    Ljddfjfjfififfifjftjiiiiiifjfjjjffkxbznzsjxodiewisshsudddudsjidhddueiweefiuftttoitfiirriifoiffkllddiririiriioerorooiieirrioeekroooeoooirjjfdijdkkduddjudiiehs
  • James Seigel at May 30, 2011 at 5:01 pm
    Not sure that will help ;)

    Sent from my mobile. Please excuse the typos.
    On 2011-05-30, at 9:23 AM, Boris Aleksandrovsky wrote:

  • Harsh J at May 30, 2011 at 5:32 pm
    Psst. The cats speak in their own language ;-)
    On Mon, May 30, 2011 at 10:31 PM, James Seigel wrote:
    Not sure that will help ;)



    --
    Harsh J
  • He Chen at May 30, 2011 at 6:40 pm
    Hi Gyuribácsi

    I would suggest you divide the MapReduce program execution time into three parts:

    a) Map stage
    In this stage, the framework splits the input data and generates map tasks. Each map
    task processes one block by default (you can change this in FileInputFormat).
    As Brian said, with a larger block size you get fewer map tasks, and
    therefore probably less overhead.

    b) Reduce stage
    2) Shuffle phase
    In this phase, each reduce task collects intermediate results from every node
    that has executed map tasks. A reduce task can use many concurrent threads
    to fetch data (you can configure this in mapred-site.xml via
    "mapreduce.reduce.shuffle.parallelcopies"). But be careful about your key
    distribution. For example, if your data is "Hadoop, Hadoop, Hadoop, hello", the
    default Hadoop partitioner will assign all three <Hadoop, 1> key-value pairs to
    one node. Thus, if two nodes run reduce tasks, one of them will copy three
    times more data than the other, which makes that node slower than the
    other. You may need to rewrite the partitioner (see the sketch below).
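
    For instance, with streaming you can raise the copy threads and plug in a
    different partitioner class roughly like this (the class shown is just the
    stock key-field partitioner, not a custom skew-aware one, and older releases
    spell the property mapred.reduce.parallel.copies):

        hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-*streaming*.jar \
            -D mapreduce.reduce.shuffle.parallelcopies=10 \
            -D mapred.reduce.tasks=20 \
            -partitioner org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner \
            -input /user/gyorgy/input -output /user/gyorgy/wc-out-tuned \
            -mapper 'wc -l' -reducer cat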

    3) sort and reduce phase
    I think the Hadoop UI will give you some hints about how long this phase
    takes.

    By dividing the MapReduce application into these three parts, you can easily find
    which one is your bottleneck and do some profiling. And I don't know why my
    font changed to this type :(

    Hope it will be helpful.
    Chen
  • Jagaran das at May 30, 2011 at 7:56 pm
    Your font block size got increased dynamically, check in core-site :) :)

    - Jagaran



  • Jason Rutherglen at May 30, 2011 at 6:23 pm
    That's a small town in Iceland.
  • Lance Norskog at May 30, 2011 at 10:01 pm
    I'm sorry, but she's with me now.

    --
    Lance Norskog
    [email protected]
  • Hadoopman at Jun 2, 2011 at 12:57 am
    Some things that helped us include setting vm.swappiness to 0 and
    mounting your data disks with the noatime,nodiratime options.

    Also make sure your disks aren't set up with RAID (JBOD is recommended).

    You might want to run TeraSort as you tweak your environment. It's very
    helpful for checking whether a change helped (or hurt) your cluster.
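
    Roughly like this (device paths and sizes are illustrative; a TeraGen row is
    100 bytes, so 1,000,000,000 rows is about 100 GB):

        # reduce swapping and skip atime updates on the data disks
        # (persist in /etc/sysctl.conf and /etc/fstab respectively)
        sysctl -w vm.swappiness=0
        mount -o remount,noatime,nodiratime /data1

        # generate ~100 GB and sort it; compare runtimes before and after each tweak
        hadoop jar $HADOOP_HOME/hadoop-*examples*.jar teragen 1000000000 /benchmarks/tera-in
        hadoop jar $HADOOP_HOME/hadoop-*examples*.jar terasort /benchmarks/tera-in /benchmarks/tera-out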

    Hope that helps a bit.
  • Ted Dunning at Jun 2, 2011 at 4:06 am
    It is also worth using dd to verify your raw disk speeds.
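
    For example (device and file names are illustrative; direct I/O keeps the
    Linux page cache out of the measurement):

        # raw sequential read from one data disk, about 4 GB
        dd if=/dev/sdb of=/dev/null bs=1M count=4096 iflag=direct
        # and a sequential write to a scratch file on the same disk
        dd if=/dev/zero of=/data1/dd.test bs=1M count=4096 oflag=direct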

    Also, expressing disk transfer rates in bytes per second makes it a bit
    easier for most of the disk people I know to figure out what is large or
    small.

    Each of these disks should do about 100 MB/s when driven well. Hadoop
    does OK, but not nearly at full capacity, so I would expect more like
    40-50 MB/s per disk. Also, if one of your disks is doing double duty you may
    have some extra cost. 40 MB/s x 2 disks x 10 nodes = 800 MB per second. You are
    doing 100 GB / 220 seconds, which is about 0.45 GB/s, and that isn't so terribly
    bad. It is definitely less than the theoretically possible 100 x 2 x 10 =
    2 GB/second, and I would expect you could tune this up a little, but not a
    massive amount.

    In general, blade servers do not make good Hadoop nodes, exactly because the
    I/O performance tends to be low when you only have a few spindles.

    One other reason that this might be a bit below expectations is that your
    files may not be well distributed on your cluster. Can you say what you
    used to upload the files?
  • Raja Nagendra Kumar at Jul 17, 2011 at 2:07 am
    Hi,

    Is this the speed you are observing while the initial writes of the files are
    happening (i.e., while you are initially putting the 10 GB files into HDFS
    with replication)?

    Regards,
    Raja Nagendra Kumar


