What if one of the directory(dfs.name.dir) rw error ?
Hi all,

I set dfs.name.dir to a comma-delimited list of directories: dir1 is on /dev/sdb1, dir2 is on /dev/sdb2, and dir3 is an NFS directory.
What happens if /dev/sdb1 has a disk error, so dir1 can no longer be read or written?

What happens if the NFS server goes down, so dir3 can no longer be read or written?
Will Hadoop ignore the bad directory, keep using the good ones, and continue serving?

Thanks.
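For reference, a setup like the one described would look roughly like this in hdfs-site.xml (the mount points below are illustrative assumptions, not taken from the thread):

```xml
<!-- hdfs-site.xml: the NameNode writes its image and edit log to every
     directory listed here; the paths are hypothetical examples. -->
<property>
  <name>dfs.name.dir</name>
  <value>/mnt/sdb1/dir1,/mnt/sdb2/dir2,/mnt/nfs/namenode/dir3</value>
</property>
```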


  • Harsh J at May 25, 2011 at 7:20 am
    Yes. But depending on the version you're using, you may have to
    manually restart the NN after fixing the mount points to bring the
    directories back into use.



    --
    Harsh J
  • Ccxixicc at May 25, 2011 at 8:07 am
    I'm using 0.20.2.

    I ran some tests. I didn't know how to simulate a disk failure, so I just ran chmod 000 on dir1, and the NameNode shut down immediately. The NN also hangs if the NFS server goes down.
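The behavior reported here can be sketched with a small Python toy (not Hadoop code; it just models a NameNode that treats any storage-directory write failure as fatal, the way 0.20 did):

```python
import os
import tempfile

def write_edits(storage_dirs, record):
    """Toy model of the 0.20-era NameNode: append the edit record to
    every configured storage directory, aborting on the first failure."""
    for d in storage_dirs:
        try:
            with open(os.path.join(d, "edits"), "a") as f:
                f.write(record + "\n")
        except OSError as e:
            # 0.20 shut the whole NameNode down rather than dropping the bad dir
            raise SystemExit(f"FATAL: cannot write to {d}: {e}")

good_dir = tempfile.mkdtemp()
bad_dir = os.path.join(tempfile.mkdtemp(), "missing")  # never created, so unwritable

write_edits([good_dir], "mkdir /foo")            # all dirs healthy: succeeds
try:
    write_edits([good_dir, bad_dir], "mkdir /bar")
except SystemExit as e:
    print(e)                                     # one bad dir: fatal
```

Later releases added dfs.namenode.name.dir.restore-style tolerance, but on 0.20 the all-or-nothing behavior above matches what the chmod 000 test showed.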




    ------------------ Original ------------------
    From: "Harsh J"<harsh@cloudera.com>;
    Date: Wed, May 25, 2011 03:49 PM
    To: "hdfs-user"<hdfs-user@hadoop.apache.org>;
    Subject: Re: What if one of the directory(dfs.name.dir) rw error ?


  • Bharath Mundlapudi at May 25, 2011 at 8:09 pm
    I don't know how to simulate a disk failure...
    A couple of things you could do, besides chmod 000:
    1. umount -l (lazy unmount)
    2. Remount the filesystem read-only
    3. If the machine has hot-swappable disks, pull one out.

    -Bharath
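In shell terms, the options above look roughly like this (illustrative only: the first two need root and a dedicated test mount, and the device/mount-point names are assumptions):

```shell
# 1. Lazy-unmount the filesystem holding the storage directory:
umount -l /mnt/sdb1

# 2. Or remount it read-only, so every write fails with EROFS:
mount -o remount,ro /mnt/sdb1

# 3. Crude no-root variant mentioned earlier in the thread:
chmod 000 /mnt/sdb1/dir1
```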



  • Thanh Do at May 26, 2011 at 3:48 am
    You can simulate disk failures with fault-injection techniques;
    applying AspectJ is one of them.
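Hadoop's fault injection weaves AspectJ advice into the Java code; as a language-neutral sketch of the same idea, here is a Python analogue using unittest.mock to make the write path fail (the names here are hypothetical, not Hadoop APIs):

```python
from unittest import mock

def save_checkpoint(path):
    # stands in for the NameNode persisting its image to a storage dir
    with open(path, "w") as f:
        f.write("fsimage")
    return True

# Fault injection: patch the built-in `open` so every write raises, much
# as AspectJ advice can be woven around Hadoop's real storage writes.
with mock.patch("builtins.open", side_effect=IOError("injected disk failure")):
    try:
        save_checkpoint("/ignored/path")
        survived = True
    except IOError:
        survived = False

print("write survived the injected fault:", survived)  # prints False
```

Outside the patch the same function writes normally, so the fault is confined to the injection scope, which is the property that makes this style of testing repeatable.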
  • Konstantin Boudnik at May 26, 2011 at 4:42 am

    On Wed, May 25, 2011 at 10:48PM, Thanh Do wrote:
    > You can simulate disk failure by some fault injection techniques.
    > Applying AspectJ is one of them.
    Fault injection is already in the tree; check src/test/aop and
    src/test/system for references, etc.
  • Tom Hall at May 25, 2011 at 10:32 am
    In my experience, I had to edit the HDFS and MapReduce configuration
    on a server that had a disk missing.

    Tom


Discussion Overview
group: hdfs-user @ hadoop
posted: May 25, '11 at 6:39a
active: May 26, '11 at 4:42a
posts: 7
users: 6
website: hadoop.apache.org...
irc: #hadoop
