What happens when you do a ctrl-c on a big dfs -rmr
I pressed ctrl-c immediately after issuing a hadoop dfs -rmr command. The rmr
target is no longer visible from the dfs -ls command. The number of files to
delete is huge, and I don't think they could all have been deleted between the
time the command was issued and the ctrl-c. Does this mean unreachable files
are left behind on the slave nodes, making them dead weight? We can always
reformat HDFS to be sure, but is there a way to check? Thanks.
--
View this message in context: http://www.nabble.com/What-happens-when-you-do-a-ctrl-c-on-a-big-dfs--rmr-tp22468909p22468909.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.

  • Lohit at Mar 12, 2009 at 3:57 am
    When you issue -rmr on a directory, the NameNode gets the directory name and starts deleting its files recursively. It adds the blocks belonging to those files to an invalidate list and then deletes those blocks lazily. So yes, it will issue commands to the DataNodes to delete those blocks; just give it some time. You do not need to reformat HDFS.
    Lohit
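    For what it's worth, two stock HDFS commands can confirm the cleanup without a reformat (illustrative invocations, assuming the hadoop client is on your PATH):

        # Walk the namespace and report any missing or corrupt blocks.
        hadoop fsck /

        # Per-DataNode capacity report; the "DFS Used" numbers should
        # shrink over the following minutes as the DataNodes act on the
        # block-invalidation commands.
        hadoop dfsadmin -report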
  • 何 永强 at Mar 12, 2009 at 5:54 am
    Either all of the files are deleted or none of them are, depending on how
    fast you press ctrl-c. The delete command is not executed in your terminal;
    the rmr request is sent to the Hadoop NameNode and executed there.
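    As a minimal illustration (the path below is hypothetical): the recursive delete travels to the NameNode as a single request, so once that request has been accepted, interrupting the local client changes nothing on the cluster:

        hadoop dfs -rmr /user/bzheng/big-dir   # one delete request to the NameNode
        ^C                                     # interrupts only the local client process
        hadoop dfs -ls /user/bzheng/big-dir    # fails: path is already gone from the namespace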


Discussion Overview
group: common-user
categories: hadoop
posted: Mar 12, '09 at 2:49a
active: Mar 12, '09 at 5:54a
posts: 3
users: 3
website: hadoop.apache.org...
irc: #hadoop

3 users in discussion

何 永强: 1 post · bzheng: 1 post · Lohit: 1 post
