FAQ
When you issue -rmr on a directory, the NameNode gets the directory name and starts deleting the files recursively. It adds the blocks belonging to those files to an invalidate list, and then deletes the blocks lazily. So yes, it will issue commands to the DataNodes to delete those blocks; just give it some time. You do not need to reformat HDFS.
Lohit
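
A rough way to watch this lazy deletion from the command line (the path below is only an example, not from the thread):

    hadoop dfs -rmr /user/foo/big-dir   # returns as soon as the NameNode removes the namespace entry
    hadoop dfs -ls /user/foo            # the directory is already gone from listings

The DataNodes physically delete the underlying blocks a little later, as the NameNode hands out the invalidation work on their heartbeats.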



----- Original Message ----
From: bzheng <bing.zheng@gmail.com>
To: core-user@hadoop.apache.org
Sent: Wednesday, March 11, 2009 7:48:41 PM
Subject: What happens when you do a ctrl-c on a big dfs -rmr


I did a ctrl-c immediately after issuing a hadoop dfs -rmr command. The rmr
target is no longer visible from the dfs -ls command. The number of files to
delete is huge, and I don't think it can possibly have deleted them all between
the time the command was issued and the ctrl-c. Does this mean it leaves behind
unreachable files on the slave nodes, making them dead weight? We can
always reformat HDFS to be sure, but is there a way to check? Thanks.
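
One way to check without reformatting (a sketch; these are standard HDFS tools, not commands given in the thread):

    hadoop fsck /             # verifies the namespace: no missing or corrupt blocks should be reported
    hadoop dfsadmin -report   # "DFS Used" should keep shrinking as DataNodes drop the invalidated blocks

If fsck stays healthy and used space keeps dropping, the deletion is proceeding normally and nothing needs a reformat.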
