Hi,
I am new to HDFS management and hope you can help me with this.
I have a 12-node cluster. Up until last week, HDFS space usage was
approaching 75%, so I removed tens of thousands of files, all quite
small ones. After the deletion, *hadoop fs -du /* shows only about 11TB
in use:
*32296810        hdfs://hadoop-name01/hadoop
3506             hdfs://hadoop-name01/hbase
66380418656      hdfs://hadoop-name01/tmp
10280770071939   hdfs://hadoop-name01/user*
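As a quick sanity check on the du output above, the four entries can be summed and converted (a small sketch; 1 TB is taken as 10^12 bytes decimal, TiB as 2^40 binary):

```python
# Byte counts reported by `hadoop fs -du /` above
du_bytes = {
    "/hadoop": 32_296_810,
    "/hbase": 3_506,
    "/tmp": 66_380_418_656,
    "/user": 10_280_770_071_939,
}

total = sum(du_bytes.values())
print(total)                     # 10347182790911 bytes in total
print(round(total / 10**12, 2))  # ~10.35 decimal TB, roughly the "11TB" cited
print(round(total / 2**40, 2))   # ~9.41 TiB in binary units
```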

But when I run *hadoop fs -df /*, it says 68% of the space is still in
use:
*Filesystem  Size            Used            Avail           Use%
/            70767414312960  48673246151277  18472473866240  68%*
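The Use% column can be reproduced from the raw byte counts (a quick sketch; note the df figures describe raw block storage on the datanodes, which is an assumption about how the gap arises, not something confirmed here):

```python
size  = 70_767_414_312_960  # Filesystem Size in bytes
used  = 48_673_246_151_277  # Used
avail = 18_472_473_866_240  # Avail

# Use% matches when floored rather than rounded (100 * used / size ~= 68.78)
print(used * 100 // size)        # 68, matching the Use% column

# Size - Used - Avail leaves ~3.6 TB unaccounted (non-DFS / reserved space)
print(size - used - avail)       # 3621694295443 bytes, ~3.6 TB
```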

I am puzzled: du says only 11TB is used, but df says 44TB. The dfsadmin
report below also shows 44TB used. Why does some *33TB* of space still
appear to be occupied after my deletion? Any light you can shed is
highly welcome!

*$ hadoop dfsadmin -report*
*Configured Capacity: 70767414312960 (64.36 TB)
Present Capacity: 67143178552151 (61.07 TB)
DFS Remaining: 18416113192960 (16.75 TB)
DFS Used: 48727065359191 (44.32 TB)
DFS Used%: 72.57%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0*
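The dfsadmin figures are internally consistent, as the sketch below checks (using 1 TB = 2^40 bytes, which matches the report's conversions). The last two lines illustrate one common source of du/df gaps: `hadoop fs -du` reports logical file sizes, while DFS Used counts every replica, and with the HDFS default replication factor of 3 (an assumption here, not confirmed from this cluster's config) ~11 TB of logical data occupies ~33 TB of raw storage:

```python
TB = 2**40  # the report converts bytes to TB with binary units

configured = 70_767_414_312_960
present    = 67_143_178_552_151
dfs_used   = 48_727_065_359_191

# Reproduce the report's TB conversions and DFS Used%
print(round(configured / TB, 2))           # 64.36
print(round(present / TB, 2))              # 61.07
print(round(dfs_used / TB, 2))             # 44.32
print(round(100 * dfs_used / present, 2))  # 72.57

# Logical size vs raw usage: with replication factor 3 (the HDFS default,
# assumed here), each logical byte is stored three times.
logical_tb = 11          # approximate `hadoop fs -du` total from above
print(logical_tb * 3)    # 33 raw TB for 11 logical TB
```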


Regards,
Peter Li

Discussion Overview
group: hdfs-user @ hadoop
posted: Oct 12, '10 at 10:29p
posts: 1
users: 1 (Tianqiang Li)
website: hadoop.apache.org...
irc: #hadoop
