FAQ
Hi All

I run Hadoop 0.16.4.

My config is:
1 namenode
1 secondarynamenode
2 datanodes

When I start Hadoop everything goes fine; I can read and write data on the HDFS.

I have lots of small files.

Then I use a batch to populate the FS (which consists of lots of wget
--post-file calls, with about 4000 connections on each datanode).
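For what it's worth, the populate step might look roughly like this (hostnames, port, and upload path here are assumptions for illustration, not taken from my actual batch):

```shell
# Hypothetical sketch of the populate batch: each small file is POSTed
# with wget to a webapp that writes it into HDFS. Host/port/path are
# made up for illustration.
build_post_cmd() {
  # $1 = local file, $2 = target host
  echo "wget --post-file=$1 -q -O /dev/null http://$2:8080/upload"
}

for f in file1.test file2.test; do
  build_post_cmd "$f" TOMCAT_SERVER
done
```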

After 10 hours of copying (about 110 GB), I get lots of errors like this one on the namenode:
org.apache.hadoop.fs.permission.AccessControlException: Permission
denied: user=tomcat, access=WRITE, inode="":hadoop:supergroup:rwxr-xr-x
2008-08-07 00:50:42,926 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 9 on 54310, call create(/500.html, rwxr-xr-x,
DFSClient_594516226, true, 2, 67108864) from TOMCAT_SERVER:56028: error:
org.apache.hadoop.fs.permission.AccessControlException: Permission
denied: user=tomcat, access=WRITE, inode="":hadoop:
supergroup:rwxr-xr-x
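For context: with the HDFS permissions added in 0.16, create() checks the write bit on the parent directory. Here the inode is owned by hadoop:supergroup with mode rwxr-xr-x, so a client running as tomcat falls into the "other" class and has no write bit. A minimal sketch of that check (ignoring group membership for simplicity):

```shell
# Minimal sketch of the HDFS write check (ignoring group membership).
# $1 = requesting user, $2 = owner, $3 = mode string like rwxr-xr-x
can_write() {
  if [ "$1" = "$2" ]; then
    bits=$(echo "$3" | cut -c1-3)   # owner bits
  else
    bits=$(echo "$3" | cut -c7-9)   # "other" bits
  fi
  case $bits in ?w?) echo yes ;; *) echo no ;; esac
}

can_write tomcat hadoop rwxr-xr-x   # prints "no"  -> the observed error
can_write hadoop hadoop rwxr-xr-x   # prints "yes"
```

A common workaround would be to run the client as the hadoop user, or to give tomcat a directory it owns (paths here are illustrative): `bin/hadoop dfs -mkdir /user/tomcat` followed by `bin/hadoop dfs -chown tomcat /user/tomcat`.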

I don't think this is what is causing my problem, though.

After those first errors, I get lots of other errors:

org.apache.hadoop.fs.permission.AccessControlException: Permission
denied: user=tomcat, access=WRITE, inode="":hadoop:supergroup:rwxr-xr-x
2008-08-07 04:11:06,377 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 3 on 54310, call create(/path/to/the/file/file.test, rwxr-xr-x,
DFSClient_-407259278, true, 2, 67108864) from TOMCAT_SERVER:39324:
error: org.apache.hadoop.dfs.AlreadyBeingCreatedException: failed to
create file /path/to/the/file/file.test for DFSClient_-407259278 on
client TOMCAT_SERVER because current leaseholder is trying to recreate file.
org.apache.hadoop.dfs.AlreadyBeingCreatedException: failed to create
file /path/to/the/file/file.test for DFSClient_-407259278 on client
TOMCAT_SERVER because current leaseholder is trying to recreate file.

At this point, when I run
bin/hadoop dfsadmin -report

Total raw bytes: 0 (0 KB)
Remaining raw bytes: 0 (0 KB)
Used raw bytes: 0 (0 KB)
% used: �%

Total effective bytes: 131164171541 (122.16 GB)
Effective replication multiplier: 0.0
-------------------------------------------------
Datanodes available: 2

Name: 10.100.1.5:50010
State : In Service
Total raw bytes: 0 (0 KB)
Remaining raw bytes: 0 (0 KB)
Used raw bytes: 158403784704 (147.53 GB)
% used: �%
Last contact: Thu Aug 07 11:20:19 CEST 2008


Name: 10.100.1.4:50010
State : In Service
Total raw bytes: 0 (0 KB)
Remaining raw bytes: 0 (0 KB)
Used raw bytes: 158403784987 (147.56 GB)
% used: �%
Last contact: Thu Aug 07 11:19:44 CEST 2008



Then I run

bin/hadoop fsck /


/path/to/the/file/file.test: MISSING 1 blocks of total size 12852 B.
.
/bpath/to/the/file/file2.test: MISSING 1 blocks of total size 25095 B.
.
/path/to/the/file/file3.test: MISSING 1 blocks of total size 291 B.
.
/path/to/the/file/file4.test: MISSING 1 blocks of total size 51409 B.

.....
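As a quick way to gauge the damage, the fsck output can be summarized by counting the files reported with missing blocks; a small sketch (sample lines abbreviated from the output above):

```shell
# Count files that fsck reports as having missing blocks.
count_missing() {
  grep -c 'MISSING'
}

count=$(count_missing <<'EOF'
/path/to/the/file/file.test: MISSING 1 blocks of total size 12852 B.
/path/to/the/file/file3.test: MISSING 1 blocks of total size 291 B.
EOF
)
echo "missing: $count"   # prints "missing: 2"
```

fsck can also print per-file block detail with `bin/hadoop fsck / -files -blocks -locations`, which helps show whether the blocks ever made it to a datanode.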


Does anyone see where I can look to solve this problem?

Thanks

Discussion Overview
group: common-user @ hadoop
posted: Aug 7, '08 at 9:31a
active: Aug 7, '08 at 9:31a
posts: 1
users: 1
website: hadoop.apache.org...
irc: #hadoop

1 user in discussion: Alban (1 post)
