FAQ
Hi guys,

I'm using an NFS cluster of 30 machines, of which I designated only 3 nodes
as my Hadoop cluster. My problem is that the datanode won't start on one of
the nodes because of the following error:

org.apache.hadoop.hdfs.server.common.Storage: Cannot lock storage
/cs/student/mark/tmp/hodhod/dfs/data. The directory is already locked

I think this is because of NFS locking behavior: once one node locks the
directory, the second node can't lock it. Any ideas on how to solve this error?
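In case it helps to show what I mean: the lock clash goes away if each datanode stores its blocks on its own local disk instead of the shared NFS mount. A minimal sketch of the relevant hdfs-site.xml entry, assuming the old-style dfs.data.dir property and a hypothetical node-local path /local/disk/hadoop (substitute any directory that is not on the NFS mount):

```xml
<!-- hdfs-site.xml (sketch): point datanode storage at a node-local
     directory so each datanode holds its own lock.
     /local/disk/hadoop is a hypothetical path, not from my setup. -->
<property>
  <name>dfs.data.dir</name>
  <value>/local/disk/hadoop/dfs/data</value>
</property>
```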

Thanks,
Mark

Discussion Overview
group: common-user @
categories: hadoop
posted: May 25, '11 at 1:44a
active: May 25, '11 at 1:44a
posts: 1
users: 1
website: hadoop.apache.org...
irc: #hadoop