While this is not an SCM-specific issue, I thought I'd ask here first since
I installed with SCM. I can move the question to user@hadoop.apache.org if
it's appropriate.
--- info ---
- 10-node cluster, used for testing
- version = 2.0.0-cdh4.2.0
- namenode is also a datanode (that machine is zip4)
I had IP address issues with the nodes. I removed a problem node from the
cluster:
  - planned to do some cleanup on that node
  - then add it back
  - then rebalance the cluster
But I deleted the dfs.data.dir on the namenode/datanode.
--> it's been a long day.
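For reference, the removal/re-add/rebalance plan above would normally look
something like this (a sketch only; the hostname and the excludes-file path
are placeholders, and this assumes dfs.hosts.exclude is configured to point
at that file):

```shell
# Decommission the problem node (hostname and path are hypothetical):
echo "badnode.example.com" >> /etc/hadoop/conf/dfs.exclude
hdfs dfsadmin -refreshNodes      # namenode starts draining blocks off the node

# ...after cleanup on the node, remove it from the excludes and re-add:
sed -i '/badnode.example.com/d' /etc/hadoop/conf/dfs.exclude
hdfs dfsadmin -refreshNodes

# Finally, spread blocks back across the cluster:
hdfs balancer -threshold 10      # balance to within 10% utilization
```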

CM shows the zip4 datanode as fine!? But the namenode on zip4 is stopped.
I tried:
   hadoop namenode -recover
 - this generated:
      hdfs.StateChange: STATE* Safe mode is ON.
   So I tried "hdfs dfsadmin -safemode leave" to turn safe mode off, as
   suggested; that failed with:
      ... to zip4:8020 failed on connection exception
 - the recover attempt failed with:
      WARN common.Storage: Storage directory /tmp/hadoop-linux/dfs/name does
      not exist
 - which caused:
      InconsistentFSStateException: Directory /tmp/hadoop-linux/dfs/name is
      in an inconsistent state: storage directory does not exist or....
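The /tmp path in that error suggests the namenode metadata directory was
never set explicitly and fell back to the default under hadoop.tmp.dir,
which is not persistent. One way to confirm what the namenode is actually
configured to use (a sketch; `hdfs getconf` is available in Hadoop 2.x /
CDH4):

```shell
# Ask the client config where the namenode stores fsimage/edits:
hdfs getconf -confKey dfs.namenode.name.dir   # current key name
hdfs getconf -confKey dfs.name.dir            # deprecated alias

# If it resolves to a path under /tmp, the metadata can vanish on
# reboot or tmp cleanup; check whether anything is still there:
ls -l /tmp/hadoop-linux/dfs/name 2>/dev/null || echo "name dir is gone"
```

If the directory is truly gone and there is no secondary-namenode checkpoint
to restore from, the filesystem metadata cannot be recovered by -recover.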

This is a test cluster, but it has a lot of good test data. I would prefer
not to lose the data, but if I do, it's not the end of the world.
  Any suggestions?

Discussion Overview
group: scm-users
posted: Jul 19, '13 at 11:57p
active: Jul 25, '13 at 12:43a
2 users in discussion: John Meza (4 posts), Harsh J (3 posts)


