FAQ
Hello Everyone,

At times I get the following error when I restart my cluster desktops, even
though I shut down mapred and dfs properly beforehand.
The temp folder still contains the directory it is looking for, yet I get this
error.
The only solution I have found to get rid of this error is to format my DFS
entirely, load the data again, and start the whole process over.

But that way I lose my data on HDFS and have to reload it.

Does anyone have any clue about it?

Error from the log file:

2009-04-14 19:40:29,963 INFO org.apache.hadoop.dfs.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = Semantic002/192.168.1.133
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.18.3
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.18 -r 736250; compiled by 'ndaley' on Thu Jan 22 23:12:08 UTC 2009
************************************************************/
2009-04-14 19:40:30,958 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=9000
2009-04-14 19:40:30,996 INFO org.apache.hadoop.dfs.NameNode: Namenode up at: Semantic002/192.168.1.133:9000
2009-04-14 19:40:31,007 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
2009-04-14 19:40:31,014 INFO org.apache.hadoop.dfs.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2009-04-14 19:40:31,160 INFO org.apache.hadoop.fs.FSNamesystem: fsOwner=hadoop,hadoop,adm,dialout,fax,cdrom,floppy,tape,audio,dip,plugdev,scanner,fuse,admin
2009-04-14 19:40:31,161 INFO org.apache.hadoop.fs.FSNamesystem: supergroup=supergroup
2009-04-14 19:40:31,161 INFO org.apache.hadoop.fs.FSNamesystem: isPermissionEnabled=true
2009-04-14 19:40:31,183 INFO org.apache.hadoop.dfs.FSNamesystemMetrics: Initializing FSNamesystemMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2009-04-14 19:40:31,184 INFO org.apache.hadoop.fs.FSNamesystem: Registered FSNamesystemStatusMBean
2009-04-14 19:40:31,248 INFO org.apache.hadoop.dfs.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name does not exist.
2009-04-14 19:40:31,251 ERROR org.apache.hadoop.fs.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.dfs.InconsistentFSStateException: Directory /tmp/hadoop-hadoop/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
        at org.apache.hadoop.dfs.FSImage.recoverTransitionRead(FSImage.java:211)
        at org.apache.hadoop.dfs.FSDirectory.loadFSImage(FSDirectory.java:80)
        at org.apache.hadoop.dfs.FSNamesystem.initialize(FSNamesystem.java:294)
        at org.apache.hadoop.dfs.FSNamesystem.<init>(FSNamesystem.java:273)
        at org.apache.hadoop.dfs.NameNode.initialize(NameNode.java:148)
        at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:193)
        at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:179)
        at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:830)
        at org.apache.hadoop.dfs.NameNode.main(NameNode.java:839)
2009-04-14 19:40:31,261 INFO org.apache.hadoop.ipc.Server: Stopping server on 9000
2009-04-14 19:40:31,262 ERROR org.apache.hadoop.dfs.NameNode: org.apache.hadoop.dfs.InconsistentFSStateException: Directory /tmp/hadoop-hadoop/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
        at org.apache.hadoop.dfs.FSImage.recoverTransitionRead(FSImage.java:211)
        at org.apache.hadoop.dfs.FSDirectory.loadFSImage(FSDirectory.java:80)
        at org.apache.hadoop.dfs.FSNamesystem.initialize(FSNamesystem.java:294)
        at org.apache.hadoop.dfs.FSNamesystem.<init>(FSNamesystem.java:273)
        at org.apache.hadoop.dfs.NameNode.initialize(NameNode.java:148)
        at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:193)
        at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:179)
        at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:830)
        at org.apache.hadoop.dfs.NameNode.main(NameNode.java:839)

2009-04-14 19:40:31,267 INFO org.apache.hadoop.dfs.NameNode: SHUTDOWN_MSG:
/************************************************************
:

Thanks

Pankil

  • Alex Loddengaard at Apr 15, 2009 at 9:09 pm
    Data stored in /tmp has no consistency or reliability guarantees; the OS
    can delete it at any time, and many systems clear /tmp on reboot, which is
    why a restart wipes out the NameNode's storage directory.

    Configure hadoop-site.xml to store data elsewhere. Grep for "/tmp" in
    hadoop-default.xml to see all the configuration options you'll have to
    change. Here's the list I came up with:

    hadoop.tmp.dir
    fs.checkpoint.dir
    dfs.name.dir
    dfs.data.dir
    mapred.local.dir
    mapred.system.dir
    mapred.temp.dir

    Again, you need to be storing your data somewhere other than /tmp.

    Alex
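
    For reference, a minimal hadoop-site.xml along the lines Alex describes
    might look like the sketch below. The /srv/hadoop paths are only
    illustrative placeholders for whatever persistent local directories you
    choose; dfs.name.dir and dfs.data.dir are the critical ones for not losing
    HDFS data.

    <?xml version="1.0"?>
    <configuration>
      <!-- Base directory for Hadoop's working files; moves everything off /tmp.
           In the stock hadoop-default.xml the other properties in the list
           above default to paths under hadoop.tmp.dir. -->
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/srv/hadoop/tmp</value>
      </property>
      <!-- Where the NameNode keeps its image and edit log
           (the directory the error above could not find). -->
      <property>
        <name>dfs.name.dir</name>
        <value>/srv/hadoop/dfs/name</value>
      </property>
      <!-- Where DataNodes store block data. -->
      <property>
        <name>dfs.data.dir</name>
        <value>/srv/hadoop/dfs/data</value>
      </property>
    </configuration>

    After updating hadoop-site.xml on every node, the new, empty dfs.name.dir
    still has to be formatted once with "hadoop namenode -format" before HDFS
    can be started and the data reloaded.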
  • Pankil Doshi at Apr 15, 2009 at 10:15 pm
    Thanks

    Pankil

Discussion Overview
group: common-user
categories: hadoop
posted: Apr 15, '09 at 1:07a
active: Apr 15, '09 at 10:15p
posts: 3
users: 2
website: hadoop.apache.org...
irc: #hadoop

2 users in discussion
Pankil Doshi: 2 posts, Alex Loddengaard: 1 post
