Hi all,

I'm getting the following on initializing my NameNode. The actual line
throwing the exception is

if (atime != -1) {
-> long inodeTime = inode.getAccessTime();


Have I corrupted the fsimage or something? This is on the Cloudera
packaging of Hadoop 0.20.1+133.

Regards,
Bryn

09/10/14 18:12:02 INFO metrics.RpcMetrics: Initializing RPC Metrics with
hostName=NameNode, port=8020
09/10/14 18:12:02 INFO namenode.NameNode: Namenode up at:
10.23.4.172/10.23.4.172:8020
09/10/14 18:12:02 INFO jvm.JvmMetrics: Initializing JVM Metrics with
processName=NameNode, sessionId=null
09/10/14 18:12:02 INFO metrics.NameNodeMetrics: Initializing
NameNodeMeterics using context
object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
09/10/14 18:12:03 INFO namenode.FSNamesystem: fsOwner=hadoop,hadoop
09/10/14 18:12:03 INFO namenode.FSNamesystem: supergroup=supergroup
09/10/14 18:12:03 INFO namenode.FSNamesystem: isPermissionEnabled=false
09/10/14 18:12:03 INFO metrics.FSNamesystemMetrics: Initializing
FSNamesystemMetrics using context
object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
09/10/14 18:12:03 INFO namenode.FSNamesystem: Registered
FSNamesystemStatusMBean
09/10/14 18:12:03 INFO common.Storage: Number of files = 80
09/10/14 18:12:03 INFO common.Storage: Number of files under
construction = 0
09/10/14 18:12:03 INFO common.Storage: Image file of size 19567 loaded
in 0 seconds.
09/10/14 18:12:03 ERROR namenode.NameNode:
java.lang.NullPointerException
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedSetTimes(FSDirectory.java:1232)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedSetTimes(FSDirectory.java:1221)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadFSEdits(FSEditLog.java:776)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:992)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:812)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:364)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:206)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:288)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:968)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:977)
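[Editor's note: the quoted snippet dereferences `inode` without a null check, so an edit-log record that refers to a file deleted later in the same log can crash replay. A minimal, self-contained sketch of that failure mode — using hypothetical stand-in classes, not the real Hadoop `FSDirectory`/`INode` source:]

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified stand-ins for INode / FSDirectory to illustrate
// the failure mode; names and structure are assumptions, not Hadoop source.
class Inode {
    private final long accessTime;
    Inode(long accessTime) { this.accessTime = accessTime; }
    long getAccessTime() { return accessTime; }
}

class MiniDirectory {
    private final Map<String, Inode> inodes = new HashMap<>();

    void add(String path, Inode inode) { inodes.put(path, inode); }
    void delete(String path) { inodes.remove(path); }

    // Unguarded: throws NullPointerException when the path was deleted
    // before the setTimes record is replayed.
    boolean setTimesUnguarded(String path, long atime) {
        Inode inode = inodes.get(path); // null for a deleted path
        if (atime != -1) {
            long inodeTime = inode.getAccessTime(); // NPE if inode == null
            return atime > inodeTime;
        }
        return false;
    }

    // Guarded: silently ignore a times-update for a missing inode.
    boolean setTimesGuarded(String path, long atime) {
        Inode inode = inodes.get(path);
        if (inode == null) {
            return false; // stale edit for a deleted file; skip it
        }
        if (atime != -1) {
            return atime > inode.getAccessTime();
        }
        return false;
    }
}

public class Main {
    public static void main(String[] args) {
        MiniDirectory dir = new MiniDirectory();
        dir.add("/logs/a", new Inode(100L));
        dir.delete("/logs/a"); // file removed before the edit is replayed

        // Guarded replay tolerates the missing inode.
        System.out.println(dir.setTimesGuarded("/logs/a", 200L)); // false

        // Unguarded replay reproduces the crash.
        try {
            dir.setTimesUnguarded("/logs/a", 200L);
            System.out.println("no exception");
        } catch (NullPointerException e) {
            System.out.println("NullPointerException");
        }
    }
}
```

[The guarded variant only shows the general shape of a defensive fix (skip the record when the inode is gone); the fix actually suggested in this thread is the patch attached to HDFS-686.]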


  • Hairong Kuang at Oct 14, 2009 at 4:55 pm
    This might be caused by
    https://issues.apache.org/jira/browse/HDFS-686. I will upload a patch there
    so that you can start your NameNode.

    Hairong

    On 10/14/09 9:15 AM, "Bryn Divey" wrote:

  • Bryn at Oct 14, 2009 at 7:07 pm

    On Wed, 14 Oct 2009 09:53:11 -0700, Hairong Kuang wrote:

    This might be caused by
    https://issues.apache.org/jira/browse/HDFS-686. I will upload a patch there
    so that you can start your NameNode.

    Thanks, Hairong - I'll patch and give it a go.
  • Todd Lipcon at Oct 14, 2009 at 5:03 pm
    Hi Bryn,

    Just to let you know, we've queued the patch Hairong mentioned for the
    next update to our distribution, due out around the end of this month.

    Thanks!

    -Todd

  • Bryn at Oct 15, 2009 at 5:14 pm
    Thanks Todd - it's all working perfectly now. By the way, where is the
    Cloudera repository?
    On Wed, 14 Oct 2009 10:02:22 -0700, Todd Lipcon wrote:

  • Todd Lipcon at Oct 15, 2009 at 6:32 pm

    On Thu, Oct 15, 2009 at 10:14 AM, wrote:

    Thanks Todd - it's all working perfectly now. By the way, where is the
    Cloudera repository?
    http://archive.cloudera.com/

    If you have any questions that are Cloudera-specific (around packaging,
    etc), please use our GetSatisfaction page:
    http://getsatisfaction.com/cloudera/products/cloudera_cloudera_s_distribution_for_hadoop

    (we don't want to confuse people on this list if anything is specific to our
    distro)

    -Todd


Discussion Overview
group: common-user
categories: hadoop
posted: Oct 14, '09 at 4:24p
active: Oct 15, '09 at 6:32p
posts: 6
users: 3
website: hadoop.apache.org...
irc: #hadoop
