Hi,

When I run the following command on Mandriva Linux:
hadoop namenode -format

I am getting the following error:

10/10/09 22:32:07 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = java.net.UnknownHostException: cs-sy-249.cse.iitkgp.ernet.in: cs-sy-249.cse.iitkgp.ernet.in
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.20.2+320
STARTUP_MSG: build = -r 9b72d268a0b590b4fd7d13aca17c1c453f8bc957; compiled by 'root' on Mon Jun 28 19:13:09 EDT 2010
************************************************************/
Re-format filesystem in /users/user/hadoop-datastore/hadoop-user/dfs/name ?
(Y or N) Y
10/10/09 22:32:11 INFO namenode.FSNamesystem: fsOwner=user,user
10/10/09 22:32:11 INFO namenode.FSNamesystem: supergroup=supergroup
10/10/09 22:32:11 INFO namenode.FSNamesystem: isPermissionEnabled=true
10/10/09 22:32:12 INFO metrics.MetricsUtil: Unable to obtain hostName
java.net.UnknownHostException: cs-sy-249.cse.iitkgp.ernet.in: cs-sy-249.cse.iitkgp.ernet.in
at java.net.InetAddress.getLocalHost(InetAddress.java:1353)
at org.apache.hadoop.metrics.MetricsUtil.getHostName(MetricsUtil.java:91)
at org.apache.hadoop.metrics.MetricsUtil.createRecord(MetricsUtil.java:80)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.initialize(FSDirectory.java:78)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.<init>(FSDirectory.java:73)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:383)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:904)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:998)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1015)
10/10/09 22:32:12 INFO common.Storage: Image file of size 94 saved in 0 seconds.
10/10/09 22:32:12 INFO common.Storage: Storage directory /users/user/hadoop-datastore/hadoop-user/dfs/name has been successfully formatted.
10/10/09 22:32:12 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException: cs-sy-249.cse.iitkgp.ernet.in: cs-sy-249.cse.iitkgp.ernet.in
************************************************************/

Please help me solve this problem.

Thanks
Regards
Siddharth
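
A note on this error: java.net.UnknownHostException here means Java could
not resolve the machine's own hostname (cs-sy-249.cse.iitkgp.ernet.in) to an
IP address. A minimal way to check and fix this, assuming a standard Linux
setup (the IP address below is only a placeholder; use the machine's real
address):

hostname                      # shows the hostname the system reports
getent hosts "$(hostname)"    # empty output means the name does not resolve

# A common fix is to map the hostname to an address in /etc/hosts:
echo "192.168.0.249  cs-sy-249.cse.iitkgp.ernet.in  cs-sy-249" | sudo tee -a /etc/hosts

Note that the format itself still succeeded ("Storage directory ... has been
successfully formatted"); only the metrics hostname lookup failed.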


  • Siddharth raghuvanshi at Oct 9, 2010 at 5:58 pm
    Hi,
    I am also getting the following error. Please tell me whether this error
    is related to the previous one, which I asked about an hour ago, or
    whether it is a separate error...

    [user@cs-sy-249 hadoop]$ bin/hadoop dfs -copyFromLocal /users/user/Desktop/test_data/ gutenberg

    10/10/09 23:22:15 WARN hdfs.DFSClient: DataStreamer Exception:
    org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/user/gutenberg/pg4300.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1310)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:469)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:512)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:968)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:964)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:962)

    at org.apache.hadoop.ipc.Client.call(Client.java:817)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:221)
    at $Proxy0.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy0.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3000)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2881)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1900(DFSClient.java:2139)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2329)
    10/10/09 23:22:15 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
    10/10/09 23:22:15 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/user/gutenberg/pg4300.txt" - Aborting...
    copyFromLocal: java.io.IOException: File /user/user/gutenberg/pg4300.txt could only be replicated to 0 nodes, instead of 1
    10/10/09 23:22:15 ERROR hdfs.DFSClient: Exception closing file /user/user/gutenberg/pg4300.txt : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/user/gutenberg/pg4300.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1310)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:469)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:512)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:968)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:964)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:962)

    org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/user/gutenberg/pg4300.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1310)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:469)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:512)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:968)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:964)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:962)

    at org.apache.hadoop.ipc.Client.call(Client.java:817)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:221)
    at $Proxy0.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy0.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3000)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2881)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1900(DFSClient.java:2139)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2329)
    [user@cs-sy-249 hadoop]$


    Regards
    Siddharth
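
    For context: "could only be replicated to 0 nodes, instead of 1" means the
    NameNode has no live DataNodes to write to, so this is usually a symptom
    of the DataNode not running rather than an unrelated problem. A quick way
    to confirm, as a sketch (run from the Hadoop install directory):

    jps                              # lists Java daemons; a DataNode entry should appear
    bin/hadoop dfsadmin -report      # "Datanodes available: 0" confirms no DataNode registered
    tail -n 50 logs/hadoop-*-datanode-*.log   # the DataNode log usually shows why it died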



  • Shi Yu at Oct 9, 2010 at 6:16 pm
    I suggest you change the hadoop.tmp.dir value in hadoop-site.xml
    (0.19.x), then reformat and restart. Also double-check whether the host
    machine named in fs.default.name and mapred.job.tracker is reachable.

    Shi
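
    (As a sketch of that suggestion; the directory path is only an example,
    and on 0.20 the file is conf/core-site.xml rather than hadoop-site.xml:)

    bin/stop-all.sh
    # set hadoop.tmp.dir in the config to a fresh, writable directory, e.g.
    #   <property>
    #     <name>hadoop.tmp.dir</name>
    #     <value>/users/user/hadoop-tmp</value>
    #   </property>
    mkdir -p /users/user/hadoop-tmp
    bin/hadoop namenode -format
    bin/start-all.sh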
  • Siddharth raghuvanshi at Oct 10, 2010 at 2:21 pm
    Hi Shi,

    I am a beginner in Hadoop. I have given the following values in core-site.xml:

    <property>
      <name>hadoop.tmp.dir</name>
      <value>/users/user/hadoop-datastore/hadoop</value>
    </property>
    <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:54310</value>
    </property>

    How can I check whether the host machine is reachable or not?

    Also, in mapred-site.xml, I have given:

    <property>
      <name>mapred.job.tracker</name>
      <value>localhost:54311</value>
    </property>


    Please check whether these values are correct; if they are not, what
    should I do?

    Waiting for your reply
    Regards
    Siddharth
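
    (One way to answer the reachability question concretely, assuming nc
    (netcat) is installed; the port numbers come from the values above:)

    ping -c 1 localhost              # basic check that localhost resolves
    nc -z localhost 54310 && echo "NameNode port reachable"
    nc -z localhost 54311 && echo "JobTracker port reachable"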


  • Shi Yu at Oct 10, 2010 at 2:54 pm
    Hi. Were you trying Hadoop on your own computer or on a cluster? My
    guess is that you were trying it on your own computer. I once observed the
    same problem on my laptop when I switched from wireless to a fixed-line
    connection: the IP address changed, but for some reason the configuration
    was not updated. After restarting the network service, the problem was
    fixed. The second (replication) error is related to the first one, because
    apparently the data node is not running. So you'd better double-check the
    network connection of the machine (make sure the "localhost" in your
    configuration file is reachable).

    Shi
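
    (On Mandriva the network service can typically be restarted like this, as
    a sketch of the fix described above:)

    sudo service network restart     # or: sudo /etc/init.d/network restart
    ping -c 1 localhost              # confirm localhost responds afterwards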
  • Siddharth raghuvanshi at Oct 10, 2010 at 3:08 pm
    Hi,

    Thanks for your reply..

    In browser,

    http://localhost:50030/jobtracker.jsp is opening fine
    but
    http://localhost:50060/ is not.

    Since the jobtracker is running, I'm assuming localhost is reachable...
    am I wrong?

    Regards
    Siddharth
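
    (For reference: port 50030 is the JobTracker web UI and port 50060 is the
    TaskTracker web UI, so 50030 opening while 50060 does not suggests the
    TaskTracker itself is down rather than localhost being unreachable. A
    quick check, as a sketch:)

    curl -sI http://localhost:50060/ | head -n 1   # no response => nothing listening on 50060
    jps | grep TaskTracker                         # is the TaskTracker JVM running at all?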
  • Shi Yu at Oct 10, 2010 at 3:44 pm
    Check your log, especially the hadoop-**-tasktracker-TEMP.log. What does
    it say?
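
    (A sketch of that check, assuming the default logs/ directory under the
    Hadoop install; the file name encodes the user and hostname:)

    ls logs/                                      # hadoop-<user>-tasktracker-<host>.log
    tail -n 100 logs/hadoop-*-tasktracker-*.log   # the last lines usually show the startup failure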
    On 2010-10-10 10:07, siddharth raghuvanshi wrote:
    Hi,

    Thanks for your reply..

    In browser,

    http://localhost:50030/jobtracker.jsp is opening fine
    but
    http://localhost:50060/ is not.

    Since jobtracker is running, so I'm assuming localhost is reachable.. am I
    wrong??

    Regards
    Siddharth

    On Sun, Oct 10, 2010 at 8:24 PM, Shi Yuwrote:

    Hi. Were you trying hadoop on your own computer or on a cluster? My guess
    you were trying on your own computer. I once observed the same problem on my
    laptop when I switched from wireless to fixed line connection, since the IP
    address was changed but for some reason the configuration was not updated.
    After restart the network service, the problem was fixed. The second
    replication error is relevant to the first one because apparently the data
    node is not running. So, you'd better double check the network connection of
    the machine (make sure the "localhost" in your configuration file is
    reachable).

    Shi


    On 2010-10-10 9:21, siddharth raghuvanshi wrote:

    Hi Shi,

    I am a beginner in Hadoop. I have given the following value in
    core-site.xml
    <name>hadoop.tmp.dir</name>
    <value>/users/user/hadoop-datastore/hadoop</value>


    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    How will we check whether the host machine is reachable or not?

    Also, in mapred-site.xml, I have given
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>


    Please check whether these values are correct or not, if not correct what
    should I do?

    Waiting for your reply
    Regards
    Siddharth



    On Sat, Oct 9, 2010 at 11:47 PM, Shi Yuwrote:



    I suggest you change the hadoop.tmp.dir value in hadoop-site.xml
    (0.19.x)
    and reformat, restart it. Also double check the host machine in
    fs.default.name and mapred.job.tracker is reachable or not.

    Shi


    On 2010-10-9 12:57, siddharth raghuvanshi wrote:



    Hi,
    I am also getting the following error. Please tell me whether this error
    is
    related to the previous error which I asked an hour before or this is a
    separate error...

    [user@cs-sy-249 hadoop]$ bin/hadoop dfs -copyFromLocal
    /users/user/Desktop/test_data/
    gutenberg

    10/10/09 23:22:15 WARN hdfs.DFSClient: DataStreamer Exception:
    org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
    /user/user/gutenberg/pg4300.txt could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1310)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:469)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:512)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:968)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:964)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:962)

        at org.apache.hadoop.ipc.Client.call(Client.java:817)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:221)
        at $Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3000)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2881)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1900(DFSClient.java:2139)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2329)

    10/10/09 23:22:15 WARN hdfs.DFSClient: Error Recovery for block null bad
    datanode[0] nodes == null

    10/10/09 23:22:15 WARN hdfs.DFSClient: Could not get block locations.
    Source file "/user/user/gutenberg/pg4300.txt" - Aborting...
    copyFromLocal: java.io.IOException: File /user/user/gutenberg/pg4300.txt
    could only be replicated to 0 nodes, instead of 1
    10/10/09 23:22:15 ERROR hdfs.DFSClient: Exception closing file
    /user/user/gutenberg/pg4300.txt : org.apache.hadoop.ipc.RemoteException:
    java.io.IOException: File /user/user/gutenberg/pg4300.txt could only be
    replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1310)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:469)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:512)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:968)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:964)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:962)

    org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
    /user/user/gutenberg/pg4300.txt could only be replicated to 0 nodes,
    instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1310)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:469)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:512)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:968)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:964)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:962)

        at org.apache.hadoop.ipc.Client.call(Client.java:817)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:221)
        at $Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3000)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2881)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1900(DFSClient.java:2139)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2329)
    [user@cs-sy-249 hadoop]$


    Regards
    Siddharth




    On Sat, Oct 9, 2010 at 10:39 PM, siddharth raghuvanshi
    <track009.siddharth@gmail.com> wrote:





    Hi,

    When I am running the following command in Mandriva Linux
    hadoop namenode -format

    I am getting the following error:

    10/10/09 22:32:07 INFO namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG: host = java.net.UnknownHostException:
    cs-sy-249.cse.iitkgp.ernet.in: cs-sy-249.cse.iitkgp.ernet.in
    STARTUP_MSG: args = [-format]
    STARTUP_MSG: version = 0.20.2+320
    STARTUP_MSG: build = -r 9b72d268a0b590b4fd7d13aca17c1c453f8bc957;
    compiled by 'root' on Mon Jun 28 19:13:09 EDT 2010
    ************************************************************/
    Re-format filesystem in /users/user/hadoop-datastore/hadoop-user/dfs/name ?
    (Y or N) Y
    10/10/09 22:32:11 INFO namenode.FSNamesystem: fsOwner=user,user
    10/10/09 22:32:11 INFO namenode.FSNamesystem: supergroup=supergroup
    10/10/09 22:32:11 INFO namenode.FSNamesystem: isPermissionEnabled=true
    10/10/09 22:32:12 INFO metrics.MetricsUtil: Unable to obtain hostName
    java.net.UnknownHostException: cs-sy-249.cse.iitkgp.ernet.in:
    cs-sy-249.cse.iitkgp.ernet.in
        at java.net.InetAddress.getLocalHost(InetAddress.java:1353)
        at org.apache.hadoop.metrics.MetricsUtil.getHostName(MetricsUtil.java:91)
        at org.apache.hadoop.metrics.MetricsUtil.createRecord(MetricsUtil.java:80)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.initialize(FSDirectory.java:78)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.<init>(FSDirectory.java:73)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:383)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:904)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:998)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1015)
    10/10/09 22:32:12 INFO common.Storage: Image file of size 94 saved in 0
    seconds.
    10/10/09 22:32:12 INFO common.Storage: Storage directory
    /users/user/hadoop-datastore/hadoop-user/dfs/name has been successfully
    formatted.
    10/10/09 22:32:12 INFO namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException:
    cs-sy-249.cse.iitkgp.ernet.in: cs-sy-249.cse.iitkgp.ernet.in
    ************************************************************/

    Please help me in solving this problem.

    Thanks
    Regards
    Siddharth





    --
    Postdoctoral Scholar
    Institute for Genomics and Systems Biology
    Department of Medicine, the University of Chicago
    Knapp Center for Biomedical Discovery
    900 E. 57th St. Room 10148
    Chicago, IL 60637, US
    Tel: 773-702-6799


  • Steve Loughran at Oct 11, 2010 at 9:36 am

    On 10/10/10 16:07, siddharth raghuvanshi wrote:
    Hi,

    Thanks for your reply..

    In browser,

    http://localhost:50030/jobtracker.jsp is opening fine
    but
    http://localhost:50060/ is not.

    Since the jobtracker is running, I'm assuming localhost is reachable. Am
    I wrong?
    I'd worry more about the machine name in the error
    cs-sy-249.cse.iitkgp.ernet.in

    I don't know where Hadoop is getting this name from, but it's the one it
    can't see.
  • Stephen Watt at Oct 11, 2010 at 9:55 pm
    Check your /etc/hosts file. I usually resolve this issue by fixing some
    weirdness or misconfiguration in that file.
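    For example, a sketch of what sane entries might look like (the 10.x
    address below is illustrative; use the machine's real IP):

        127.0.0.1     localhost
        10.0.0.249    cs-sy-249.cse.iitkgp.ernet.in   cs-sy-249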

    Regards
    Steve Watt



    From: Steve Loughran <stevel@apache.org>
    To: common-user@hadoop.apache.org
    Date: 10/11/2010 04:36 AM
    Subject: Re: Unknown Host Exception


    On 10/10/10 16:07, siddharth raghuvanshi wrote:
    Hi,

    Thanks for your reply..

    In browser,

    http://localhost:50030/jobtracker.jsp is opening fine
    but
    http://localhost:50060/ is not.

    Since the jobtracker is running, I'm assuming localhost is reachable. Am
    I wrong?
    I'd worry more about the machine name in the error
    cs-sy-249.cse.iitkgp.ernet.in

    I don't know where Hadoop is getting this name from, but it's the one it
    can't see.
  • Siddharth raghuvanshi at Oct 12, 2010 at 6:19 am
    Hi Steve,

    Can you suggest something on this error?

    Regards
    Siddharth
    On Mon, Oct 11, 2010 at 3:05 PM, Steve Loughran wrote:
    On 10/10/10 16:07, siddharth raghuvanshi wrote:

    Hi,

    Thanks for your reply..

    In browser,

    http://localhost:50030/jobtracker.jsp is opening fine
    but
    http://localhost:50060/ is not.

    Since the jobtracker is running, I'm assuming localhost is reachable. Am
    I wrong?
    I'd worry more about the machine name in the error

    cs-sy-249.cse.iitkgp.ernet.in

    I don't know where Hadoop is getting this name from, but it's the one it
    can't see.
  • Steve Loughran at Oct 12, 2010 at 9:47 am

    On 12/10/10 07:18, siddharth raghuvanshi wrote:
    Hi Steve,

    Can you suggest something on this error?

    Regards
    Siddharth

    I'm not going to teach network diagnostics because it's something
    everyone should know. Start by trying nslookup and then ping against
    cs-sy-249.cse.iitkgp.ernet.in
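    A minimal sketch of those checks (the hostname is the one from your
    error; hostname -f is an extra check of what the machine calls itself):

        nslookup cs-sy-249.cse.iitkgp.ernet.in   # does DNS know the name?
        ping -c 3 cs-sy-249.cse.iitkgp.ernet.in  # is the host reachable?
        hostname -f                              # what FQDN does this machine report?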
    On Mon, Oct 11, 2010 at 3:05 PM, Steve Loughran wrote:

    On 10/10/10 16:07, siddharth raghuvanshi wrote:

    Hi,

    Thanks for your reply..

    In browser,

    http://localhost:50030/jobtracker.jsp is opening fine
    but
    http://localhost:50060/ is not.

    Since the jobtracker is running, I'm assuming localhost is reachable. Am
    I wrong?


    I'd worry more about the machine name in the error

    cs-sy-249.cse.iitkgp.ernet.in

    I don't know where Hadoop is getting this name from, but it's the
    one it can't see.
  • Steve Loughran at Oct 12, 2010 at 9:48 am

    On 12/10/10 07:18, siddharth raghuvanshi wrote:
    Hi Steve,

    Can you suggest something on this error??
    That said, I've seen problems if your resolv.conf is a mess and the
    machine can't even work out its own hostname:
    https://issues.apache.org/jira/browse/HDFS-95

    Having a functional network with DNS and reverse DNS is a requirement
    for Hadoop.
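    A sketch of how to confirm that forward and reverse DNS agree for this
    machine, assuming host(1) is installed and the forward lookup returns a
    "has address" line:

        HOST=cs-sy-249.cse.iitkgp.ernet.in
        IP=$(host "$HOST" | awk '/has address/ {print $4; exit}')  # forward lookup
        host "$IP"   # reverse lookup: should map back to $HOST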
