On Thu, Aug 11, 2011 at 3:03 PM, A Df wrote:

Hi again:
I did format the namenode, and it had a problem with a folder being locked. I
tried again and it formatted, but it still doesn't work. When I try to copy the
input files and run the examples jar, it gives:
my-user@ngs:~/hadoop-0.20.2_pseudo> bin/hadoop fs -put input input
11/08/11 10:25:11 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/my-user/input/HadoopInputFile_Request_2011-08-05_162106_1.txt could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
        at org.apache.hadoop.ipc.Client.call(Client.java:740)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
11/08/11 10:25:11 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
11/08/11 10:25:11 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/my-user/input/HadoopInputFile_Request_2011-08-05_162106_1.txt" - Aborting...
put: java.io.IOException: File /user/my-user/input/HadoopInputFile_Request_2011-08-05_162106_1.txt could only be replicated to 0 nodes, instead of 1
11/08/11 10:25:11 ERROR hdfs.DFSClient: Exception closing file /user/my-user/input/HadoopInputFile_Request_2011-08-05_162106_1.txt : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/my-user/input/HadoopInputFile_Request_2011-08-05_162106_1.txt could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/my-user/input/HadoopInputFile_Request_2011-08-05_162106_1.txt could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
        at org.apache.hadoop.ipc.Client.call(Client.java:740)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
my-user@ngs:~/hadoop-0.20.2_pseudo> bin/hadoop fs -ls
Found 1 items
drwxr-xr-x   - my-user supergroup          0 2011-08-11 10:25 /user/my-user/input
my-user@ngs:~/hadoop-0.20.2_pseudo> ls
bin docs input logs
build.xml hadoop-0.20.2-ant.jar ivy NOTICE.txt
c++ hadoop-0.20.2-core.jar ivy.xml README.txt
CHANGES.txt hadoop-0.20.2-examples.jar lib src
conf hadoop-0.20.2-test.jar librecordio webapps
contrib hadoop-0.20.2-tools.jar LICENSE.txt
my-user@ngs:~/hadoop-0.20.2_pseudo> bin/hadoop hadoop-0.20.2-examples.jar wordcount input output
Exception in thread "main" java.lang.NoClassDefFoundError: hadoop-0/20/2-examples/jar
my-user@ngs:~/hadoop-0.20.2_pseudo> bin/hadoop hadoop-0.20.2-examples.jar grep input output 'dfs[a-z.]+'
Exception in thread "main" java.lang.NoClassDefFoundError: hadoop-0/20/2-examples/jar
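Could the failure of the last two commands simply be that I left out the "jar"
subcommand, so that bin/hadoop tried to load the jar's filename as a class
name? I believe the intended form is:

bin/hadoop jar hadoop-0.20.2-examples.jar wordcount input output

Also, from what I have read, "could only be replicated to 0 nodes, instead of
1" usually means the NameNode sees no live DataNode at all, so before trying
again I will check the datanode log and whether a DataNode has actually
registered, e.g.:

jps
bin/hadoop dfsadmin -report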
________________________________
From: Harsh J <harsh@cloudera.com>
To: common-user@hadoop.apache.org
Sent: Thursday, 11 August 2011, 6:28
Subject: Re: Where is web interface in stand alone operation?
Note: NameNode format affects the directory specified by "dfs.name.dir"
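If you want to be certain where the format lands, you can pin it down in
hdfs-site.xml before formatting; the path below is only an example, use a
directory your user owns:

<property>
  <name>dfs.name.dir</name>
  <value>/home/my-user/hadoop-data/dfs/name</value>
</property>

By default it resolves to ${hadoop.tmp.dir}/dfs/name, which usually lives
under /tmp and can get cleaned up or left locked between runs.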
On Thu, Aug 11, 2011 at 10:57 AM, Harsh J wrote:

Have you done the following?
bin/hadoop namenode -format
On Thu, Aug 11, 2011 at 10:50 AM, A Df <abbey_dragonforest@yahoo.com> wrote:
Hello Again:
I extracted Hadoop and changed the XML as shown in the tutorial, but now it
seems it cannot get a connection. I am using PuTTY to ssh to the server, and
I changed the config files to set it up in pseudo mode as shown:
conf/core-site.xml:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
conf/hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.http.address</name>
<value>0.0.0.0:3500</value>
</property>
</configuration>
conf/mapred-site.xml:
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
<property>
<name>mapred.job.tracker.http.address</name>
<value>0.0.0.0:3501</value>
</property>
</configuration>
I tried to format the namenode and started all the processes, but I notice
that when I stop them, it says the namenode was not running. When I try to
run the example jar, it keeps timing out when connecting to 127.0.0.1:port#.
I used various port numbers and tried replacing localhost with the server's
name, but it still times out. It also prints a long address,
name.server.ac.uk/161.74.12.97:3000, which seems to repeat itself, since
name.server.ac.uk already has the IP address 161.74.12.97. The console
message is shown below. I was also having problems where it did not want to
format the namenode.
Is something wrong with connecting to the namenode, and what would cause it
not to format?
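Could it be that a stale NameNode from an earlier attempt is still running
and holding a lock on the name directory (I gather Hadoop keeps an
in_use.lock file there while a daemon is up)? Before formatting again I plan
to check for leftover daemons, roughly:

jps                  # any NameNode/DataNode/JobTracker still running?
bin/stop-all.sh      # stop everything cleanly
bin/hadoop namenode -format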
2011-08-11 05:49:13,529 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = name.server.ac.uk/161.74.12.97
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2011-08-11 05:49:13,663 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=3000
2011-08-11 05:49:13,669 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: name.server.ac.uk/161.74.12.97:3000
2011-08-11 05:49:13,672 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
2011-08-11 05:49:13,674 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2011-08-11 05:49:13,755 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=my-user,users,cluster_login
2011-08-11 05:49:13,755 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2011-08-11 05:49:13,756 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2011-08-11 05:49:13,768 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
2011-08-11 05:49:13,770 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
2011-08-11 05:49:13,812 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: NameNode is not formatted.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
2011-08-11 05:49:13,813 INFO org.apache.hadoop.ipc.Server: Stopping server on 3000
2011-08-11 05:49:13,814 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: NameNode is not formatted.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
2011-08-11 05:49:13,814 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at name.server.ac.uk/161.74.12.97
************************************************************/
Thank you,
A Df
________________________________
From: Harsh J <harsh@cloudera.com>
To: A Df <abbey_dragonforest@yahoo.com>
Sent: Wednesday, 10 August 2011, 15:13
Subject: Re: Where is web interface in stand alone operation?
A Df,
On Wed, Aug 10, 2011 at 7:28 PM, A Df <abbey_dragonforest@yahoo.com> wrote:
Hello Harsh:
See inline at *
________________________________
From: Harsh J <harsh@cloudera.com>
To: common-user@hadoop.apache.org; A Df <abbey_dragonforest@yahoo.com>
Sent: Wednesday, 10 August 2011, 14:44
Subject: Re: Where is web interface in stand alone operation?
A Df,
The web UIs are a feature of the daemons JobTracker and NameNode. In
standalone/'local'/'file:///' modes, these daemons aren't run
(actually, no daemon is run at all), and hence there would be no 'web'
interface.
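(You can confirm this yourself if you like: start a local-mode job and run
jps from another shell; you should see only your own client JVM, shown as
RunJar, plus Jps itself, and no NameNode/DataNode/JobTracker/TaskTracker
entries:

jps)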
*ok, but is there any other way to check the performance in this mode, such
as time to complete, etc.? I am trying to compare performance between the
two. And also, for pseudo mode, how would I change the ports for the web
interface? I have to connect to a remote server which only allows certain
ports to be accessed from the web.
The ports Kai mentioned above are sourced from the configs:
dfs.http.address (hdfs-site.xml) and mapred.job.tracker.http.address
(mapred-site.xml). You can change them to bind to a host:port of your
preference.
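As for timing standalone runs: with no daemons there is no UI to consult, so
the simplest like-for-like comparison is probably wall-clock time plus the
counters each job prints when it finishes. A rough sketch:

time bin/hadoop jar hadoop-0.20.2-examples.jar wordcount input output

Run the same command in pseudo-distributed mode and compare; there the
JobTracker web UI additionally gives you per-task timings.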
--
Harsh J