Problems running Hadoop on single node
Hi, I'm having a few problems getting Hadoop to run on a single node. I
had it up and running fine a couple of days ago, then progressed to
trying to get it going on a small cluster but didn't succeed. So I went
back to running it on a single node, only to find it would no longer
work like that. HDFS seems to start up OK, and I can point the browser
at http://146.169.49.111:50070/dfshealth.jsp and all the details come
up. However, I can't reach the JobTracker the same way on port 50030.
When I try to run a MapReduce job (the wordcount example) I get the
following socket timeout exception:

ray11% bin/hadoop jar hadoop-*-examples.jar wordcount -m 2 -r 3 input/test.txt output
java.net.SocketTimeoutException: timed out waiting for rpc response
    at org.apache.hadoop.ipc.Client.call(Client.java:471)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:163)
    at $Proxy1.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:247)
    at org.apache.hadoop.mapred.JobClient.init(JobClient.java:208)
    at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:200)
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:528)
    at org.apache.hadoop.examples.WordCount.main(WordCount.java:148)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
    at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:143)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:40)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:155)
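
In other words, the job client timed out waiting for an RPC response from the address configured as mapred.job.tracker. A quick sanity check that anything is listening there at all (the address and port are the ones used in this post; the command itself is a generic sketch, not taken from the thread):

# Probe the JobTracker RPC port from the node itself
telnet 146.169.49.111 50011    # "Connection refused" means no JobTracker is listening on that port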

My hadoop-site.xml file is set up as follows:

<configuration>

  <property>
    <name>fs.default.name</name>
    <value>146.169.49.111:50010</value>
  </property>

  <property>
    <name>mapred.job.tracker</name>
    <value>146.169.49.111:50011</value>
  </property>

</configuration>

I'm also getting entries like this in the JobTracker log:
2007-06-13 14:55:17,744 WARN org.apache.hadoop.mapred.JobTracker: Error starting tracker: java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:184)
    at org.apache.hadoop.ipc.Server.start(Server.java:621)
    at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:605)
    at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:92)
    at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:1670)

Hope you can help, and let me know if you require any more information.

Thanks in advance,
Ollie
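
For context on the port choices above: Hadoop releases of this vintage claim a number of 500xx ports for their own daemons by default (50070 and 50030 are the NameNode and JobTracker web UIs mentioned in this post, and 50010 is typically the DataNode's data-transfer port), so pointing fs.default.name or mapred.job.tracker at a port in that range on a single node makes a bind conflict easy to hit. A minimal single-node hadoop-site.xml that stays clear of that range might look like the sketch below; the 9000/9001 values follow the old quickstart convention and are an assumption, not something taken from this thread.

<configuration>

  <!-- NameNode RPC endpoint; 9000 avoids the 500xx ports the daemons use by default -->
  <property>
    <name>fs.default.name</name>
    <value>146.169.49.111:9000</value>
  </property>

  <!-- JobTracker RPC endpoint; any free port outside the daemons' default range should work -->
  <property>
    <name>mapred.job.tracker</name>
    <value>146.169.49.111:9001</value>
  </property>

</configuration>

As the replies below note, the original ipaddress:50011 setting most likely stopped working because something else was already bound to that port (that is what the BindException in the log indicates); an out-of-the-way port (40000 in the follow-up, 9001 here) simply sidesteps the collision.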


  • Oliver Haggarty at Jun 14, 2007 at 3:17 pm
    Hi, I seem to have sorted this now. I first tried completely
    reinstalling Hadoop, and things started working again. However, I
    don't think that was the real problem: I had changed mapred.job.tracker
    in the hadoop-site.xml file to local.

    When I changed it back to the old setting of ipaddress:50011 it no
    longer worked, with the same fault as before. Having searched through
    the mailing list I found a post suggesting port 40000, which I tried,
    and that worked. It then went on to work on a cluster of 8 computers.

    This is fantastic, but I don't understand why port 50011 didn't work,
    since other users listed it as the port they used. Does anyone know why
    certain ports work but not others? Is this down to Hadoop or to my
    local system?
    Thanks again,
    Ollie

    Oliver Haggarty wrote:
    [quoted text trimmed; see the original message above]
  • Mahajan, Neeraj at Jun 14, 2007 at 4:30 pm
    Your log says "Error starting tracker: java.net.BindException: Address
    already in use", which means that some other program is already bound
    to that port and listening, so Hadoop cannot bind to it. Use nmap or
    some similar utility to find out which binary is using that port. It
    could also be that an earlier run of Hadoop hasn't completely exited
    and is still bound to the port; in that case you will have to kill that
    Java process (one way to check and clean this up is sketched at the end
    of this thread).

    ~ Neeraj

    -----Original Message-----
    From: Oliver Haggarty
    Sent: Thursday, June 14, 2007 8:18 AM
    To: hadoop-user@lucene.apache.org
    Subject: Re: Problems running Hadoop on single node

    [Oliver's reply and the original message, quoted in full above; quote trimmed]
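
Neeraj's advice boils down to finding what already owns the port and clearing it. A minimal diagnostic sketch along those lines, assuming a Linux node and the port numbers used earlier in the thread; it swaps in netstat/lsof (which report the owning process) for the suggested nmap, and the exact commands are illustrative rather than taken from the original posts:

# What is already listening on the JobTracker port (50011 in this setup)?
netstat -tlnp | grep 50011     # Linux: -p shows the owning PID/program (may need root)
lsof -i :50011                 # alternative where lsof is installed

# Look for Hadoop daemons left over from an earlier run
jps                            # lists running JVMs: NameNode, DataNode, JobTracker, TaskTracker, ...

# Shut them down cleanly, or kill a stuck one as a last resort
bin/stop-all.sh
kill <pid-of-stuck-daemon>

Once nothing else holds the port, the original 50011 setting should bind again; otherwise picking an unused port, as in the follow-up above, is the quick fix.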
