Hi,
I am using Hadoop 0.20.2. When I try to start Hadoop, the NameNode and
DataNode start fine, but I can't submit jobs. I looked at the logs and found
an error like:

2010-11-25 12:38:02,623 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
2010-11-25 12:38:03,626 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
2010-11-25 12:38:04,627 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
2010-11-25 12:38:05,629 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
2010-11-25 12:38:06,631 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
2010-11-25 12:38:07,636 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
2010-11-25 12:38:08,641 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
2010-11-25 12:38:09,644 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
2010-11-25 12:38:10,645 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:9000. Already tried 8 time(s).
2010-11-25 12:38:11,649 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:9000. Already tried 9 time(s).
2010-11-25 12:38:11,651 ERROR org.apache.hadoop.mapred.TaskTracker: Caught
exception: java.net.ConnectException: Call to
localhost/127.0.0.1:9000 failed on connection exception:
java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
at org.apache.hadoop.ipc.Client.call(Client.java:743)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at $Proxy5.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
at org.apache.hadoop.hdfs.DFSClient.&lt;init&gt;(DFSClient.java:207)
at org.apache.hadoop.hdfs.DFSClient.&lt;init&gt;(DFSClient.java:170)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
at org.apache.hadoop.mapred.TaskTracker.offerService(TaskTracker.java:1033)
at org.apache.hadoop.mapred.TaskTracker.run(TaskTracker.java:1720)
at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:2833)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:592)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
at org.apache.hadoop.ipc.Client.call(Client.java:720)
... 15 more

Can anyone tell me how to solve this problem?
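The repeated retries ending in "Connection refused" mean that nothing was accepting TCP connections on localhost:9000 when the TaskTracker started. As a quick sanity check, independent of Hadoop, a minimal port probe can confirm whether the NameNode is actually listening (this helper is illustrative, not part of the original thread):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# In the failing setup above, no process was listening on localhost:9000,
# so a probe like this would come back False there.
print(port_open("127.0.0.1", 9000))
```

If the probe fails, the NameNode is either not running or is bound to a different address or port than the one configured in fs.default.name.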


  • Rahul patodi at Nov 25, 2010 at 5:06 am
    I think you should check the configuration files in the conf folder and add
    the required entries to
    core-site.xml, mapred-site.xml, and hdfs-site.xml.
    For pseudo-distributed mode you can refer to:
    http://hadoop-tutorial.blogspot.com/2010/11/running-hadoop-in-pseudo-distributed.html
    For distributed mode you can refer to:
    http://hadoop-tutorial.blogspot.com/2010/11/running-hadoop-in-distributed-mode.html

    If you are using Cloudera you can refer to:
    http://cloudera-tutorial.blogspot.com/

    If you have any problem, please leave a comment.


    --
    -Thanks and Regards,
    Rahul Patodi
    Associate Software Engineer,
    Impetus Infotech (India) Private Limited,
    www.impetus.com
    Mob:09907074413
  • Rahul patodi at Nov 25, 2010 at 5:24 am
    Also check your other files, like /etc/hosts.
  • 祝美祺 at Nov 25, 2010 at 5:51 am
    Thanks for your reply. Here is my configuration.
    master:

    core-site.xml:

    <configuration>
    <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.0.142:9000</value>
    </property>

    </configuration>


    hdfs-site.xml:

    <configuration>
    <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/tmp</value>
    </property>
    <property>
    <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/data</value>
    </property>
    <name>dfs.replication</name>
    <value>1</value>
    </property>

    </configuration>

    mapred-site.xml:

    <configuration>
    <property>
    <name>mapred.job.tracker</name>
    <value>192.168.0.142:9200</value>
    <description>jobtracker host port</description>
    </property>
    </configuration>


    They are the same on the slaves. Is there any problem with this
    configuration? I have only one slave, at 192.168.0.177.


    --
    Zhu Mei qi (祝美祺)
    Dept. of Computer Science and Technology, Tsinghua University
    Room 709, WeiQing Building
    Tsinghua University, Beijing 100084, China
    Tel.: +86-10-62789307
  • Rahul patodi at Nov 25, 2010 at 6:00 am
    Please correct your hdfs-site.xml contents (the &lt;property&gt; elements are mis-nested):

    <configuration>
    <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/tmp</value>
    </property>

    <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/data</value>
    </property>
    <property>
    <name>dfs.replication</name>
    <value>1</value>
    </property>

    </configuration>

    Everything else looks fine.
    One more thing: it is best practice to add the details of your
    master and slaves (which you can get from my blog) in:
    /etc/hosts
    conf/masters
    conf/slaves
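    As a sketch of what is being suggested (the hostnames are made up for
    illustration; the IPs are the ones mentioned earlier in this thread),
    the files might look like:

    ```text
    # /etc/hosts (same entries on master and slave)
    192.168.0.142   hadoop-master
    192.168.0.177   hadoop-slave1

    # conf/masters
    hadoop-master

    # conf/slaves
    hadoop-slave1
    ```

    With hostnames in place, fs.default.name and mapred.job.tracker can refer
    to hadoop-master instead of a raw IP.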



  • 祝美祺 at Nov 25, 2010 at 9:21 am
    Silly mistake... Thank you very much for your help.


  • Harsh J at Nov 25, 2010 at 8:51 am
    Hello,

    2010/11/25 祝美祺 <henhaozhumeiqi@gmail.com>:
    <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/tmp</value>
    </property>
    (Nitpicking)

    Why set it to a directory named "tmp"? dfs.name.dir is where the
    NameNode stores all of its metadata about the data across the
    DataNodes. I'd give it a better name than "tmp", which suggests
    it isn't important, while it is VERY important for this directory to
    exist in a usable cluster.
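    For example, a sketch of a more descriptive setting (the path itself is
    an assumption, not from this thread):

    ```xml
    <property>
      <name>dfs.name.dir</name>
      <!-- Illustrative: a path that says "NameNode metadata lives here" -->
      <value>/home/hadoop/dfs/name</value>
    </property>
    ```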

    --
    Harsh J
    www.harshj.com
  • 祝美祺 at Nov 25, 2010 at 9:04 am
    OK, thanks for your advice. I didn't pay much attention to this. It is
    important, and I will be more careful about it.




Discussion Overview
group: mapreduce-user @
categories: hadoop
posted: Nov 25, '10 at 4:53a
active: Nov 25, '10 at 9:21a
posts: 8
users: 3
website: hadoop.apache.org...
irc: #hadoop
