Hi All:

I have Hadoop 0.20.2 and I am using Cygwin on Windows 7. I modified the Hadoop configuration files as shown below.

conf/core-site.xml:

<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9100</value>
</property>
</configuration>


conf/hdfs-site.xml:

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>


conf/mapred-site.xml:

<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9101</value>
</property>
</configuration>

Then I set the PATH variable to include:
$PATH:/cygdrive/c/cygwin/bin:/cygdrive/c/cygwin/usr/bin

I added JAVA_HOME to the file cygwin\home\Williams\hadoop-0.20.2\conf\hadoop-env.sh.
My Java home is now at C:\Java\jdk1.6.0_26, so there is no space in the path. I also turned off my firewall.
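For reference, the PATH and JAVA_HOME settings described above can be sketched as shell lines (the exact /cygdrive paths are taken from this description, so treat them as assumptions for your own install):

```shell
# conf/hadoop-env.sh -- JAVA_HOME must point at a path with no spaces,
# which is why the JDK lives at C:\Java here (C:\ maps to /cygdrive/c)
export JAVA_HOME=/cygdrive/c/Java/jdk1.6.0_26

# shell profile (e.g. ~/.bashrc) -- keep the Cygwin tools on PATH
export PATH=$PATH:/cygdrive/c/cygwin/bin:/cygdrive/c/cygwin/usr/bin
```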
However, I get this error from the command line:

<CODE>
Williams@TWilliams-LTPC ~
$ pwd
/home/Williams

Williams@TWilliams-LTPC ~
$ cd hadoop-0.20.2

Williams@TWilliams-LTPC ~/hadoop-0.20.2
$ bin/start-all.sh
starting namenode, logging to /home/Williams/hadoop-0.20.2/bin/../logs/hadoop-Williams-namenode-TWilliams-LTPC.out
localhost: starting datanode, logging to /home/Williams/hadoop-0.20.2/bin/../logs/hadoop-Williams-datanode-TWilliams-LTPC.out
localhost: starting secondarynamenode, logging to /home/Williams/hadoop-0.20.2/bin/../logs/hadoop-Williams-secondarynamenode-TWilliams-LTPC.out
starting jobtracker, logging to /home/Williams/hadoop-0.20.2/bin/../logs/hadoop-Williams-jobtracker-TWilliams-LTPC.out
localhost: starting tasktracker, logging to /home/Williams/hadoop-0.20.2/bin/../logs/hadoop-Williams-tasktracker-TWilliams-LTPC.out

Williams@TWilliams-LTPC ~/hadoop-0.20.2
$ bin/hadoop fs -put conf input
11/07/27 17:11:28 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9100. Already tried 0 time(s).
11/07/27 17:11:30 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9100. Already tried 1 time(s).
11/07/27 17:11:32 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9100. Already tried 2 time(s).
11/07/27 17:11:34 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9100. Already tried 3 time(s).
11/07/27 17:11:36 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9100. Already tried 4 time(s).
11/07/27 17:11:38 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9100. Already tried 5 time(s).
11/07/27 17:11:40 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9100. Already tried 6 time(s).
11/07/27 17:11:43 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9100. Already tried 7 time(s).
11/07/27 17:11:45 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9100. Already tried 8 time(s).
11/07/27 17:11:47 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9100. Already tried 9 time(s).
Bad connection to FS. command aborted.

Williams@TWilliams-LTPC ~/hadoop-0.20.2
$ bin/hadoop fs -put conf input
11/07/27 17:17:29 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9100. Already tried 0 time(s).
11/07/27 17:17:31 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9100. Already tried 1 time(s).
11/07/27 17:17:33 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9100. Already tried 2 time(s).
11/07/27 17:17:35 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9100. Already tried 3 time(s).
11/07/27 17:17:37 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9100. Already tried 4 time(s).
11/07/27 17:17:39 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9100. Already tried 5 time(s).
11/07/27 17:17:41 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9100. Already tried 6 time(s).
11/07/27 17:17:44 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9100. Already tried 7 time(s).
11/07/27 17:17:46 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9100. Already tried 8 time(s).
11/07/27 17:17:48 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9100. Already tried 9 time(s).
Bad connection to FS. command aborted.

Williams@TWilliams-LTPC ~/hadoop-0.20.2
$ ping 127.0.0.1:9100
Ping request could not find host 127.0.0.1:9100. Please check the name and try again.
</CODE>

I am not sure why the address appears as localhost/127.0.0.1, which looks like it is repeating itself. The conf files are fine. I also know that when Hadoop is running there is a web interface to check, but do the default URLs work from Cygwin? They are:
* NameNode - http://localhost:50070/
* JobTracker - http://localhost:50030/
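When every retry fails like this, the first thing worth checking is whether anything is listening on the NameNode port at all. A sketch using bash's built-in /dev/tcp redirection, so no extra tools are needed (9100 is the port from fs.default.name in core-site.xml above):

```shell
# Try to open a TCP connection to the NameNode RPC port.
# Success means a listener accepted the connection; failure means
# the NameNode is not (yet) serving on that port.
port=9100
if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
  echo "port $port: open"
else
  echo "port $port: closed"
fi
```

Note that `ping 127.0.0.1:9100` can never work: ping operates at the IP layer and knows nothing about ports, which is why it reports an unknown host.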

I wanted to give Cygwin one more try before just switching to a Cloudera Hadoop VMware image. I was hoping there would not be so many problems just getting it working on Windows! Thanks again.

Cheers,
A Df


  • Uma Maheswara Rao G 72686 at Jul 27, 2011 at 4:31 pm
    Hi A Df,

    Did you format the NameNode first?

    Can you check the NN logs to see whether the NN started or not?
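A minimal way to do that check from the Hadoop directory (a sketch: it assumes the JDK's jps tool is on PATH, with an approximate ps fallback otherwise; the log path pattern follows the transcripts in this thread):

```shell
# Count running NameNode JVMs. 0 means the NameNode died at startup;
# the reason will be at the end of logs/hadoop-<user>-namenode-<host>.log.
# A very common cause on 0.20.2 is skipping "bin/hadoop namenode -format".
if command -v jps >/dev/null 2>&1; then
  running=$(jps | grep -c ' NameNode$' || true)    # exact match skips SecondaryNameNode
else
  running=$(ps aux | grep -c '[N]ameNode' || true) # approximate fallback
fi
echo "NameNode processes: $running"
```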

    Regards,
    Uma
    ******************************************************************************************
    This email and its attachments contain confidential information from HUAWEI, which is intended only for the person or entity whose address is listed above. Any use of the information contained here in any way (including, but not limited to, total or partial disclosure, reproduction, or dissemination) by persons other than the intended recipient(s) is prohibited. If you receive this email in error, please notify the sender by phone or email immediately and delete it!
    *****************************************************************************************

    ----- Original Message -----
    From: A Df <abbey_dragonforest@yahoo.com>
    Date: Wednesday, July 27, 2011 9:55 pm
    Subject: cygwin not connecting to Hadoop server
    To: "common-user@hadoop.apache.org" <common-user@hadoop.apache.org>
  • A Df at Jul 27, 2011 at 5:01 pm
    See inline at **. More questions, and many thanks! :D



    ________________________________
    From: Uma Maheswara Rao G 72686 <maheswara@huawei.com>
    To: common-user@hadoop.apache.org; A Df <abbey_dragonforest@yahoo.com>
    Cc: "common-user@hadoop.apache.org" <common-user@hadoop.apache.org>
    Sent: Wednesday, 27 July 2011, 17:31
    Subject: Re: cygwin not connecting to Hadoop server


    Hi A Df,

    Did you format the NameNode first?

    ** I had formatted it already, but then I reinstalled Java and upgraded the plugins in Cygwin, so I reformatted it again. :D Yes, it worked!! I am not sure exactly which steps finally got it working, but I will document them to prevent this headache in the future. I also typed ssh localhost, so my question is: do I need to type ssh localhost each time I run Hadoop? Also, since I need to work with Eclipse, maybe you can have a look at my post about the plugin, because I can't get the patch to work. The subject is "Re: Cygwin not working with Hadoop and Eclipse Plugin". I plan to read up on how to write programs for Hadoop. I am using the tutorial at Yahoo, but if you know of any really good resources on coding with Hadoop, or just on understanding Hadoop, please let me know.

    Can you check the NN logs whether NN is started or not?
    ** I checked; the previous runs had some logs missing, but the last one has all 5 logs, and I got two conf files in XML. I also copied out the other output files, which I plan to examine. Where do I specify the extension I want for my output file? I was hoping for a .txt file, but the output is in a file with no extension, even though I can read it in Notepad++. I also got to view the web interface at:
    NameNode - http://localhost:50070/
    JobTracker - http://localhost:50030/

    ** See below for the working version, finally!! Thanks
    <CMD>
    Williams@TWilliams-LTPC ~/hadoop-0.20.2
    $ bin/hadoop jar hadoop-0.20.2-examples.jar grep input
    11/07/27 17:42:20 INFO mapred.FileInputFormat: Total in

    11/07/27 17:42:20 INFO mapred.JobClient: Running job: j
    11/07/27 17:42:21 INFO mapred.JobClient:  map 0% reduce
    11/07/27 17:42:33 INFO mapred.JobClient:  map 15% reduc
    11/07/27 17:42:36 INFO mapred.JobClient:  map 23% reduc
    11/07/27 17:42:39 INFO mapred.JobClient:  map 38% reduc
    11/07/27 17:42:42 INFO mapred.JobClient:  map 38% reduc
    11/07/27 17:42:45 INFO mapred.JobClient:  map 53% reduc
    11/07/27 17:42:48 INFO mapred.JobClient:  map 69% reduc
    11/07/27 17:42:51 INFO mapred.JobClient:  map 76% reduc
    11/07/27 17:42:54 INFO mapred.JobClient:  map 92% reduc
    11/07/27 17:42:57 INFO mapred.JobClient:  map 100% redu
    11/07/27 17:43:06 INFO mapred.JobClient:  map 100% redu
    11/07/27 17:43:09 INFO mapred.JobClient: Job complete:
    11/07/27 17:43:09 INFO mapred.JobClient: Counters: 18
    11/07/27 17:43:09 INFO mapred.JobClient:   Job Counters
    11/07/27 17:43:09 INFO mapred.JobClient:     Launched r
    11/07/27 17:43:09 INFO mapred.JobClient:     Launched m
    11/07/27 17:43:09 INFO mapred.JobClient:     Data-local
    11/07/27 17:43:09 INFO mapred.JobClient:   FileSystemCo
    11/07/27 17:43:09 INFO mapred.JobClient:     FILE_BYTES
    11/07/27 17:43:09 INFO mapred.JobClient:     HDFS_BYTES
    11/07/27 17:43:09 INFO mapred.JobClient:     FILE_BYTES
    11/07/27 17:43:09 INFO mapred.JobClient:     HDFS_BYTES
    11/07/27 17:43:09 INFO mapred.JobClient:   Map-Reduce F
    11/07/27 17:43:09 INFO mapred.JobClient:     Reduce inp
    11/07/27 17:43:09 INFO mapred.JobClient:     Combine ou
    11/07/27 17:43:09 INFO mapred.JobClient:     Map input
    11/07/27 17:43:09 INFO mapred.JobClient:     Reduce shu
    11/07/27 17:43:09 INFO mapred.JobClient:     Reduce out
    11/07/27 17:43:09 INFO mapred.JobClient:     Spilled Re
    11/07/27 17:43:09 INFO mapred.JobClient:     Map output
    11/07/27 17:43:09 INFO mapred.JobClient:     Map input
    11/07/27 17:43:09 INFO mapred.JobClient:     Combine in
    11/07/27 17:43:09 INFO mapred.JobClient:     Map output
    11/07/27 17:43:09 INFO mapred.JobClient:     Reduce inp
    11/07/27 17:43:09 WARN mapred.JobClient: Use GenericOpt
    e arguments. Applications should implement Tool for the
    11/07/27 17:43:09 INFO mapred.FileInputFormat: Total in
    11/07/27 17:43:09 INFO mapred.JobClient: Running job: j
    11/07/27 17:43:10 INFO mapred.JobClient:  map 0% reduce
    11/07/27 17:43:22 INFO mapred.JobClient:  map 100% redu
    11/07/27 17:43:31 INFO mapred.JobClient:  map 100% redu
    11/07/27 17:43:36 INFO mapred.JobClient:  map 100% redu
    11/07/27 17:43:38 INFO mapred.JobClient: Job complete:
    11/07/27 17:43:39 INFO mapred.JobClient: Counters: 18
    11/07/27 17:43:39 INFO mapred.JobClient:   Job Counters
    11/07/27 17:43:39 INFO mapred.JobClient:     Launched r
    11/07/27 17:43:39 INFO mapred.JobClient:     Launched m
    11/07/27 17:43:39 INFO mapred.JobClient:     Data-local
    11/07/27 17:43:39 INFO mapred.JobClient:   FileSystemCo
    11/07/27 17:43:39 INFO mapred.JobClient:     FILE_BYTES
    11/07/27 17:43:39 INFO mapred.JobClient:     HDFS_BYTES
    11/07/27 17:43:39 INFO mapred.JobClient:     FILE_BYTES
    11/07/27 17:43:39 INFO mapred.JobClient:     HDFS_BYTES
    11/07/27 17:43:39 INFO mapred.JobClient:   Map-Reduce F
    11/07/27 17:43:39 INFO mapred.JobClient:     Reduce inp
    11/07/27 17:43:39 INFO mapred.JobClient:     Combine ou
    11/07/27 17:43:39 INFO mapred.JobClient:     Map input
    11/07/27 17:43:39 INFO mapred.JobClient:     Reduce shu
    11/07/27 17:43:39 INFO mapred.JobClient:     Reduce out
    11/07/27 17:43:39 INFO mapred.JobClient:     Spilled Re
    11/07/27 17:43:39 INFO mapred.JobClient:     Map output
    11/07/27 17:43:39 INFO mapred.JobClient:     Map input
    11/07/27 17:43:39 INFO mapred.JobClient:     Combine in
    11/07/27 17:43:39 INFO mapred.JobClient:     Map output
    11/07/27 17:43:39 INFO mapred.JobClient:     Reduce inp

    Williams@TWilliams-LTPC ~/hadoop-0.20.2
    $ bin/hadoop fs -get output output

    Williams@TWilliams-LTPC ~/hadoop-0.20.2
    $ cat output/*
    cat: output/_logs: Is a directory
    3       dfs.class
    2       dfs.period
    1       dfs.file
    1       dfs.replication
    1       dfs.servers
    1       dfsadmin
    1       dfsmetrics.log
    </CMD>

  • Uma Maheswara Rao G 72686 at Jul 28, 2011 at 5:53 pm
    Hi A Df,

    see inline at ::::::

    ----- Original Message -----
    From: A Df <abbey_dragonforest@yahoo.com>
    Date: Wednesday, July 27, 2011 10:31 pm
    Subject: Re: cygwin not connecting to Hadoop server
    To: "common-user@hadoop.apache.org" <common-user@hadoop.apache.org>
    See inline at **. More questions and many Thanks :D





    Hi A Df,

    Did you format the NameNode first?

    ** I had formatted it already but then I had reinstalled Java and
    upgraded the plugins in cygwin so I reformatted it again. :D yes it
    worked!! I am not sure all the steps that got it to finally work
    :::::: Great :-)
    but I will have to document it to prevent this headache in the
    future. Although I typed ssh localhost too , so question is, do I
    need to type ssh localhost each time I need to run hadoop?? Also,
    :::::: Actually, ssh is an authentication service.
    To save the authentication keys, you can run the commands below; they set up key-based authentication, so there is no need to give a password every time.

    ssh-keygen -t rsa -P ""
    cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys

    then execute
    /etc/init.d/sshd restart

    To connect to remote machines
    cat /root/.ssh/id_rsa.pub | ssh root@<remoteIP> 'cat >> /root/.ssh/authorized_keys'

    then on the remote machine execute
    /etc/init.d/sshd restart
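The key setup above can be rehearsed safely in a scratch directory first (a sketch; it assumes OpenSSH's ssh-keygen, and note that the /root/.ssh paths above become /home/<user>/.ssh for a non-root Cygwin user):

```shell
# Rehearse the key setup in a temporary directory: generate a
# passphrase-less RSA key pair, then append the public half to an
# authorized_keys file with the permissions sshd expects.
tmp=$(mktemp -d)
ssh-keygen -t rsa -P "" -f "$tmp/id_rsa" -q
cat "$tmp/id_rsa.pub" >> "$tmp/authorized_keys"
chmod 600 "$tmp/authorized_keys"
echo "keys in authorized_keys: $(wc -l < "$tmp/authorized_keys")"
```

Once the public key really is in ~/.ssh/authorized_keys, `ssh localhost` should log in without a password and the start scripts stop prompting, so there should be no need to run ssh localhost manually before each Hadoop start.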
    since I need to work with Eclipse maybe you can have a look at my
    post about the plugin cause I can get the patch to work. The
    subject is "Re: Cygwin not working with Hadoop and Eclipse Plugin".
    I plan to read up on how to write programs for Hadoop. I am using
    the tutorial at Yahoo but if you know of any really good about
    coding with Hadoop or just about understanding Hadoop then please
    let me know.
    :::::::: Hadoop: The Definitive Guide is a great book for understanding Hadoop; some sample programs are also available in it.
    To check the Hadoop internals:
    http://www.google.co.in/url?sa=t&source=web&cd=8&ved=0CEMQFjAH&url=http%3A%2F%2Findia.paxcel.net%3A6060%2FLargeDataMatters%2Fwp-content%2Fuploads%2F2010%2F09%2FHDFS1.pdf&rct=j&q=hadoop%20internals%20%2B%20part%201&ei=CqAxTtD8C4fprQfYq6DMCw&usg=AFQjCNGYMQbAeGP0cYGl4OJHseRsfEMGvQ&cad=rja

    Can you check the NN logs whether NN is started or not?
    ** I checked and the previous runs had some logs missing but now
    the last one have all 5 logs and I got two conf files in xml. I
    also copied out the other output files which I plan to examine.
    Where do I specify the output extension format that I want for my
    output file? I was hoping for an txt file it shows the output in a
    file with no extension even though I can read it in Notepad++. I
    also got to view the web interface at:
    NameNode - http://localhost:50070/
    JobTracker - http://localhost:50030/

    ** See below for the working version, finally!! Thanks
    <CMD>
    Williams@TWilliams-LTPC ~/hadoop-0.20.2
    $ bin/hadoop jar hadoop-0.20.2-examples.jar grep input
    11/07/27 17:42:20 INFO mapred.FileInputFormat: Total in

    11/07/27 17:42:20 INFO mapred.JobClient: Running job: j
    11/07/27 17:42:21 INFO mapred.JobClient:  map 0% reduce
    11/07/27 17:42:33 INFO mapred.JobClient:  map 15% reduc
    11/07/27 17:42:36 INFO mapred.JobClient:  map 23% reduc
    11/07/27 17:42:39 INFO mapred.JobClient:  map 38% reduc
    11/07/27 17:42:42 INFO mapred.JobClient:  map 38% reduc
    11/07/27 17:42:45 INFO mapred.JobClient:  map 53% reduc
    11/07/27 17:42:48 INFO mapred.JobClient:  map 69% reduc
    11/07/27 17:42:51 INFO mapred.JobClient:  map 76% reduc
    11/07/27 17:42:54 INFO mapred.JobClient:  map 92% reduc
    11/07/27 17:42:57 INFO mapred.JobClient:  map 100% redu
    11/07/27 17:43:06 INFO mapred.JobClient:  map 100% redu
    11/07/27 17:43:09 INFO mapred.JobClient: Job complete:
    11/07/27 17:43:09 INFO mapred.JobClient: Counters: 18
    11/07/27 17:43:09 INFO mapred.JobClient:   Job Counters
    11/07/27 17:43:09 INFO mapred.JobClient:     Launched r
    11/07/27 17:43:09 INFO mapred.JobClient:     Launched m
    11/07/27 17:43:09 INFO mapred.JobClient:     Data-local
    11/07/27 17:43:09 INFO mapred.JobClient:   FileSystemCo
    11/07/27 17:43:09 INFO mapred.JobClient:     FILE_BYTES
    11/07/27 17:43:09 INFO mapred.JobClient:     HDFS_BYTES
    11/07/27 17:43:09 INFO mapred.JobClient:     FILE_BYTES
    11/07/27 17:43:09 INFO mapred.JobClient:     HDFS_BYTES
    11/07/27 17:43:09 INFO mapred.JobClient:   Map-Reduce F
    11/07/27 17:43:09 INFO mapred.JobClient:     Reduce inp
    11/07/27 17:43:09 INFO mapred.JobClient:     Combine ou
    11/07/27 17:43:09 INFO mapred.JobClient:     Map input
    11/07/27 17:43:09 INFO mapred.JobClient:     Reduce shu
    11/07/27 17:43:09 INFO mapred.JobClient:     Reduce out
    11/07/27 17:43:09 INFO mapred.JobClient:     Spilled Re
    11/07/27 17:43:09 INFO mapred.JobClient:     Map output
    11/07/27 17:43:09 INFO mapred.JobClient:     Map input
    11/07/27 17:43:09 INFO mapred.JobClient:     Combine in
    11/07/27 17:43:09 INFO mapred.JobClient:     Map output
    11/07/27 17:43:09 INFO mapred.JobClient:     Reduce inp
    11/07/27 17:43:09 WARN mapred.JobClient: Use GenericOpt
    e arguments. Applications should implement Tool for the
    11/07/27 17:43:09 INFO mapred.FileInputFormat: Total in
    11/07/27 17:43:09 INFO mapred.JobClient: Running job: j
    11/07/27 17:43:10 INFO mapred.JobClient:  map 0% reduce
    11/07/27 17:43:22 INFO mapred.JobClient:  map 100% redu
    11/07/27 17:43:31 INFO mapred.JobClient:  map 100% redu
    11/07/27 17:43:36 INFO mapred.JobClient:  map 100% redu
    11/07/27 17:43:38 INFO mapred.JobClient: Job complete:
    11/07/27 17:43:39 INFO mapred.JobClient: Counters: 18
    11/07/27 17:43:39 INFO mapred.JobClient:   Job Counters
    11/07/27 17:43:39 INFO mapred.JobClient:     Launched r
    11/07/27 17:43:39 INFO mapred.JobClient:     Launched m
    11/07/27 17:43:39 INFO mapred.JobClient:     Data-local
    11/07/27 17:43:39 INFO mapred.JobClient:   FileSystemCo
    11/07/27 17:43:39 INFO mapred.JobClient:     FILE_BYTES
    11/07/27 17:43:39 INFO mapred.JobClient:     HDFS_BYTES
    11/07/27 17:43:39 INFO mapred.JobClient:     FILE_BYTES
    11/07/27 17:43:39 INFO mapred.JobClient:     HDFS_BYTES
    11/07/27 17:43:39 INFO mapred.JobClient:   Map-Reduce F
    11/07/27 17:43:39 INFO mapred.JobClient:     Reduce inp
    11/07/27 17:43:39 INFO mapred.JobClient:     Combine ou
    11/07/27 17:43:39 INFO mapred.JobClient:     Map input
    11/07/27 17:43:39 INFO mapred.JobClient:     Reduce shu
    11/07/27 17:43:39 INFO mapred.JobClient:     Reduce out
    11/07/27 17:43:39 INFO mapred.JobClient:     Spilled Re
    11/07/27 17:43:39 INFO mapred.JobClient:     Map output
    11/07/27 17:43:39 INFO mapred.JobClient:     Map input
    11/07/27 17:43:39 INFO mapred.JobClient:     Combine in
    11/07/27 17:43:39 INFO mapred.JobClient:     Map output
    11/07/27 17:43:39 INFO mapred.JobClient:     Reduce inp

    Williams@TWilliams-LTPC ~/hadoop-0.20.2
    $ bin/hadoop fs -get output output

    Williams@TWilliams-LTPC ~/hadoop-0.20.2
    $ cat output/*
    cat: output/_logs: Is a directory
    3       dfs.class
    2       dfs.period
    1       dfs.file
    1       dfs.replication
    1       dfs.servers
    1       dfsadmin
    1       dfsmetrics.log
    </CMD>

    Regards,
    Uma
    ******************************************************************************************
    This email and its attachments contain confidential information
    from HUAWEI, which is intended only for the person or entity whose
    address is listed above. Any use of the information contained here
    in any way (including, but not limited to, total or partial
    disclosure, reproduction, or dissemination) by persons other than
    the intended recipient(s) is prohibited. If you receive this email
    in error, please notify the sender by phone or email immediately
    and delete it!
    *****************************************************************************************

    ----- Original Message -----
    From: A Df <abbey_dragonforest@yahoo.com>
    Date: Wednesday, July 27, 2011 9:55 pm
    Subject: cygwin not connecting to Hadoop server
    To: "common-user@hadoop.apache.org" <common-user@hadoop.apache.org>
    Hi All:

    I have Hadoop 0.20.2 and I am using Cygwin on Windows 7. I
    modified the files as shown below for the Hadoop configuration.

    conf/core-site.xml:

    <configuration>
    <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9100</value>
    </property>
    </configuration>


    conf/hdfs-site.xml:

    <configuration>
    <property>
    <name>dfs.replication</name>
    <value>1</value>
    </property>
    </configuration>


    conf/mapred-site.xml:

    <configuration>
    <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9101</value>
    </property>
    </configuration>

    Then I have the PATH variable set with
    $PATH:/cygdrive/c/cygwin/bin:/cygdrive/c/cygwin/usr/bin

    I added JAVA_HOME to the file cygwin\home\Williams\hadoop-0.20.2\conf\hadoop-env.sh.
    My Java home is now at C:\Java\jdk1.6.0_26, so there is no space in the path. I
    also turned off my firewall.
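    [Editor's note: for reference, a hadoop-env.sh line like the following is what a Cygwin setup usually needs; the JDK path here is the one from this message and is an assumption, so adjust it to your own install.]

    ```shell
    # conf/hadoop-env.sh -- point Hadoop at the JDK
    # (example path; matches C:\Java\jdk1.6.0_26 from this thread)
    export JAVA_HOME=/cygdrive/c/Java/jdk1.6.0_26
    ```
    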
    However, I get the error from the command line:

    <CODE>
    Williams@TWilliams-LTPC ~
    $ pwd
    /home/Williams

    Williams@TWilliams-LTPC ~
    $ cd hadoop-0.20.2

    Williams@TWilliams-LTPC ~/hadoop-0.20.2
    $ bin/start-all.sh
    starting namenode, logging to /home/Williams/hadoop-0.20.2/bin/../logs/hadoop-Williams-namenode-TWilliams-LTPC.out
    localhost: starting datanode, logging to /home/Williams/hadoop-0.20.2/bin/../logs/hadoop-Williams-datanode-TWilliams-LTPC.out
    localhost: starting secondarynamenode, logging to /home/Williams/hadoop-0.20.2/bin/../logs/hadoop-Williams-secondarynamenode-TWilliams-LTPC.out
    starting jobtracker, logging to /home/Williams/hadoop-0.20.2/bin/../logs/hadoop-Williams-jobtracker-TWilliams-LTPC.out
    localhost: starting tasktracker, logging to /home/Williams/hadoop-0.20.2/bin/../logs/hadoop-Williams-tasktracker-TWilliams-LTPC.out

    Williams@TWilliams-LTPC ~/hadoop-0.20.2
    $ bin/hadoop fs -put conf input
    11/07/27 17:11:28 INFO ipc.Client: Retrying connect to server:
    localhost/127.0.0.1:9100. Already tried 0 time(s).
    11/07/27 17:11:30 INFO ipc.Client: Retrying connect to server:
    localhost/127.0.0.1:9100. Already tried 1 time(s).
    11/07/27 17:11:32 INFO ipc.Client: Retrying connect to server:
    localhost/127.0.0.1:9100. Already tried 2 time(s).
    11/07/27 17:11:34 INFO ipc.Client: Retrying connect to server:
    localhost/127.0.0.1:9100. Already tried 3 time(s).
    11/07/27 17:11:36 INFO ipc.Client: Retrying connect to server:
    localhost/127.0.0.1:9100. Already tried 4 time(s).
    11/07/27 17:11:38 INFO ipc.Client: Retrying connect to server:
    localhost/127.0.0.1:9100. Already tried 5 time(s).
    11/07/27 17:11:40 INFO ipc.Client: Retrying connect to server:
    localhost/127.0.0.1:9100. Already tried 6 time(s).
    11/07/27 17:11:43 INFO ipc.Client: Retrying connect to server:
    localhost/127.0.0.1:9100. Already tried 7 time(s).
    11/07/27 17:11:45 INFO ipc.Client: Retrying connect to server:
    localhost/127.0.0.1:9100. Already tried 8 time(s).
    11/07/27 17:11:47 INFO ipc.Client: Retrying connect to server:
    localhost/127.0.0.1:9100. Already tried 9 time(s).
    Bad connection to FS. command aborted.

    Williams@TWilliams-LTPC ~/hadoop-0.20.2
    $ bin/hadoop fs -put conf input
    11/07/27 17:17:29 INFO ipc.Client: Retrying connect to server:
    localhost/127.0.0.1:9100. Already tried 0 time(s).
    11/07/27 17:17:31 INFO ipc.Client: Retrying connect to server:
    localhost/127.0.0.1:9100. Already tried 1 time(s).
    11/07/27 17:17:33 INFO ipc.Client: Retrying connect to server:
    localhost/127.0.0.1:9100. Already tried 2 time(s).
    11/07/27 17:17:35 INFO ipc.Client: Retrying connect to server:
    localhost/127.0.0.1:9100. Already tried 3 time(s).
    11/07/27 17:17:37 INFO ipc.Client: Retrying connect to server:
    localhost/127.0.0.1:9100. Already tried 4 time(s).
    11/07/27 17:17:39 INFO ipc.Client: Retrying connect to server:
    localhost/127.0.0.1:9100. Already tried 5 time(s).
    11/07/27 17:17:41 INFO ipc.Client: Retrying connect to server:
    localhost/127.0.0.1:9100. Already tried 6 time(s).
    11/07/27 17:17:44 INFO ipc.Client: Retrying connect to server:
    localhost/127.0.0.1:9100. Already tried 7 time(s).
    11/07/27 17:17:46 INFO ipc.Client: Retrying connect to server:
    localhost/127.0.0.1:9100. Already tried 8 time(s).
    11/07/27 17:17:48 INFO ipc.Client: Retrying connect to server:
    localhost/127.0.0.1:9100. Already tried 9 time(s).
    Bad connection to FS. command aborted.

    Williams@TWilliams-LTPC ~/hadoop-0.20.2
    $ ping 127.0.0.1:9100
    Ping request could not find host 127.0.0.1:9100. Please check the name and try again.
    </CODE>
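    [Editor's note: ping cannot test a TCP port; it only resolves host names, which is why 127.0.0.1:9100 is rejected as a host. A quick way to see whether anything is listening on the NameNode port is bash's /dev/tcp pseudo-device. This is a sketch, and check_port is a hypothetical helper, not part of Hadoop.]

    ```shell
    #!/usr/bin/env bash
    # check_port HOST PORT -- print "open" if a TCP connection to
    # HOST:PORT succeeds, "closed" otherwise. Uses bash's /dev/tcp
    # pseudo-device, so no extra tools (nc, telnet) are needed.
    check_port() {
      local host=$1 port=$2
      # Opening fd 3 read/write on /dev/tcp attempts the connection;
      # the subshell closes it again immediately.
      if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
        echo "open"
      else
        echo "closed"
      fi
    }

    # The NameNode RPC port from core-site.xml: prints "open" once
    # the NameNode is up, "closed" otherwise.
    check_port 127.0.0.1 9100
    ```

    With the NameNode down this prints "closed", which is consistent with the ipc.Client retry loop above.
    
    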

    I am not sure why the IP address appears as
    localhost/127.0.0.1, which looks like it is repeating itself.
    The conf files are fine. I also know that when Hadoop is
    running there is a web interface to check, but do the default
    ones work from Cygwin? They are:
    * NameNode - http://localhost:50070/
    * JobTracker - http://localhost:50030/

    I wanted to give Cygwin one more try before just switching
    to a Cloudera Hadoop VMware image. I was hoping it would not
    have so many problems just to get it working on Windows!
    Thanks again.

    Cheers,
    A Df
  • Uma Maheswara Rao G 72686 at Jul 28, 2011 at 7:50 pm
    Hi A Df,

    see inline at ::::::

    ----- Original Message -----
    From: A Df <abbey_dragonforest@yahoo.com>
    Date: Wednesday, July 27, 2011 10:31 pm
    Subject: Re: cygwin not connecting to Hadoop server
    To: "common-user@hadoop.apache.org" <common-user@hadoop.apache.org>
    See inline at **. More questions and many Thanks :D



    ________________________________
    From: Uma Maheswara Rao G 72686 <maheswara@huawei.com>
    To: common-user@hadoop.apache.org; A Df <abbey_dragonforest@yahoo.com>
    Cc: "common-user@hadoop.apache.org" <common-user@hadoop.apache.org>
    Sent: Wednesday, 27 July 2011, 17:31
    Subject: Re: cygwin not connecting to Hadoop server


    Hi A Df,

    Did you format the NameNode first?

    ** I had formatted it already, but then I reinstalled Java and
    upgraded the plugins in Cygwin, so I reformatted it again. :D Yes, it
    worked!! I am not sure which of the steps got it to finally work,
    :::::: Great :-)
    but I will have to document it to prevent this headache in the
    future. I also typed ssh localhost, so the question is: do I
    need to type ssh localhost each time I need to run Hadoop? Also,
    :::::: Actually, ssh is an authentication service.
    To save the authentication keys, you can run the commands below; then you will not need to enter a password every time.

    ssh-keygen -t rsa -P ""
    cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys

    then execute
    /etc/init.d/sshd restart

    To connect to remote machines:
    cat /root/.ssh/id_rsa.pub | ssh root@<remoteIP> 'cat >> /root/.ssh/authorized_keys'

    then on the remote machine, execute
    /etc/init.d/sshd restart
    Since I need to work with Eclipse, maybe you can have a look at my
    post about the plugin, because I cannot get the patch to work. The
    subject is "Re: Cygwin not working with Hadoop and Eclipse Plugin".
    I plan to read up on how to write programs for Hadoop. I am using
    the tutorial at Yahoo, but if you know of any really good resources
    about coding with Hadoop, or just about understanding Hadoop, then
    please let me know.
    ::::::::Hadoop: The Definitive Guide is a great book for understanding Hadoop. Some sample programs are also available in it.
    To check the Hadoop internals:
    http://india.paxcel.net:6060/LargeDataMatters/wp-content/uploads/2010/09/HDFS1.pdf

    Can you check the NN logs to see whether the NN started or not?
    ** I checked, and the previous runs had some logs missing, but the
    last run has all 5 logs, and I got two conf files in XML. I also
    copied out the other output files, which I plan to examine.
    Where do I specify the extension I want for my output file? I was
    hoping for a .txt file, but it shows the output in a file with no
    extension, even though I can read it in Notepad++. I also got to
    view the web interface at:
    NameNode - http://localhost:50070/
    JobTracker - http://localhost:50030/
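    [Editor's note: on the output-extension question, Hadoop's default TextOutputFormat names its reduce outputs part-00000, part-00001, and so on, with no extension; 0.20 has no setting for the extension short of writing a custom OutputFormat. If all you need is a local .txt file, a sketch under that assumption, reusing the output path from the run above:]

    ```shell
    # Merge everything under the HDFS 'output' directory into one
    # local text file; FsShell's -getmerge does the concatenation.
    bin/hadoop fs -getmerge output output.txt

    # Or, after 'bin/hadoop fs -get output output', merge the local
    # part files yourself (this also skips the output/_logs directory):
    cat output/part-* > output.txt
    ```
    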

    ** See below for the working version, finally!! Thanks
    <CMD>
    Williams@TWilliams-LTPC ~/hadoop-0.20.2
    $ bin/hadoop jar hadoop-0.20.2-examples.jar grep input
    11/07/27 17:42:20 INFO mapred.FileInputFormat: Total in

    11/07/27 17:42:20 INFO mapred.JobClient: Running job: j
    11/07/27 17:42:21 INFO mapred.JobClient: map 0% reduce
    11/07/27 17:42:33 INFO mapred.JobClient: map 15% reduc
    11/07/27 17:42:36 INFO mapred.JobClient: map 23% reduc
    11/07/27 17:42:39 INFO mapred.JobClient: map 38% reduc
    11/07/27 17:42:42 INFO mapred.JobClient: map 38% reduc
    11/07/27 17:42:45 INFO mapred.JobClient: map 53% reduc
    11/07/27 17:42:48 INFO mapred.JobClient: map 69% reduc
    11/07/27 17:42:51 INFO mapred.JobClient: map 76% reduc
    11/07/27 17:42:54 INFO mapred.JobClient: map 92% reduc
    11/07/27 17:42:57 INFO mapred.JobClient: map 100% redu
    11/07/27 17:43:06 INFO mapred.JobClient: map 100% redu
    11/07/27 17:43:09 INFO mapred.JobClient: Job complete:
    11/07/27 17:43:09 INFO mapred.JobClient: Counters: 18
    11/07/27 17:43:09 INFO mapred.JobClient: Job Counters
    11/07/27 17:43:09 INFO mapred.JobClient: Launched r
    11/07/27 17:43:09 INFO mapred.JobClient: Launched m
    11/07/27 17:43:09 INFO mapred.JobClient: Data-local
    11/07/27 17:43:09 INFO mapred.JobClient: FileSystemCo
    11/07/27 17:43:09 INFO mapred.JobClient: FILE_BYTES
    11/07/27 17:43:09 INFO mapred.JobClient: HDFS_BYTES
    11/07/27 17:43:09 INFO mapred.JobClient: FILE_BYTES
    11/07/27 17:43:09 INFO mapred.JobClient: HDFS_BYTES
    11/07/27 17:43:09 INFO mapred.JobClient: Map-Reduce F
    11/07/27 17:43:09 INFO mapred.JobClient: Reduce inp
    11/07/27 17:43:09 INFO mapred.JobClient: Combine ou
    11/07/27 17:43:09 INFO mapred.JobClient: Map input
    11/07/27 17:43:09 INFO mapred.JobClient: Reduce shu
    11/07/27 17:43:09 INFO mapred.JobClient: Reduce out
    11/07/27 17:43:09 INFO mapred.JobClient: Spilled Re
    11/07/27 17:43:09 INFO mapred.JobClient: Map output
    11/07/27 17:43:09 INFO mapred.JobClient: Map input
    11/07/27 17:43:09 INFO mapred.JobClient: Combine in
    11/07/27 17:43:09 INFO mapred.JobClient: Map output
    11/07/27 17:43:09 INFO mapred.JobClient: Reduce inp
    11/07/27 17:43:09 WARN mapred.JobClient: Use GenericOpt
    e arguments. Applications should implement Tool for the
    11/07/27 17:43:09 INFO mapred.FileInputFormat: Total in
    11/07/27 17:43:09 INFO mapred.JobClient: Running job: j
    11/07/27 17:43:10 INFO mapred.JobClient: map 0% reduce
    11/07/27 17:43:22 INFO mapred.JobClient: map 100% redu
    11/07/27 17:43:31 INFO mapred.JobClient: map 100% redu
    11/07/27 17:43:36 INFO mapred.JobClient: map 100% redu
    11/07/27 17:43:38 INFO mapred.JobClient: Job complete:
    11/07/27 17:43:39 INFO mapred.JobClient: Counters: 18
    11/07/27 17:43:39 INFO mapred.JobClient: Job Counters
    11/07/27 17:43:39 INFO mapred.JobClient: Launched r
    11/07/27 17:43:39 INFO mapred.JobClient: Launched m
    11/07/27 17:43:39 INFO mapred.JobClient: Data-local
    11/07/27 17:43:39 INFO mapred.JobClient: FileSystemCo
    11/07/27 17:43:39 INFO mapred.JobClient: FILE_BYTES
    11/07/27 17:43:39 INFO mapred.JobClient: HDFS_BYTES
    11/07/27 17:43:39 INFO mapred.JobClient: FILE_BYTES
    11/07/27 17:43:39 INFO mapred.JobClient: HDFS_BYTES
    11/07/27 17:43:39 INFO mapred.JobClient: Map-Reduce F
    11/07/27 17:43:39 INFO mapred.JobClient: Reduce inp
    11/07/27 17:43:39 INFO mapred.JobClient: Combine ou
    11/07/27 17:43:39 INFO mapred.JobClient: Map input
    11/07/27 17:43:39 INFO mapred.JobClient: Reduce shu
    11/07/27 17:43:39 INFO mapred.JobClient: Reduce out
    11/07/27 17:43:39 INFO mapred.JobClient: Spilled Re
    11/07/27 17:43:39 INFO mapred.JobClient: Map output
    11/07/27 17:43:39 INFO mapred.JobClient: Map input
    11/07/27 17:43:39 INFO mapred.JobClient: Combine in
    11/07/27 17:43:39 INFO mapred.JobClient: Map output
    11/07/27 17:43:39 INFO mapred.JobClient: Reduce inp

    Williams@TWilliams-LTPC ~/hadoop-0.20.2
    $ bin/hadoop fs -get output output

    Williams@TWilliams-LTPC ~/hadoop-0.20.2
    $ cat output/*
    cat: output/_logs: Is a directory
    3 dfs.class
    2 dfs.period
    1 dfs.file
    1 dfs.replication
    1 dfs.servers
    1 dfsadmin
    1 dfsmetrics.log
    </CMD>

    Regards,
    Uma


Discussion Overview
group: common-user @ hadoop
posted: Jul 27, '11 at 4:25p
active: Jul 28, '11 at 7:50p
posts: 5
users: 2 (Uma Maheswara Rao G 72686: 3 posts; A Df: 2 posts)
website: hadoop.apache.org...
irc: #hadoop
