Hi all,

I am just getting started with Hadoop 0.20 and trying to run a job in
pseudo-distributed mode.

I configured Hadoop according to the tutorial, but it does not seem to
work as expected.

My map/reduce tasks run sequentially, and the output is stored on the
local filesystem instead of in DFS. The job tracker does not see the
running job at all. I have checked the logs but don't see any errors
either. I have also copied some files to DFS manually to confirm that
DFS itself works.

The only difference between the manual and my configuration is that I
had to change the ports for the job tracker and the namenode, because
9000 and 9001 are already in use by other apps on my workstation.
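
(For reference, a pseudo-distributed hadoop-site.xml with non-default
ports would look roughly like this. The ports 9100 and 9101 are
placeholders, not necessarily the values used here; any free ports
should work.)

```xml
<configuration>
  <!-- HDFS namenode address; the port is a placeholder -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9100</value>
  </property>
  <!-- JobTracker address; the port is a placeholder -->
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9101</value>
  </property>
  <!-- Single node, so keep only one block replica -->
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```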

Any hints?

Thanks

Regards,

Vasyl


  • Aaron Kimball at Jun 2, 2009 at 12:43 am
    Can you post the contents of your hadoop-site.xml file here?
    - Aaron
    On Sat, May 30, 2009 at 2:44 AM, Vasyl Keretsman wrote:

  • Vasyl Keretsman at Jun 2, 2009 at 7:50 am
Sorry, it was my fault.

Instead of starting my job as bin/hadoop jar job.jar, I ran it as
bin/hadoop -cp job.jar.

    I thought it would be the same.
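
(For anyone hitting the same symptoms: a sketch of the working
submission, where job.jar, the main class, and the input/output paths
are all placeholders. The jar runner applies the site configuration
and submits the job to the cluster, which running the jar any other
way does not.)

```shell
# Correct: submit through Hadoop's jar runner so hadoop-site.xml
# is applied and the job goes to the job tracker, not local mode.
bin/hadoop jar job.jar org.example.MyJob input output
```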

    Thanks anyway

    Vasyl

    2009/6/2 Aaron Kimball <aaron@cloudera.com>:
  • Aaron Kimball at Jun 2, 2009 at 5:54 pm
    Glad you got it sorted out
    - Aaron
    On Tue, Jun 2, 2009 at 12:49 AM, Vasyl Keretsman wrote:


Discussion Overview
group: common-user @ hadoop
posted: May 30, '09 at 9:44a
active: Jun 2, '09 at 5:54p
posts: 4
users: 2
website: hadoop.apache.org...
irc: #hadoop

2 users in discussion: Aaron Kimball (2 posts), Vasyl Keretsman (2 posts)

site design / logo © 2022 Grokbase