Hi,

I'm using the Hadoop 0.14.1 AMI as my master node [ami-64f6130d], and
I've followed the tutorial up to the "test your cluster" step, as
described here:

http://wiki.apache.org/lucene-hadoop/AmazonEC2

However, when I try to run the following command,

$ bin/hadoop jar hadoop-*-examples.jar pi 10 10000000

I get an error "Failed to create file /user/root/test-mini-mr/in/part0
on client 127.0.0.1 because this cluster has no datanodes."

Specifically:

org.apache.hadoop.ipc.RemoteException: java.io.IOException: Failed to
create file /user/root/test-mini-mr/in/part0 on client 127.0.0.1
because this cluster has no datanodes.
at org.apache.hadoop.dfs.FSNamesystem.startFile(FSNamesystem.java:739)

According to the following thread [
http://mail-archives.apache.org/mod_mbox/lucene-hadoop-user/200706.mbox/%3C466DA5DB.9070808@yahoo-inc.com%3E
], I need to check my IP addresses.

My DynDNS settings are as follows:

hostname: something.webhop.org
wildcard: true
service type: host with IP address
ip address: xx.yy.zz.aa
webhop: http://ec2-xx-yy-zz-aa.z-1.compute-1.amazonaws.com/



For reference, my hadoop-site.xml file is:

<configuration>

<property>
<name>hadoop.tmp.dir</name>
<value>/mnt/hadoop</value>
</property>

<property>
<name>fs.default.name</name>
<value>localhost:50001</value>
</property>

<property>
<name>mapred.job.tracker</name>
<value>localhost:50002</value>
</property>

</configuration>
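(For what it's worth, one quick way to confirm the namenode's view of the cluster is `dfsadmin -report`. This is just a sketch: it assumes the stock install layout on the master, and that 0.14-era report output prints one "Name:" line per registered datanode.)

```shell
# Assumes we're in the Hadoop install directory on the master node.
# dfsadmin -report lists each datanode registered with the namenode;
# in 0.14-era output, each datanode entry starts with a "Name:" line
# (an assumption about this release's output format).
bin/hadoop dfsadmin -report > /tmp/dfs-report.txt

# Count registered datanodes; 0 here matches the error message above.
datanodes=$(grep -c '^Name:' /tmp/dfs-report.txt)
echo "datanodes registered: $datanodes"
```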



Any tips would be very welcome...!

Thanks!

[I've also posted this to Amazon's EC2 forum.]


  • Ted Dunning at Oct 17, 2007 at 9:23 pm
    This usually has to do (in my own limited experience) with host file / dns
    issues or configuration synchronization issues.
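A minimal check along those lines might look like this (a sketch assuming standard Linux tooling like getent, with something.webhop.org standing in for the real DynDNS hostname):

```shell
# Hedged sketch: on each node, check how the master hostname resolves
# and whether /etc/hosts pins anything to the loopback address.
MASTER=something.webhop.org

# What does the resolver return for the master hostname?
getent hosts "$MASTER"

# A loopback entry for the master's name here would make datanodes
# register as 127.0.0.1, which matches the error in the original post.
grep -n '127\.0\.0\.1' /etc/hosts
```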

    On 10/17/07 12:54 PM, "Tiger Uppercut" wrote:

  • Tom White at Oct 17, 2007 at 9:24 pm
    How many instances are you running (what did you set NO_INSTANCES to)?
    Can you connect to the other instances (look in the slaves file for
    their addresses) and verify that datanodes are indeed running?

    Tom
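    That check can be sketched as a loop over the slaves file (assuming
    conf/slaves on the master and passwordless ssh, as on the stock AMI;
    jps ships with the JDK):

    ```shell
    # Sketch: for each slave listed in conf/slaves, see whether a
    # DataNode JVM is actually running there.
    SLAVES_FILE=conf/slaves

    while read -r slave; do
      [ -z "$slave" ] && continue
      echo "--- $slave ---"
      # jps lists running JVMs; a healthy slave should show a DataNode.
      ssh "$slave" 'jps | grep DataNode || echo "no DataNode running"'
    done < "$SLAVES_FILE"
    ```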
    On 17/10/2007, Tiger Uppercut wrote:
  • Tiger Uppercut at Oct 17, 2007 at 11:45 pm
    NO_INSTANCES is set to 2 in the hadoop-env.sh file on my Mac:

    # Hostname of master node in the cluster
    MASTER_HOST=something.webhop.org

    # The number of nodes in your cluster.
    NO_INSTANCES=2

    Here's the mapred-default.xml file:

    <configuration>

    <property>
    <name>mapred.map.tasks</name>
    <value>20</value>
    </property>

    <property>
    <name>mapred.reduce.tasks</name>
    <value>6</value>
    </property>

    </configuration>


    Note: I had to hardcode all of these values, because hadoop-init
    wasn't setting them to NO_INSTANCES * 10 and NO_INSTANCES * 3.
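    For reference, here's the arithmetic the init script was apparently
    meant to do, given NO_INSTANCES=2 (so the hardcoded 20 and 6 above
    line up):

    ```shell
    # map tasks = NO_INSTANCES * 10, reduce tasks = NO_INSTANCES * 3
    NO_INSTANCES=2
    MAP_TASKS=$((NO_INSTANCES * 10))
    REDUCE_TASKS=$((NO_INSTANCES * 3))
    echo "mapred.map.tasks=$MAP_TASKS mapred.reduce.tasks=$REDUCE_TASKS"
    # prints: mapred.map.tasks=20 mapred.reduce.tasks=6
    ```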

    I also had to manually edit hadoop-daemon.sh to set my $HADOOP_MASTER
    address, from

    if [ "$HADOOP_MASTER" != "" ]; then
      echo rsync from $HADOOP_MASTER
      rsync -a -e ssh --delete --exclude=.svn $HADOOP_MASTER/ "$HADOOP_HOME"
    fi

    to:

    if [ "$HADOOP_MASTER" != "" ]; then
      export HADOOP_MASTER=something.webhop.org:/usr/local/hadoop-0.14.1
      echo rsync from $HADOOP_MASTER
      rsync -a -e ssh --delete --exclude=.svn $HADOOP_MASTER/ "$HADOOP_HOME"
    fi
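    A possibly less invasive variant, assuming hadoop-daemon.sh sources
    conf/hadoop-env.sh (as it does in this release line), is to export
    HADOOP_MASTER there instead of patching the daemon script:

    ```shell
    # Append the setting to hadoop-env.sh on the master; the path shown
    # matches the 0.14.1 AMI layout from the original post.
    cat >> conf/hadoop-env.sh <<'EOF'
    export HADOOP_MASTER=something.webhop.org:/usr/local/hadoop-0.14.1
    EOF
    ```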





    On 10/17/07, Tom White wrote:


Discussion Overview
group: common-user
categories: hadoop
posted: Oct 17, '07 at 7:55p
active: Oct 17, '07 at 11:45p
posts: 4
users: 3
website: hadoop.apache.org...
irc: #hadoop
