Hi,


We are trying to set up a cluster (starting with 2 machines) using the
new 0.20.1 version.

On the master machine, just after the server starts, the name node
dies off with the following exception:

2009-10-13 01:22:24,740 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: Incomplete HDFS URI, no host: hdfs://master_hadoop
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:78)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1373)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1385)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:191)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
at org.apache.hadoop.fs.Trash.<init>(Trash.java:62)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier(NameNode.java:208)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:204)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)

Can anyone help? Also, can anyone send across example configuration
files for 0.20.1 if they differ from what we are using?

The detailed log file is attached.


  • Mikio Uzawa at Oct 13, 2009 at 2:37 pm
    Hi all,

    I posted the three topics below:

    NTT focuses on the social infrastructure with clouds
    A major national paper, the Asahi, talked about the cloud
    NetWorld will dive into the cloud market with Bplats

    http://jclouds.wordpress.com/

    Thanks,

    /mikio uzawa
  • Jun hu at Oct 13, 2009 at 3:38 pm
    I think you should edit core-site.xml (on both the master and slave
    machines):
    ------ core-site.xml -------
    <configuration>

    <property>
    <name>fs.default.name</name>
    <value>hdfs://master_hadoop:54310</value>
    </property>

    <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop-0.20.1/tmp</value>
    </property>

    </configuration>
    On Tue, Oct 13, 2009 at 10:17 PM, Tejas Lagvankar wrote:


    The configuration files are as follows:

    MASTER CONFIG
    ------ conf/masters -------
    master_hadoop

    ------ conf/slaves -------
    master_hadoop
    slave_hadoop

    ------ core-site.xml -------
    <configuration>

    <property>
    <name>fs.default.name</name>
    <value>hdfs://master_hadoop</value>
    </property>

    <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop-0.20.1/tmp</value>
    </property>

    ------ hdfs-site.xml -------
    <property>
    <name>dfs.replication</name>
    <value>2</value>
    </property>


    ------ mapred-site.xml -------
    <property>
    <name>mapred.job.tracker</name>
    <value>tejas_hadoop:9001</value>
    </property>





    SLAVE CONFIG
    ------ core-site.xml -------
    <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop-0.20.1/tmp/</value>
    </property>


    <property>
    <name>fs.default.name</name>
    <value>hdfs://master_hadoop</value>
    </property>


    ------ hdfs-site.xml -------
    <property>
    <name>dfs.replication</name>
    <value>2</value>
    </property>

    ------ mapred-site.xml -------
    <property>
    <name>mapred.job.tracker</name>
    <value>tejas_hadoop:9001</value>
    </property>



    Regards,

    Tejas Lagvankar
    meettejas@umbc.edu
    www.umbc.edu/~tej2




    --
    Best Regards!
    胡俊
  • Kevin Sweeney at Oct 13, 2009 at 3:44 pm
    Hi Tejas,
    I just upgraded to 0.20.1 as well, and your config all looks the same as
    mine except that in core-site.xml I have:

    <configuration>
    <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
    </property>
    </configuration>

    Maybe you need to add the port on yours. I haven't seen that error before,
    but it seems to suggest that it can't resolve the host. I'd say
    double-check your hostnames and that they resolve.

    Hope that helps,
    Kevin
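    Kevin's "check that they resolve" step can be scripted rather than eyeballed; the sketch below just asks the JDK resolver for each name from the thread's config (the hostnames are the thread's aliases, so substitute your own):

    ```java
    import java.net.InetAddress;
    import java.net.UnknownHostException;

    public class ResolveCheck {
        public static void main(String[] args) {
            // Hostnames taken from the thread's conf/masters and conf/slaves;
            // replace these with your own cluster's names.
            String[] hosts = {"master_hadoop", "slave_hadoop"};
            for (String host : hosts) {
                try {
                    InetAddress addr = InetAddress.getByName(host);
                    System.out.println(host + " -> " + addr.getHostAddress());
                } catch (UnknownHostException e) {
                    System.out.println(host + " does not resolve");
                }
            }
        }
    }
    ```

    Note that resolution via /etc/hosts succeeding is necessary but not sufficient: the name also has to be syntactically acceptable to the URI parser, which is where this thread eventually ends up.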


  • Chandan Tamrakar at Oct 13, 2009 at 4:19 pm
    I think you need to specify the port as well in the following property:

    <property>
    <name>fs.default.name</name>
    <value>hdfs://master_hadoop</value>
    </property>




    --
    Chandan Tamrakar
  • Tejas Lagvankar at Oct 13, 2009 at 4:34 pm
    I get the same error even if I specify the port number. I have tried
    port numbers 54310 as well as 9000.


    Regards,
    Tejas
    On Oct 13, 2009, at 12:12 PM, Chandan Tamrakar wrote:

    I think you need to specify the port as well for following port

    <property>
    <name>fs.default.name</name>
    <value>hdfs://master_hadoop</value>
    </property>

  • Kevin Sweeney at Oct 13, 2009 at 4:37 pm
    Did you verify the name resolution?


  • Tejas Lagvankar at Oct 13, 2009 at 4:41 pm
    By name resolution, I assume you mean the names mentioned in
    /etc/hosts. Yes, in the logs, the IP address appears at the beginning.
    Correct me if I'm wrong.
    I will also try using just the IPs instead of the aliases.
    On Oct 13, 2009, at 12:37 PM, Kevin Sweeney wrote:

    did you verify the name resolution?

  • Tejas Lagvankar at Oct 13, 2009 at 5:02 pm
    Hey Kevin,

    You were right...
    I changed all my aliases to IP addresses. It worked!

    Thank you all again :)

    Regards,
    Tejas
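    Switching to raw IP addresses works because a dotted-quad literal always satisfies the host grammar, so the URI parser extracts a non-null host where the underscored alias produced none. A quick check of that behavior (the address below is a placeholder, not from the thread):

    ```java
    import java.net.URI;

    public class IpUriCheck {
        public static void main(String[] args) throws Exception {
            // An IPv4 literal is always a syntactically valid host, so
            // getHost() is non-null even though an alias containing an
            // underscore would be rejected.
            URI uri = new URI("hdfs://192.168.0.1:54310");
            System.out.println(uri.getHost()); // 192.168.0.1
            System.out.println(uri.getPort()); // 54310
        }
    }
    ```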
  • Todd Lipcon at Oct 13, 2009 at 6:05 pm
    Your issue was probably that slave_hadoop and master_hadoop are not valid
    host names:

    RFCs <http://en.wikipedia.org/wiki/Request_for_Comments> mandate that a
    hostname's labels may contain only the
    ASCII <http://en.wikipedia.org/wiki/ASCII> letters 'a' through 'z'
    (case-insensitive), the digits '0' through '9', and
    the hyphen. Hostname labels cannot begin or end with a hyphen. No other
    symbols, punctuation characters, or blank spaces are permitted.

    from http://en.wikipedia.org/wiki/Hostname

    -Todd
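    Todd's rule is exactly what the JDK's URI parser enforces: java.net.URI only fills in the host component when the authority parses as a valid server-based authority, and an underscore makes that parse fail, so getHost() returns null and Hadoop's "Incomplete HDFS URI, no host" check aborts. A minimal sketch of the difference (the hyphenated name is a hypothetical stand-in, not from the thread):

    ```java
    import java.net.URI;

    public class HostnameUriCheck {
        public static void main(String[] args) throws Exception {
            // Underscore is not a legal hostname character, so the JDK
            // cannot parse a server-based authority and getHost() is null.
            URI bad = new URI("hdfs://master_hadoop:54310");
            System.out.println("bad host  = " + bad.getHost());   // null

            // A hyphen is legal, so host and port are extracted normally.
            URI good = new URI("hdfs://master-hadoop:54310");
            System.out.println("good host = " + good.getHost());  // master-hadoop
            System.out.println("good port = " + good.getPort());  // 54310
        }
    }
    ```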


  • Tejas Lagvankar at Oct 13, 2009 at 6:30 pm
    Thanks Todd,

    I never thought of that !!

    Regards,
    Tejas
    On Oct 13, 2009, at 1:50 PM, Todd Lipcon wrote:

    Your issue was probably that slave_hadoop and master_hadoop are not
    valid
    host names:

    "RFCs mandate that a hostname's labels may contain only the ASCII
    letters 'a' through 'z' (case-insensitive), the digits '0' through
    '9', and the hyphen. Hostname labels cannot begin or end with a
    hyphen. No other symbols, punctuation characters, or blank spaces
    are permitted."

    from http://en.wikipedia.org/wiki/Hostname

    -Todd
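    [Editor's note: Todd's point can be checked directly with java.net.URI,
    which is what HDFS uses to parse fs.default.name. A label containing an
    underscore is not a legal hostname, so the URI class reports no host at
    all — a quick sketch, not part of the original thread:]

    ```java
    import java.net.URI;
    import java.net.URISyntaxException;

    public class HostCheck {
        public static void main(String[] args) throws URISyntaxException {
            // An underscore is illegal in a hostname, so java.net.URI cannot
            // treat "master_hadoop" as a host component: getHost() is null,
            // which is what produces "Incomplete HDFS URI, no host".
            URI bad = new URI("hdfs://master_hadoop:54310");
            System.out.println(bad.getHost());   // null

            // A hyphen is legal, so the same URI with "master-hadoop" parses.
            URI good = new URI("hdfs://master-hadoop:54310");
            System.out.println(good.getHost());  // master-hadoop
        }
    }
    ```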
    On Tue, Oct 13, 2009 at 10:01 AM, Tejas Lagvankar wrote:

    Hey Kevin,

    You were right...
    I changed all my aliases to IP addresses. It worked !

    Thank you all again :)

    Regards,
    Tejas
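    [Editor's note: for anyone landing on this thread later, a minimal
    core-site.xml that avoids both pitfalls discussed here (an underscore
    in the hostname and a missing port) might look like the sketch below.
    The IP address and port are placeholders; substitute your master's
    actual address:]

    ```xml
    <configuration>
      <!-- Use an IP address, or a hostname with no underscores;
           54310 is just a commonly used example port. -->
      <property>
        <name>fs.default.name</name>
        <value>hdfs://192.168.1.10:54310</value>
      </property>
    </configuration>
    ```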


    On Oct 13, 2009, at 12:41 PM, Tejas Lagvankar wrote:

    By name resolution, I assume you mean the name mentioned in
    /etc/hosts. Yes, in the logs, the IP address appears at the
    beginning. Correct me if I'm wrong.
    I will also try using just the IPs instead of the aliases.
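    [Editor's note: Kevin's name-resolution check can be scripted. The host
    names below are placeholders for whatever aliases appear in /etc/hosts —
    a sketch, not part of the original thread:]

    ```java
    import java.net.InetAddress;
    import java.net.UnknownHostException;

    public class ResolveCheck {
        public static void main(String[] args) {
            // Placeholder names: substitute the aliases/IPs from /etc/hosts.
            String[] hosts = {"localhost", "master-hadoop", "192.168.1.10"};
            for (String host : hosts) {
                try {
                    InetAddress addr = InetAddress.getByName(host);
                    System.out.println(host + " -> " + addr.getHostAddress());
                } catch (UnknownHostException e) {
                    System.out.println(host + " does NOT resolve");
                }
            }
        }
    }
    ```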

    On Oct 13, 2009, at 12:37 PM, Kevin Sweeney wrote:

    did you verify the name resolution?
    On Tue, Oct 13, 2009 at 4:34 PM, Tejas Lagvankar <tej2@umbc.edu>
    wrote:

    I get the same error even if I specify the port number. I have
    tried with port numbers 54310 as well as 9000.


    Regards,
    Tejas


    On Oct 13, 2009, at 12:12 PM, Chandan Tamrakar wrote:

    I think you need to specify the port as well in the following property:

    <property>
    <name>fs.default.name</name>
    <value>hdfs://master_hadoop</value>
    </property>


    On Tue, Oct 13, 2009 at 7:17 AM, Tejas Lagvankar <tej2@umbc.edu>
    wrote:

    Hi,


    We are trying to set up a cluster (starting with 2 machines)
    using the
    new
    0.20.1 version.

    On the master machine, just after the server starts, the name
    node dies
    off
    with the following exception:

    2009-10-13 01:22:24,740 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: Incomplete HDFS URI, no host: hdfs://master_hadoop
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:78)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1373)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1385)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:191)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
        at org.apache.hadoop.fs.Trash.<init>(Trash.java:62)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier(NameNode.java:208)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:204)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)

    Can anyone help? Also, can anyone send across example configuration
    files for 0.20.1 if they differ from the ones we are using?

    The detailed log file is attached.




    The configuration files are as follows:

    MASTER CONFIG
    ------ conf/masters -------
    master_hadoop

    ------ conf/slaves -------
    master_hadoop
    slave_hadoop

    ------ core-site.xml -------
    <configuration>

    <property>
    <name>fs.default.name</name>
    <value>hdfs://master_hadoop</value>
    </property>

    <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop-0.20.1/tmp</value>
    </property>

    ------ hdfs-site.xml -------
    <property>
    <name>dfs.replication</name>
    <value>2</value>
    </property>


    ------ mapred-site.xml -------
    <property>
    <name>mapred.job.tracker</name>
    <value>tejas_hadoop:9001</value>
    </property>





    SLAVE CONFIG
    ------ core-site.xml -------
    <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop-0.20.1/tmp/</value>
    </property>


    <property>
    <name>fs.default.name</name>
    <value>hdfs://master_hadoop</value>
    </property>


    ------ hdfs-site.xml -------
    <property>
    <name>dfs.replication</name>
    <value>2</value>
    </property>

    ------ mapred-site.xml -------
    <property>
    <name>mapred.job.tracker</name>
    <value>tejas_hadoop:9001</value>
    </property>



    Regards,

    Tejas Lagvankar
    meettejas@umbc.edu
    www.umbc.edu/~tej2







    --
    Chandan Tamrakar



    Tejas Lagvankar
    meettejas@umbc.edu
    www.umbc.edu/~tej2

Discussion Overview
group: common-user @
categories: hadoop
posted: Oct 13, '09 at 2:17p
active: Oct 13, '09 at 6:30p
posts: 11
users: 6
website: hadoop.apache.org...
irc: #hadoop
