FAQ
After creating and starting the Hadoop namenode, you can only connect to it by hostname, not by IP address.

Example: the hostname for the box is sunsystem07; its IP is 10.120.16.99.

If you use the URL "hdfs://10.120.16.99" to connect to the namenode, the following message is printed:

Wrong FS: hdfs://10.120.16.99:9000/, expected:
hdfs://sunsystem07:9000

You can only connect successfully if "hdfs://sunsystem07:9000" is used.

It seems reasonable to allow connection either by IP or by hostname. Is there a reason for this behavior, or is it a bug?
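For context, the "Wrong FS" check compares the authority of the URI you pass in against the namenode URI in the cluster configuration, and it appears to be a string comparison, so "10.120.16.99" does not match "sunsystem07" even though both resolve to the same box. A minimal configuration sketch (fs.default.name was the relevant property in Hadoop of this era; the hostname is taken from the example above):

```xml
<!-- hadoop-site.xml: the default filesystem URI the client expects -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://sunsystem07:9000</value>
  </property>
</configuration>
```

Clients that build their URL from this value (or pass exactly this string) connect successfully; any other spelling of the authority fails the check.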

  • Raghu Angadi at Feb 5, 2009 at 10:52 pm
    I don't think it is intentional. Please file a JIRA with all the details
    about how to reproduce it (with the actual configuration files).

    thanks,
    Raghu.

    Habermaas, William wrote:
Discussion Overview
group: common-user
category: hadoop
posted: Feb 5, '09 at 4:33p
active: Feb 5, '09 at 10:52p
posts: 2
users: 2
website: hadoop.apache.org...
irc: #hadoop
