-----------------------------------------------------------------------------------------
Key: HADOOP-3395
URL: https://issues.apache.org/jira/browse/HADOOP-3395
Project: Hadoop Core
Issue Type: Bug
Components: fs
Affects Versions: 0.17.0
Reporter: Clint Morgan
When starting the namenode, I would get the following exception:
Caused by: java.io.IOException: Incomplete HDFS URI, no host/port: hdfs://0:0:0:0:0:0:0:0:50051
at org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:66)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1275)
at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:56)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1286)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:208)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:108)
at org.apache.hadoop.fs.Trash.<init>(NameNode.java:138)
at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:166)
I tracked this down to NameNode.java, lines 127 and 128. The socketAddress returned by this.server.getListenerAddress() is an IPv6-style address containing colons. This then gets set as the default filesystem, which causes problems on the next call to FileSystem.get.
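To illustrate why a colon-laden IPv6 host breaks the URI, here is a small standalone sketch using only java.net.URI (this does not reproduce the 0.17 DistributedFileSystem check itself, which throws when the host is null; it just shows the parse result):

```java
import java.net.URI;

public class Ipv6UriDemo {
    public static void main(String[] args) throws Exception {
        // An unbracketed IPv6 literal cannot be parsed as a server-based
        // authority, so java.net.URI falls back to a registry-based
        // authority with no host and no port -- exactly the condition
        // DistributedFileSystem.initialize rejects.
        URI bad = new URI("hdfs://0:0:0:0:0:0:0:0:50051");
        System.out.println(bad.getHost());  // null
        System.out.println(bad.getPort());  // -1

        // Bracketed per RFC 2732, the same address parses cleanly.
        URI good = new URI("hdfs://[0:0:0:0:0:0:0:0]:50051");
        System.out.println(good.getHost());  // [0:0:0:0:0:0:0:0]
        System.out.println(good.getPort());  // 50051
    }
}
```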
I replaced the line:
this.nameNodeAddress = this.server.getListenerAddress();
with
this.nameNodeAddress = socAddr;
This made it work for me. However, I gather this would break support for ephemeral ports. Is there a better way to fix this, perhaps by disabling IPv6 elsewhere?
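One possible middle ground (a sketch only, not a tested patch against NameNode.java) would be to keep the configured hostname from socAddr but take the actually-bound port from the listener, so an ephemeral port (0 in the config) still resolves. The following standalone demo shows the idea with a plain ServerSocket standing in for the RPC server:

```java
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class EphemeralPortDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in for the NameNode RPC server: bind to an ephemeral port.
        ServerSocket server = new ServerSocket();
        server.bind(new InetSocketAddress("localhost", 0));

        // Hypothetical fix: combine the configured host with the port the
        // server actually bound, instead of taking the whole listener
        // address (which may come back as a raw IPv6 host string).
        InetSocketAddress configured = new InetSocketAddress("localhost", 0);
        InetSocketAddress effective = new InetSocketAddress(
                configured.getHostName(), server.getLocalPort());

        System.out.println(effective.getHostName());    // localhost
        System.out.println(effective.getPort() > 0);    // true
        server.close();
    }
}
```

Whether this interacts safely with the rest of NameNode startup I can't say; it's just one way to avoid the IPv6 string while keeping ephemeral-port support.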
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.