Hey all,
I'm trying to write a simple YARN-based application, but I'm stuck on writing
files to HDFS. I've figured out the mechanics of running JAR files in
containers on the various machines, but when I try to create a FileSystem
object, I get the following exception:
java.io.IOException: No FileSystem for scheme: hdfs
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2138)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2145)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:80)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2184)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2166)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:302)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:158)
The line causing the error is this one:
FileSystem hdfs = FileSystem.get(conf);
The conf object (org.apache.hadoop.conf.Configuration) is built from the
core-site.xml and hdfs-site.xml files. I'm running the JAR as a user called
"yarn", and the directory I'm trying to access does exist, though we're not
at that point yet - I'm just trying to create the FileSystem object.
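For reference, the relevant code is roughly the following (the config file
paths are placeholders for wherever they actually live on the node):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    Configuration conf = new Configuration();
    // Placeholder paths - in my code these point at the cluster's real config files.
    conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
    conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));

    // This is the call that throws "No FileSystem for scheme: hdfs".
    FileSystem hdfs = FileSystem.get(conf);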
I've searched Google, and the only help I could find relates to a bug that
existed in HDFS 2.0.0-alpha. I also checked the fs.default.name value in the
configuration, and it is reported as "hdfs://namenode.local:8020", which
seems right.
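(I checked it with something like the following, just to confirm the right
config files were being picked up:)

    // Prints "hdfs://namenode.local:8020" on my setup.
    System.out.println(conf.get("fs.default.name"));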
Any help/ideas would be greatly appreciated.
Thanks,
Thinus