FAQ
Hi, we have a Hadoop cluster set up with hadoop 2.0.0-cdh4.3.1. In my application
code, I tried to connect to that cluster to copy a file, but got the error: No
FileSystem for scheme: hdfs

my code:

_fs = FileSystem.get(URI.create(hdfs), new Configuration());


After googling around, it seems this is related to the HDFS jar not being loaded. In
my app's pom file, I included hadoop-common, hadoop-hdfs, hadoop-distcp,
hadoop-mapreduce-client-app, and hadoop-core.


and my Hadoop classpath is:

/etc/hadoop/conf:/workplace/cloudera/parcels/CDH-4.3.1-1.cdh4.3.1.p0.110/lib/hadoop/libexec/../../hadoop/lib/*:/workplace/cloudera/parcels/CDH-4.3.1-1.cdh4.3.1.p0.110/lib/hadoop/libexec/../../hadoop/.//*:/workplace/cloudera/parcels/CDH-4.3.1-1.cdh4.3.1.p0.110/lib/hadoop/libexec/../../hadoop-hdfs/./:/workplace/cloudera/parcels/CDH-4.3.1-1.cdh4.3.1.p0.110/lib/hadoop/libexec/../../hadoop-hdfs/lib/*:/workplace/cloudera/parcels/CDH-4.3.1-1.cdh4.3.1.p0.110/lib/hadoop/libexec/../../hadoop-hdfs/.//*:/workplace/cloudera/parcels/CDH-4.3.1-1.cdh4.3.1.p0.110/lib/hadoop/libexec/../../hadoop-yarn/lib/*:/workplace/cloudera/parcels/CDH-4.3.1-1.cdh4.3.1.p0.110/lib/hadoop/libexec/../../hadoop-yarn/.//*:/workplace/cloudera/parcels/CDH-4.3.1-1.cdh4.3.1.p0.110/lib/hadoop/libexec/../../hadoop-0.20-mapreduce/./:/workplace/cloudera/parcels/CDH-4.3.1-1.cdh4.3.1.p0.110/lib/hadoop/libexec/../../hadoop-0.20-mapreduce/lib/*:/workplace/cloudera/parcels/CDH-4.3.1-1.cdh4.3.1.p0.110/lib/hadoop/libexec/../../hadoop-0.20-mapreduce/.//*


I also tried adding the following property to
/etc/hadoop/conf/hdfs-site.xml:

<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.dfs.DistributedFileSystem</value>
</property>

but that didn't work either.
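One thing to check here: in Hadoop 2.x, DistributedFileSystem lives in the org.apache.hadoop.hdfs package, not org.apache.hadoop.dfs, so a manually set property would need to point at the hdfs package:

```xml
<property>
  <name>fs.hdfs.impl</name>
  <!-- note the package: org.apache.hadoop.hdfs, not org.apache.hadoop.dfs -->
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>
```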


In /etc there are two folders related to Hadoop, /etc/hadoop and
/etc/hadoop-0.20. I guess those are just two different versions; how do I know
which one is used? Also, on the forums people mention the
core-default.xml/hdfs-default.xml files a lot, but I cannot see them anywhere
on my machine. Do those come with an older version?


Thanks.

To unsubscribe from this group and stop receiving emails from it, send an email to scm-users+unsubscribe@cloudera.org.


  • Harsh J at Sep 10, 2013 at 9:26 pm
    in my app's pom file, i included hadoop-common, hadoop-hdfs, hadoop-distcp, hadoop-mapreduce-client-app, hadoop-core,
    You only need "hadoop-client". Please see
    http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/latest/CDH-Version-and-Packaging-Information/cdhvd_topic_8_1.html


    --
    Harsh J

  • Yi dan at Sep 10, 2013 at 10:07 pm
    In my app I need to use the DistCp tool to copy files from S3 to HDFS, so
    I need to include other Hadoop dependencies besides hadoop-client.


  • Serega Sheypak at Sep 11, 2013 at 6:55 am
    There is a file in the Hadoop jars that declares the available FileSystem
    services. When the jars are loaded, it looks like this file gets "overwritten".

    Here it is:

    hadoop-common-2.0.0-cdh4.3.0.jar!/META-INF/services/org.apache.hadoop.fs.FileSystem

    with contents:

    org.apache.hadoop.fs.LocalFileSystem
    org.apache.hadoop.fs.viewfs.ViewFileSystem
    org.apache.hadoop.fs.s3.S3FileSystem
    org.apache.hadoop.fs.s3native.NativeS3FileSystem
    org.apache.hadoop.fs.kfs.KosmosFileSystem
    org.apache.hadoop.fs.ftp.FTPFileSystem
    org.apache.hadoop.fs.HarFileSystem

    hadoop-hdfs-2.0.0-cdh4.3.0.jar!/META-INF/services/org.apache.hadoop.fs.FileSystem

    with contents:

    org.apache.hadoop.hdfs.DistributedFileSystem
    org.apache.hadoop.hdfs.HftpFileSystem
    org.apache.hadoop.hdfs.HsftpFileSystem
    org.apache.hadoop.hdfs.web.WebHdfsFileSystem
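    This "overwriting" typically happens when an application is packaged as a single
    fat/uber jar: both hadoop-common and hadoop-hdfs ship a file at the same path,
    META-INF/services/org.apache.hadoop.fs.FileSystem, and only one copy survives the
    merge, which can drop the hdfs entries. If the app is built with the Maven Shade
    plugin, a sketch of the usual fix is to add a ServicesResourceTransformer, which
    concatenates the service files instead of letting one jar's copy win:

    ```xml
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
          <configuration>
            <transformers>
              <!-- Merge META-INF/services files from all jars -->
              <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
            </transformers>
          </configuration>
        </execution>
      </executions>
    </plugin>
    ```

    Whether this applies depends on how the app.jar is assembled; if a different
    packaging mechanism is used, the equivalent is to ensure the service files are
    concatenated rather than replaced.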

  • Yi dan at Sep 11, 2013 at 5:20 pm
    I tested the same code on my local Mac and it works fine, but when I test it
    on another dev machine, for some reason it only loads the file systems in the fs
    package, not those in the hdfs package. Is there any configuration I need to
    update?

    After unjarring my app.jar, I did find the class file for
    org.apache.hadoop.hdfs.DistributedFileSystem and the other FileSystem classes in
    the hdfs package.

    The pom.xml includes:

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.0.0-cdh4.3.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>2.0.0-cdh4.3.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
      <version>2.0.0-cdh4.3.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-distcp</artifactId>
      <version>2.0.0-cdh4.3.1</version>
    </dependency>


    The configuration resources that the Configuration object reports as loaded:

    Configuration: core-default.xml, core-site.xml, mapred-default.xml,
    mapred-site.xml, yarn-default.xml, yarn-site.xml


    I cannot find core-default.xml, mapred-default.xml, or the yarn*.xml files, but
    here is the core-site.xml file I have:

    /etc/hadoop/conf/core-site.xml

    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://mxn01:8020</value>
        <description>The name of the default file system. A URI whose
          scheme and authority determine the FileSystem implementation. The
          uri's scheme determines the config property (fs.SCHEME.impl) naming
          the FileSystem implementation class. The uri's authority is used to
          determine the host, port, etc. for a filesystem.</description>
      </property>
      <property>
        <name>io.file.buffer.size</name>
        <value>65536</value>
        <description>The size of buffer for use in sequence files.
          The size of this buffer should probably be a multiple of hardware
          page size (4096 on Intel x86), and it determines how much data is
          buffered during read and write operations.</description>
      </property>
      <property>
        <name>hadoop.security.authentication</name>
        <!-- A value of "simple" would disable security. -->
        <value>simple</value>
      </property>
      <property>
        <name>hadoop.security.auth_to_local</name>
        <value>DEFAULT</value>
      </property>
      <property>
        <name>io.compression.codecs</name>
        <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec</value>
        <description>A list of the compression codec classes that can be used
          for compression/decompression.</description>
      </property>
    </configuration>




Discussion Overview
group: scm-users
categories: hadoop
posted: Sep 10, '13 at 5:35p
active: Sep 11, '13 at 5:20p
posts: 5
users: 3
website: cloudera.com
irc: #hadoop

3 users in discussion

Yi dan: 3 posts; Harsh J: 1 post; Serega Sheypak: 1 post


site design / logo © 2022 Grokbase