No FileSystem for scheme: hdfs

Kishore Yellamraju
Apr 24, 2013 at 12:15 am
I have an issue with my Storm worker trying to talk to CDH4 Hadoop. It is
working fine on CDH3, so I am guessing that some client jar files are missing.

I have browsed through forums, and some of them suggested setting the
"fs.hdfs.impl" property, but I am not sure whether that will fix it. I thought
I would post this to the group while I troubleshoot on my side.

Any suggestion is appreciated.


Exception is :

java.io.IOException: No FileSystem for scheme: hdfs
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2250)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2257)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:86)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2296)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2278)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:316)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:194)
at com.rocketfuel.grid.rtd.storm.SequenceFileWriter.write(SequenceFileWriter.java:78)
at com.rocketfuel.grid.rtd.storm.bidder.bolt.SupplyMetricsHDFSWriter.write(SupplyMetricsHDFSWriter.java:85)
at com.rocketfuel.grid.rtd.storm.bidder.bolt.SupplyMetricsAggregatorBolt$SupplyMetricsCache.flush(SupplyMetricsAggregatorBolt.java:168)
at com.rocketfuel.grid.rtd.storm.CachedAggregator$AggregationCache.flush(CachedAggregator.java:135)
at com.rocketfuel.grid.rtd.storm.FlushableCache$1.run(FlushableCache.java:74)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
-Thanks
kishore kumar yellamraju
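For background: Hadoop 2.x (CDH4) resolves the hdfs:// scheme through
META-INF/services/org.apache.hadoop.fs.FileSystem entries on the classpath,
while Hadoop 1.x (CDH3) shipped a default fs.hdfs.impl mapping in
core-default.xml, which is why the same setup can work on CDH3 and fail on
CDH4 with this exception. The "fs.hdfs.impl" workaround mentioned above
restores that mapping explicitly; a sketch for core-site.xml (note that it
masks, rather than fixes, a missing hadoop-hdfs jar or service file):

```xml
<!-- core-site.xml: explicitly bind the hdfs:// scheme to its FileSystem
     implementation, bypassing the service-loader lookup that fails with
     "No FileSystem for scheme: hdfs". -->
<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>
```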


6 responses

  • Andrew Wang at Apr 24, 2013 at 12:26 am
    Hi Kishore,

    Like you guessed, it's probably a classpath issue. It looks like it's not
    picking up your HDFS jars.

    Best,
    Andrew

  • Kishore Yellamraju at Apr 24, 2013 at 12:31 am
    Thank you, Andrew, for your prompt response. I checked the classpath and I
    see the jar files below. Am I missing anything else?

    hadoop-core.jar, hadoop-hdfs-2.0.0-cdh4.2.0.jar, hadoop-common.jar

    So you would not suggest adding the "fs.hdfs.impl" property?

    Thanks Again !!!

    -Thanks
    kishore kumar yellamraju | Ground control operations |
    kishore@rocketfuel.com | 408.203.0424


  • Andrew Wang at Apr 24, 2013 at 12:44 am
    I don't think setting that property will help, based on Harsh's comment
    in a similar-looking post:

    https://groups.google.com/a/cloudera.org/forum/?fromgroups=#!topic/scm-users/lyho8ptAzE0%5B1-25-false%5D

    Can you verify that you built and ran your code with the correct classpath,
    e.g. the contents of `hadoop classpath`? More details here would be great.

    Thanks,
    Andrew

  • Kishore Yellamraju at Apr 24, 2013 at 1:06 am
    We are not explicitly specifying the Hadoop classpath; we just add the
    Hadoop jars by symlinking them from the Storm lib dir to hadoop-client,
    and Storm picks these jars up from its own lib dir.

    The command that gets executed on the Storm side is (I have highlighted
    the Hadoop jars in the classpath):

    java -client -Dstorm.options= -Dstorm.home=/dummy//app/storm
    -Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib -cp
    /dummy//app/storm/storm-0.7.4.4.jar:/dummy//app/storm/lib/asm-3.2.jar:/dummy//app/storm/lib/carbonite-1.0.1.jar:/dummy//app/storm/lib/clj-time-0.4.1.jar:/dummy//app/storm/lib/clojure-1.4.0.jar:/dummy//app/storm/lib/clout-0.4.1.jar:/dummy//app/storm/lib/commons-codec-1.4.jar:/dummy//app/storm/lib/commons-exec-1.1.jar:/dummy//app/storm/lib/commons-fileupload-1.2.1.jar:/dummy//app/storm/lib/commons-io-1.4.jar:/dummy//app/storm/lib/commons-lang-2.5.jar:/dummy//app/storm/lib/commons-logging-1.1.1.jar:/dummy//app/storm/lib/compojure-0.6.4.jar:/dummy//app/storm/lib/core.incubator-0.1.0.jar:/dummy//app/storm/lib/curator-client-1.0.1.jar:/dummy//app/storm/lib/curator-framework-1.0.1.jar:/dummy//app/storm/lib/guava-10.0.1.jar:/dummy//app/storm/lib/hiccup-0.3.6.jar:/dummy//app/storm/lib/httpclient-4.1.1.jar:/dummy//app/storm/lib/httpcore-4.1.jar:/dummy//app/storm/lib/jetty-6.1.26.jar:/dummy//app/storm/lib/jetty-util-6.1.26.jar:/dummy//app/storm/lib/jline-0.9.94.jar:/dummy//app/storm/lib/joda-time-2.0.jar:/dummy//app/storm/lib/json-simple-1.1.jar:/dummy//app/storm/lib/jsr305-1.3.9.jar:/dummy//app/storm/lib/junit-3.8.1.jar:/dummy//app/storm/lib/jzmq-2.1.0.jar:/dummy//app/storm/lib/kryo-1.04.jar:/dummy//app/storm/lib/libthrift7-0.7.0.jar:/dummy//app/storm/lib/log4j-1.2.16.jar:/dummy//app/storm/lib/math.numeric-tower-0.0.1.jar:/dummy//app/storm/lib/minlog-1.2.jar:/dummy//app/storm/lib/reflectasm-1.01.jar:/dummy//app/storm/lib/ring-core-0.3.10.jar:/dummy//app/storm/lib/ring-jetty-adapter-0.3.11.jar:/dummy//app/storm/lib/ring-servlet-0.3.11.jar:/dummy//app/storm/lib/servlet-api-2.5-20081211.jar:/dummy//app/storm/lib/servlet-api-2.5.jar:/dummy//app/storm/lib/slf4j-api-1.5.8.jar:/dummy//app/storm/lib/slf4j-log4j12-1.5.8.jar:/dummy//app/storm/lib/snakeyaml-1.9.jar:/dummy//app/storm/lib/tools.cli-0.2.1.jar:/dummy//app/storm/lib/tools.logging-0.2.3.jar:/dummy//app/storm/lib/tools.macro-0.1.0.jar:/dummy//app/storm/lib/zookeeper-3.3.3.jar:/dummy//app/storm/lib/
    *
    hadoop-core-cdh4.jar:/dummy//app/storm/lib/commons-configuration.jar:/dummy//app/storm/lib/hadoop-common.jar:/dummy//app/storm/lib/hadoop-core.jar:/dummy//app/storm/lib/hadoop.jar:/dummy//app/storm/lib/hadoop-hdfs-2.0.0-cdh4.2.0.jar:
    */ar-with-dependencies.jar:/home/pvijay/.storm:/dummy//app/storm/bin
    -Dstorm.jar=jar-with-dependencies.jar com.xxx.storm.TopologyReader
    topologies/supply-metrics.topology
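    The classpath above ends with a jar-with-dependencies.jar, i.e. an
    assembly ("fat") jar. A frequent cause of this exact error with fat jars
    is that hadoop-common and hadoop-hdfs each provide a
    META-INF/services/org.apache.hadoop.fs.FileSystem file, and a naive merge
    keeps only one of them, dropping the hdfs registration. If the fat jar is
    built with the Maven Shade plugin, the service files can be concatenated
    instead; a sketch (plugin version illustrative):

    ```xml
    <!-- pom.xml: merge META-INF/services entries from all dependencies so
         the FileSystem registrations for file://, hdfs://, etc. survive. -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>2.3</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals><goal>shade</goal></goals>
          <configuration>
            <transformers>
              <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
            </transformers>
          </configuration>
        </execution>
      </executions>
    </plugin>
    ```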



    -Thanks
    kishore kumar yellamraju | Ground control operations |
    kishore@rocketfuel.com | 408.203.0424


  • Neo at Apr 24, 2013 at 1:20 am
    Hi Kishore,

    This problem might be related to high availability.

    What is your fs name in core-site.xml?
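    For context, with HDFS high availability the fs name is a logical
    nameservice rather than a host:port, and the client needs additional
    configuration to resolve it. A sketch with illustrative names (a real HA
    client also needs dfs.ha.namenodes.* and
    dfs.client.failover.proxy.provider.* entries):

    ```xml
    <!-- core-site.xml: the fs name points at a logical nameservice -->
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://mycluster</value>
    </property>
    <!-- hdfs-site.xml: declares the nameservice the client must resolve -->
    <property>
      <name>dfs.nameservices</name>
      <value>mycluster</value>
    </property>
    ```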

  • Kishore Yellamraju at Apr 24, 2013 at 1:37 am
    I don't have HA enabled.

    <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoopmaster:8xxx</value>
    </property>

    -Thanks
    kishore kumar yellamraju | Ground control operations |
    kishore@rocketfuel.com | 408.203.0424


