FAQ
I wrote the code below, which was working earlier, but it has stopped working: I now get a ClassNotFoundException (among other errors). Could it be a memory problem? Earlier we had only HDFS and MapReduce (MRv1) installed in the cluster; then we installed Solr, ZooKeeper, and Flume.

FileSystem fs = FileSystem.get(conf);

// Add the shared configuration directory to the job classpath.
String sharedConfPath = "/user/hdfs/conf";
DistributedCache.addFileToClassPath(new Path(sharedConfPath), conf, fs);

// Add every file under the shared lib directory to the job classpath.
String sharedLibPath = "/user/hdfs/lib";
FileStatus[] fileStatuses = fs.listStatus(new Path(sharedLibPath));
for (FileStatus fileStatus : fileStatuses) {
    String jarPath = sharedLibPath + "/" + fileStatus.getPath().getName();
    DistributedCache.addFileToClassPath(new Path(jarPath), conf, fs);
}

Please help me resolve this problem.
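[Editor's aside for readers debugging the same symptom: `DistributedCache.addFileToClassPath` records each added path in the job configuration, so one quick sanity check is to print the relevant properties after the loop and confirm every expected jar is listed. A minimal sketch only, assuming the same `conf` object and the MRv1-era property names (`mapred.job.classpath.files`, `mapred.cache.files`); verify the names against your Hadoop version.]

```java
import org.apache.hadoop.conf.Configuration;

// Sketch: dump the configuration keys that DistributedCache.addFileToClassPath
// populates (MRv1-era property names are an assumption), to confirm every
// expected jar and directory actually made it into the job classpath.
public class ClasspathDump {
    public static void dump(Configuration conf) {
        System.out.println("classpath files: " + conf.get("mapred.job.classpath.files"));
        System.out.println("cache files:     " + conf.get("mapred.cache.files"));
    }
}
```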

To unsubscribe from this group and stop receiving emails from it, send an email to scm-users+unsubscribe@cloudera.org.


  • Prodiptag at Dec 27, 2013 at 6:09 am
    After encountering the ClassNotFoundException, I put all of my project's
    dependent jars in CDH-4.5.0-1.cdh4.5.0.p0.30/lib/hadoop-0.20-mapreduce/lib
    and restarted the cluster. I also modified the previous code to the following,
    i.e. only the project's config directory is now added to the classpath:

    FileSystem fs = FileSystem.get(conf);
    String sharedConfPath = "/user/hdfs/conf";
    DistributedCache.addFileToClassPath(new Path(sharedConfPath), conf, fs);

    But now I am getting a FileNotFoundException.
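    [Editor's aside: a defensive variant of the original snippet may help isolate which path triggers the FileNotFoundException: check that each HDFS path exists before handing it to DistributedCache, so a missing file fails fast with a clear message. A sketch only, untested against a live cluster; the paths are the ones quoted in this thread:]

    ```java
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ClasspathSetup {
        // Add the shared conf dir and every jar in the shared lib dir to the
        // job classpath, failing fast if any HDFS path is missing.
        public static void addSharedClasspath(Configuration conf) throws IOException {
            FileSystem fs = FileSystem.get(conf);

            Path confDir = new Path("/user/hdfs/conf");
            if (!fs.exists(confDir)) {
                throw new IOException("Missing HDFS path: " + confDir);
            }
            DistributedCache.addFileToClassPath(confDir, conf, fs);

            Path libDir = new Path("/user/hdfs/lib");
            if (!fs.exists(libDir)) {
                throw new IOException("Missing HDFS path: " + libDir);
            }
            for (FileStatus status : fs.listStatus(libDir)) {
                Path jar = status.getPath();
                if (jar.getName().endsWith(".jar")) { // skip non-jar entries
                    DistributedCache.addFileToClassPath(jar, conf, fs);
                }
            }
        }
    }
    ```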

  • Darren Lo at Dec 27, 2013 at 5:39 pm
    Moving to cdh-user, who can better answer CDH questions not related to
    Cloudera Manager.



    --
    Thanks,
    Darren


Discussion Overview
group: scm-users
categories: hadoop
posted: Dec 27, '13 at 5:22a
active: Dec 27, '13 at 5:39p
posts: 3
users: 3
website: cloudera.com
irc: #hadoop

site design / logo © 2022 Grokbase