FAQ
As a more general note -- any jars needed by your mappers and reducers
must either be packaged in the lib/ directory of your job jar, or be
present in $HADOOP_HOME/lib/ on all tasktracker nodes where the map and
reduce tasks run.

- Aaron
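Concretely, a job jar that bundles its dependencies has the layout sketched below. The file names are stand-ins (created with touch for illustration); in a real build the actual connector jar and your compiled classes go in their place:

```shell
# Stage a job-jar tree: dependency jars live under lib/ inside the job
# jar, where Hadoop's task runtime adds them to the task classpath.
mkdir -p jobjar/lib jobjar/com/example
touch jobjar/lib/mysql-connector-java.jar   # stand-in for the real connector jar
touch jobjar/com/example/WordCount.class    # stand-in for your compiled classes

# Then package the tree (requires the JDK's jar tool):
#   (cd jobjar && jar cf ../wordcount-job.jar .)
ls jobjar/lib
```

The key point is that lib/ sits at the top level *inside* the job jar, next to your class files, not alongside the jar on disk.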

On Fri, Aug 21, 2009 at 10:47 AM, ishwar ramani wrote:

For future reference.

This is a ClassNotFoundException for the MySQL driver. DBOutputFormat
wraps it in a plain IOException, which hides the real cause, grrrrr.

I had the mysql-connector jar in both $HADOOP_HOME/lib and on
$HADOOP_CLASSPATH. That did not help.

I had to package the mysql jar inside my MapReduce job jar to fix this problem.

Hope that saves a day for someone!
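For reference, another common way around this, if you would rather not repackage the job jar, is to ship the driver jar at submit time with -libjars. This is only honored when the job's main class parses its arguments through ToolRunner/GenericOptionsParser, which the code below in this thread does not do; the paths here are illustrative:

```shell
# -libjars copies the listed jars into the distributed cache and adds
# them to the task classpath on every node (requires the driver class
# to go through ToolRunner/GenericOptionsParser).
hadoop jar wordcount-job.jar WordCount \
    -libjars /path/to/mysql-connector-java.jar \
    /user/ishwar/input
```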

On Thu, Aug 20, 2009 at 4:52 PM, ishwar ramani wrote:
Hi,

I am trying to run a simple MapReduce job that writes the result from the
reducer to a MySQL database.

I keep getting

09/08/20 15:44:59 INFO mapred.JobClient: Task Id :
attempt_200908201210_0013_r_000000_0, Status : FAILED
java.io.IOException: com.mysql.jdbc.Driver
at
org.apache.hadoop.mapred.lib.db.DBOutputFormat.getRecordWriter(DBOutputFormat.java:162)
at
org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:435)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:413)
at org.apache.hadoop.mapred.Child.main(Child.java:170)

when the reducer is run.

Here is my code. The username and password are valid and work fine.
Is there any way to get more info on this exception?



static class MyWritable implements Writable, DBWritable {
    long id;
    String description;

    // Hadoop instantiates Writables reflectively, so a no-arg
    // constructor is needed alongside the convenience constructor.
    MyWritable() {
    }

    MyWritable(long mid, String mdescription) {
        id = mid;
        description = mdescription;
    }

    public void readFields(DataInput in) throws IOException {
        this.id = in.readLong();
        this.description = Text.readString(in);
    }

    public void readFields(ResultSet resultSet) throws SQLException {
        this.id = resultSet.getLong(1);
        this.description = resultSet.getString(2);
    }

    public void write(DataOutput out) throws IOException {
        out.writeLong(this.id);
        Text.writeString(out, this.description);
    }

    public void write(PreparedStatement stmt) throws SQLException {
        stmt.setLong(1, this.id);
        stmt.setString(2, this.description);
    }
}






public static class Reduce extends MapReduceBase implements
        Reducer<Text, IntWritable, MyWritable, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values,
            OutputCollector<MyWritable, IntWritable> output, Reporter reporter)
            throws IOException {
        int sum = 0;
        while (values.hasNext()) {
            sum += values.next().get();
        }

        output.collect(new MyWritable(sum, key.toString()), new IntWritable(sum));
    }
}





public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(WordCount.class);
    conf.setJobName("wordcount");

    conf.setMapperClass(Map.class);
    conf.setReducerClass(Reduce.class);

    DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
        "jdbc:mysql://localhost:8100/testvmysqlsb", "dummy", "pass");

    String fields[] = {"id", "description"};
    DBOutputFormat.setOutput(conf, "funtable", fields);

    conf.setNumMapTasks(1);
    conf.setNumReduceTasks(1);

    conf.setMapOutputKeyClass(Text.class);
    conf.setMapOutputValueClass(IntWritable.class);

    conf.setOutputKeyClass(MyWritable.class);
    conf.setOutputValueClass(IntWritable.class);

    conf.setInputFormat(TextInputFormat.class);

    FileInputFormat.setInputPaths(conf, new Path(args[0]));

    JobClient.runJob(conf);
}

Discussion Overview
group: common-user
categories: hadoop
posted: Aug 20, '09 at 11:52p
active: Aug 25, '09 at 12:42a
posts: 2
users: 2 (Aaron Kimball: 1 post; Ishwar Ramani: 1 post)
website: hadoop.apache.org...
irc: #hadoop
