I am using Hadoop's Pipes API in C++ code. I need to make successive
runTask() calls, i.e., I need to chain jobs as Map -> Reduce -> Map.
Between two successive invocations I need to set new values for some
jobconf parameters, such as mapred.input.dir and mapred.output.dir.
Can anyone share ideas or first-hand experience on how to do this
through the Pipes interface?
Any pointers or advice are highly appreciated.
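For concreteness, the effect I'm after is equivalent to launching the Pipes runner twice with different input/output settings, something like the sketch below (all paths, HDFS locations, and program names are placeholders, not my actual setup). `-input` and `-output` set mapred.input.dir and mapred.output.dir for each stage:

```shell
#!/bin/sh
# Hypothetical two-stage chain driven from outside the C++ binary:
# stage 1 is a full Map -> Reduce job, stage 2 is a map-only pass
# over stage 1's output directory.

STAGE1_OUT=/user/me/stage1-out
STAGE2_OUT=/user/me/stage2-out

# Stage 1: Map -> Reduce
hadoop pipes \
  -D hadoop.pipes.java.recordreader=true \
  -D hadoop.pipes.java.recordwriter=true \
  -input /user/me/input \
  -output "$STAGE1_OUT" \
  -program /user/me/bin/stage1 || exit 1

# Stage 2: map-only job reading stage 1's output
hadoop pipes \
  -D hadoop.pipes.java.recordreader=true \
  -D hadoop.pipes.java.recordwriter=true \
  -D mapred.reduce.tasks=0 \
  -input "$STAGE1_OUT" \
  -output "$STAGE2_OUT" \
  -program /user/me/bin/stage2 || exit 1
```

What I would like to know is whether this kind of chaining can instead be done from within a single C++ program via the Pipes interface, i.e., resetting the jobconf between runTask() calls rather than relaunching from a script.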