Thanks for the response. I was trying to set it up to run inside of
Eclipse instead of remote debugging, and I finally got it working. I
have both the Nutch and Hadoop projects pulled from SVN, so I am
starting the DFS and MapReduce servers from the Hadoop codebase, with
all of the servers running on one box inside of Eclipse. Setting up the
servers to run in pseudo-distributed mode through Eclipse is pretty
easy: you just have to have launchers with the correct configurations
(parameters).
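For reference, each launcher boils down to a main class plus the right classpath and configuration directory. A rough command-line equivalent (the classpath entries and conf directory here are placeholders; the main class names match the Hadoop source tree of that era) would be:

```shell
# Rough command-line equivalent of an Eclipse launcher for one daemon.
# Classpath entries and conf dir are placeholders -- adjust for your tree.
java -cp "conf:build/classes:lib/*" org.apache.hadoop.dfs.NameNode
# ...and similarly, one launcher per daemon:
#   org.apache.hadoop.dfs.DataNode
#   org.apache.hadoop.mapred.JobTracker
#   org.apache.hadoop.mapred.TaskTracker
```

Putting the conf directory first on the classpath is what lets each launcher pick up the pseudo-distributed settings.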
If you want to debug the servers inside of Eclipse rather than through
remote debugging (as is the case, for example, if you are on Windows
without Cygwin), the setup is not so obvious. The key is that when you
create the debug launchers, especially for the TaskTracker, you have to
include the Nutch project, and the Nutch project has to export its
plugin directory and all of the jars in its lib directory. This way the
TaskTracker can find the correct plugins and source code when running
tasks. Again, the key is the classpath of the launchers, not of the
projects themselves.
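For anyone wiring this up, the "export" setting lives on the Nutch project's Order and Export tab, which corresponds to exported="true" entries in the project's .classpath file. A hypothetical fragment (the jar and directory names here are invented; your actual entries will differ) looks like:

```xml
<!-- Hypothetical fragment of the Nutch project's .classpath file.
     exported="true" is what makes an entry visible to launchers
     that include the project on their classpath. -->
<classpathentry exported="true" kind="lib" path="lib/some-library.jar"/>
<classpathentry exported="true" kind="lib" path="plugins"/>
```

Without the exported="true" flags, the entries are on the Nutch project's own build path but invisible to the TaskTracker launcher, which is exactly the symptom described above.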
If anyone needs a more detailed explanation on how to set this up, send
me an email.
Ben Reed wrote:
I debug using Eclipse. For example, if I need to debug the JobTracker,
I put the following lines in the hadoop script:
elif [ "$COMMAND" = "jobtracker" ] ; then
then I just start everything up using start-all.sh. I should note that
I always debug code built inside of Eclipse. I don't use Ant; I just
import the Java projects from SVN and export the JAR files.
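The quoted script snippet is cut off after the elif line. For context, a plausible shape for the full branch (a sketch only, not Ben's actual lines, which are not shown in the quote) is to set the daemon's main class and add JDWP options so an Eclipse remote-debug session can attach:

```shell
# Sketch only -- the actual lines from the quoted mail are not shown.
elif [ "$COMMAND" = "jobtracker" ] ; then
  CLASS=org.apache.hadoop.mapred.JobTracker
  # JDWP options let an Eclipse remote-debug session attach on port 8000;
  # suspend=y makes the JVM wait for the debugger before starting up.
  HADOOP_OPTS="$HADOOP_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000"
```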
On May 24, 2006, at 8:49 AM, Dennis Kubes wrote:
Has anyone been able to successfully debug the DFS and MapReduce
servers running through Eclipse? I can get all of the servers started
and can run MapReduce tasks inside of Eclipse, but I am getting both
classpath errors and debugging stalls.
I am just curious what kinds of setups people have for doing
large-scale development of DFS and MapReduce.