Is it still possible to run Hadoop directly out of an svn checkout and
build of trunk? A few weeks ago I was using the three variables
HADOOP_HDFS_HOME/HADOOP_COMMON_HOME/HADOOP_MAPREDUCE_HOME and it all
worked fine. It seems there have been a lot of changes in the scripts,
and I can't get it to work or figure out what else to set, either in
the shell env or at the top of hadoop-env.sh. I have checked out
trunk with a directory structure like this:

[trunk]$ pwd
/home/ecaspole/views/hadoop/trunk
[trunk]$ ll
total 12
drwxrwxr-x. 12 ecaspole ecaspole 4096 Jun 21 15:55 common
drwxrwxr-x. 10 ecaspole ecaspole 4096 Jun 21 13:20 hdfs
drwxrwxr-x. 11 ecaspole ecaspole 4096 Jun 21 16:19 mapreduce

[ecaspole@wsp133572wss hdfs]$ env | grep HADOOP
HADOOP_HDFS_HOME=/home/ecaspole/views/hadoop/trunk/hdfs/
HADOOP_COMMON_HOME=/home/ecaspole/views/hadoop/trunk/common
HADOOP_MAPREDUCE_HOME=/home/ecaspole/views/hadoop/trunk/mapreduce/

[hdfs]$ ./bin/start-dfs.sh
./bin/start-dfs.sh: line 54: /home/ecaspole/views/hadoop/trunk/common/bin/../bin/hdfs: No such file or directory
Starting namenodes on []
localhost: starting namenode, logging to /home/ecaspole/views/hadoop/trunk/common/logs/ecaspole/hadoop-ecaspole-namenode-wsp133572wss.amd.com.out
localhost: Hadoop common not found.
localhost: starting datanode, logging to /home/ecaspole/views/hadoop/trunk/common/logs/ecaspole/hadoop-ecaspole-datanode-wsp133572wss.amd.com.out
localhost: Hadoop common not found.
Secondary namenodes are not configured. Cannot start secondary namenodes.
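
(For reference, the setup above boils down to roughly the following
shell environment. The three homes match my checkout exactly; the PATH
line is only a guess at what the new scripts might expect, since I
haven't found it documented:)

# the three project homes, matching the trunk layout shown above
export HADOOP_COMMON_HOME=/home/ecaspole/views/hadoop/trunk/common
export HADOOP_HDFS_HOME=/home/ecaspole/views/hadoop/trunk/hdfs
export HADOOP_MAPREDUCE_HOME=/home/ecaspole/views/hadoop/trunk/mapreduce
# guess: put each project's bin/ on PATH so the launchers (hadoop,
# hdfs, mapred) can find each other
export PATH=$HADOOP_COMMON_HOME/bin:$HADOOP_HDFS_HOME/bin:$HADOOP_MAPREDUCE_HOME/bin:$PATH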

Does anyone else actually run it this way? If so, could you show which
variables you set, and where, so the components can find each other?

Otherwise, what is the recommended way to run a build of trunk?
Thanks,
Eric


  • Alejandro Abdelnur at Jun 21, 2011 at 10:03 pm
    Eric,

    Yesterday I was trying the same. I used the script from HADOOP-6846
    (after doing a s/mapred/mapreduce/g).

    Then I had to add the hadoop-* JARs to the classpath.

    Then, when trying to start, the scripts started complaining about
    things not found in /usr/share.

    Then I gave up.
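
    (Roughly, the first two steps were along these lines; the script
    name from HADOOP-6846 and the build/ jar locations below are my
    guesses, not the exact ones:)

        # hypothetical script name: use whatever HADOOP-6846 actually attaches
        sed -i 's/mapred/mapreduce/g' hadoop-setup.sh
        # add the freshly built hadoop-* jars to the classpath
        export HADOOP_CLASSPATH=$(ls $HADOOP_COMMON_HOME/build/hadoop-*.jar \
          $HADOOP_HDFS_HOME/build/hadoop-*.jar \
          $HADOOP_MAPREDUCE_HOME/build/hadoop-*.jar | tr '\n' ':')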

    Thanks.

    Alejandro

  • Eli Collins at Jun 21, 2011 at 10:20 pm
    Hey Eric,

    It works for HDFS (here are the scripts I used:
    https://github.com/elicollins/hadoop-dev).

    Not long ago it worked for everything; it looks like MR was recently
    broken. I think there's a JIRA for this.

    $ jt2
    /home/eli/src/hadoop2/mapreduce/bin/mapred: line 22: /home/eli/src/hadoop2/mapreduce/bin/../libexec/mapred-config.sh: No such file or directory
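
    (Judging from that error, bin/mapred around line 22 sources its
    config roughly like the sketch below, and trunk's mapreduce tree
    just doesn't have a libexec/ directory; the variable name is my
    guess:)

        # sketch of what bin/mapred appears to do around line 22
        bin=$(dirname "$0")
        # this is what fails: mapreduce/libexec/mapred-config.sh is missing
        . "$bin"/../libexec/mapred-config.sh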

    Thanks,
    Eli

Discussion Overview
group: common-dev
categories: hadoop
posted: Jun 21, 2011 at 9:42 PM
active: Jun 21, 2011 at 10:20 PM
posts: 3
users: 3
website: hadoop.apache.org...
irc: #hadoop
