FAQ
Dear Mailing list,

I've been trying to build the Hadoop source code on my MacBook Pro. I tried
following the tutorial at http://wiki.apache.org/hadoop/EclipseEnvironment.
I grabbed the source code from GitHub, and then I ran Maven:

mvn install -DskipTests

mvn clean package -DskipTests -Pdist -Dtar -Dmaven.javadoc.skip=true


but I always get the following error:

[INFO] ------------------------------------------------------------------------
[INFO] Total time: 14:48.283s
[INFO] Finished at: Fri Mar 02 13:29:20 EET 2012
[INFO] Final Memory: 68M/123M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-pdf-plugin:1.1:pdf (pdf) on project hadoop-distcp: Error during document generation: Error parsing /Users/SaSa/Desktop/Hadoop/src/hadoop-common/hadoop-tools/hadoop-distcp/target/pdf/site.tmp/xdoc/index.xml: Error validating the model: Fatal error:
[ERROR] Public ID: null
[ERROR] System ID: http://maven.apache.org/xsd/xdoc-2.0.xsd
[ERROR] Line number: 2699
[ERROR] Column number: 5
[ERROR] Message: The element type "xs:element" must be terminated by the matching end-tag "</xs:element>".
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-distcp

I googled for a similar problem; all I found was that some people cleared the
.m2 and .ivy2 caches and re-ran the command, but unfortunately that didn't
work for me. I've been stuck here, and I really hope someone can help me with
this issue.
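One narrower thing worth trying, beyond wiping the whole .m2/.ivy2 caches: on
the assumption that a corrupt cached copy of the maven-pdf-plugin or one of its
downloads (e.g. a truncated file left by a flaky network) is behind the schema
error, purge just that plugin's artifacts so Maven re-fetches them, then resume
from the failed module. A rough sketch, assuming the default local repository
location:

```shell
#!/bin/sh
# Sketch of a targeted cache purge: remove only the cached maven-pdf-plugin
# artifacts so Maven re-downloads them on the next run.
# M2_REPO defaults to the standard local repository but can be overridden
# to rehearse this against a scratch directory first.
M2_REPO="${M2_REPO:-$HOME/.m2/repository}"

# Drop the plugin's cached directory; Maven will re-resolve it.
rm -rf "$M2_REPO/org/apache/maven/plugins/maven-pdf-plugin"

# Maven's own hint from the log: resume the reactor at the failed module.
echo 'Now re-run: mvn clean package -DskipTests -Pdist -Dtar -Dmaven.javadoc.skip=true -rf :hadoop-distcp'
```

If the plugin's cached files were the problem, the resumed build re-downloads
them cleanly; if the error persists, the bad file lives elsewhere.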

Thank you all.
Regards,
SaSa



--
Mohammed El Sayed
Computer Science Department
King Abdullah University of Science and Technology
2351 - 4700 KAUST, Saudi Arabia
Home Page <http://cloud.kaust.edu.sa/SiteCollectionDocuments/melsayed.aspx>


  • Ranjan Banerjee at Mar 6, 2012 at 10:38 pm
    Hello,
    I am relatively new to Hadoop; I started playing with it around two weeks ago and have finished running the canonical word-count MapReduce example. My class project involves coming up with a different scheduler for Hadoop. I know that Hadoop uses FIFO by default and also ships a fair scheduler. Can someone suggest where exactly I should write my scheduler code, and what change I need to make to the Conf object so that Hadoop uses my scheduler, rather than the default one, when scheduling MapReduce jobs?

    Regards,
    Ranjan



  • Harsh J at Mar 6, 2012 at 10:48 pm
    Ranjan,

    Schedulers do not apply per-job. You need to change it at the JobTracker.

    Follow the instructions at
    http://hadoop.apache.org/common/docs/r1.0.0/fair_scheduler.html to switch
    the scheduler to the FairScheduler.
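    For concreteness, on Hadoop 1.0 the switch comes down to a mapred-site.xml
    entry along these lines (property names as given in the fair scheduler
    docs; the allocation-file path below is purely illustrative, and the
    fair scheduler jar must be on the JobTracker's classpath):

    ```xml
    <!-- mapred-site.xml: tell the JobTracker to use the FairScheduler -->
    <property>
      <name>mapred.jobtracker.taskScheduler</name>
      <value>org.apache.hadoop.mapred.FairScheduler</value>
    </property>
    <!-- optional: point the scheduler at a pool/allocation file
         (this path is an example, not a required location) -->
    <property>
      <name>mapred.fairscheduler.allocation.file</name>
      <value>/etc/hadoop/conf/fair-scheduler.xml</value>
    </property>
    ```

    Restart the JobTracker after changing this; the setting is cluster-wide,
    not per-job.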


    --
    Harsh J
  • Ranjan Banerjee at Mar 7, 2012 at 12:08 am
    Hello Harsh,
    Thanks for the quick info. I will go through the specifics, and I hope you can address any further doubts I have on this issue.

    Regards,
    Ranjan
  • Merto Mertek at Mar 7, 2012 at 12:45 am
    If you want to change the default scheduler, take a look at the thread
    "Dynamic changing of slaves" on this mailing list, where I described my
    understanding of the scheduling process. If instead you want to modify the
    fair-scheduling code itself, look at the classes FairScheduler.java and
    SchedulingAlgorithms.java in the fairscheduler package. There you can write
    your own comparator that compares jobs/pools on varying parameters, or add
    your own method for calculating job/pool shares. If I am not mistaken,
    there is also the option of simply extending a class (I do not know which
    one) and setting job weights (check the docs from Harsh).
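    As a concrete instance of the "set job weights" option: per the fair
    scheduler docs Harsh linked, pools and their weights are declared in an
    allocations file. A minimal sketch (pool names and numbers are made up for
    illustration):

    ```xml
    <?xml version="1.0"?>
    <!-- fair-scheduler allocations file sketch; pool names are illustrative -->
    <allocations>
      <pool name="research">
        <minMaps>5</minMaps>
        <minReduces>2</minReduces>
        <!-- roughly twice the share of a weight-1.0 pool -->
        <weight>2.0</weight>
      </pool>
      <pool name="default">
        <weight>1.0</weight>
      </pool>
    </allocations>
    ```

    This only tunes how the existing FairScheduler divides capacity; a new
    scheduling policy still means modifying the scheduler classes above.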

  • Ranjan Banerjee at Mar 7, 2012 at 1:04 am
    Hello Merto,
    I wish to implement SBF (Stretch-Based Fairness), so I guess the place to look will be the "Dynamic changing of slaves" thread.
    Regards,
    Ranjan

Discussion Overview
group: common-dev
categories: hadoop
posted: Mar 2, '12 at 1:46p
active: Mar 7, '12 at 1:04a
posts: 6
users: 4
website: hadoop.apache.org...
irc: #hadoop
