FAQ

23 discussions - 43 posts

  • It seems that the Hadoop conf/slaves file designates two things: 1. Where Hadoop should be running (which must be on the search nodes and the crawl nodes, at least with Nutch). 2. Which machines are ...
    Scott Simpson
    Apr 7, 2006 at 1:17 am
    Apr 11, 2006 at 5:23 pm
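The conf/slaves file asked about above is just a plain list of worker hostnames, one per line, which the start-up scripts read to decide where to launch worker daemons. A minimal sketch (the hostnames here are made up for illustration):

```text
# conf/slaves -- one worker hostname per line (example hosts are hypothetical)
node01.example.com
node02.example.com
node03.example.com
```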
  • I went through the code in JobConf.java and extracted most of the configuration parameters that can be used in a job configuration file. I was just making sure I understood everything, but this list may ...
    Ben Reed
    Apr 24, 2006 at 10:24 pm
    Apr 25, 2006 at 3:59 pm
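The parameters extracted from JobConf.java can be overridden in a site configuration file. A sketch of the XML shape, assuming the early mapred.* naming convention; check your release's hadoop-default.xml for the authoritative names and defaults:

```xml
<!-- hadoop-site.xml: overriding a couple of JobConf-visible parameters.
     Property names follow the early mapred.* convention and may differ
     across releases. -->
<configuration>
  <property>
    <name>mapred.map.tasks</name>
    <value>10</value>
  </property>
  <property>
    <name>mapred.reduce.tasks</name>
    <value>2</value>
  </property>
</configuration>
```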
  • Hi, I am a new user of Hadoop. This project looks cool. There is one question about MapReduce. I want to process a big file. To my understanding, Hadoop will partition the big file into blocks and ...
    Lei Chen
    Apr 20, 2006 at 6:21 am
    Apr 20, 2006 at 11:42 am
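The partitioning the poster describes is driven by the DFS block size: a large file is stored as fixed-size blocks, and the framework typically schedules one map task per input split. A hedged sketch of the relevant setting, assuming the early property name; the default value shown is 64 MB in bytes:

```xml
<!-- Block size governs how the DFS splits a large file into chunks.
     Property name as in early releases; value is in bytes (64 MB). -->
<property>
  <name>dfs.block.size</name>
  <value>67108864</value>
</property>
```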
  • Is pure WinXP operation supported now (i.e., without Cygwin), since df is supported now (thanks to the group)? In that case, the map files are not getting deleted. The fullyDelete fails (I am able to ...
    Raghavendra Prabhu
    Apr 5, 2006 at 5:27 pm
    Apr 5, 2006 at 6:57 pm
  • Hi, I currently run Nutch with Hadoop. The latest build seems to create two instances of the tmp folder: one is in the root directory and the other in the directory where the crawl command is run from ...
    Raghavendra Prabhu
    Apr 1, 2006 at 10:18 am
    Apr 1, 2006 at 6:28 pm
  • Hello Team, I have "svn update'd" my hadoop and my nutch codebases to rev "394960". I notice that "ant" now creates "hadoop/trunk/build/hadoop-0.2-dev.jar". Ordinarily I would copy ...
    Monu Ogbe
    Apr 18, 2006 at 3:24 pm
    Apr 18, 2006 at 5:32 pm
  • I keep seeing references to job.jar files. Can someone explain what the job.jar files are, and whether they are only used in distributed mode? Dennis
    Dennis Kubes
    Apr 6, 2006 at 4:18 pm
    Apr 6, 2006 at 6:58 pm
  • Hi Doug, but doesn't it wait in a loop until the job is complete? It seemed that files are getting created when a job is submitted by the client. Am I missing something here? What I saw was that ...
    Raghavendra Prabhu
    Apr 5, 2006 at 1:02 pm
    Apr 5, 2006 at 5:09 pm
  • Description: /tmp/hadoop/mapred/local/ temporary files are not deleted. These previously used to get deleted. I think I have narrowed it down to ...
    Raghavendra Prabhu
    Apr 3, 2006 at 9:39 pm
    Apr 3, 2006 at 9:44 pm
  • Hi, my map file becomes very huge, and while reducing I get an out-of-memory error. I increased the JVM heap size to 1500M (Xms and Xmx), and even then I get the same error. Is there any way the map ...
    Raghavendra Prabhu
    Apr 27, 2006 at 1:29 pm
    Apr 27, 2006 at 1:29 pm
  • My map file becomes very huge, and while reducing I get an out-of-memory error. I increased the JVM heap size to 1500M (Xms and Xmx). Is there any way the map file can be kept to a specific level?
    Raghavendra Prabhu
    Apr 27, 2006 at 1:27 pm
    Apr 27, 2006 at 1:27 pm
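For the two out-of-memory threads above: besides passing -Xmx directly, early releases let you raise the heap for the Hadoop daemons through an environment variable in conf/hadoop-env.sh. A sketch, assuming that variable name (value is in MB); heap options for the child task JVMs were configured differently and changed across releases:

```shell
# conf/hadoop-env.sh -- daemon JVM heap in megabytes (assumed early-release
# variable name; check your release's hadoop-env.sh for the exact knob)
export HADOOP_HEAPSIZE=1500
```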
  • Hi, we have been running with "nutch server" in production without any problems. Our search servers are distributed across fewer than 20 Linux servers. We are thinking of using the latest Hadoop ...
    Sam Xia
    Apr 14, 2006 at 12:05 am
    Apr 14, 2006 at 12:05 am
  • Can someone explain how duplicate keys are merged inside of a reduce program to give multiple values in the Iterator for the reduce operation? I think it is happening in the sort of the sequence ...
    Dennis Kubes
    Apr 12, 2006 at 10:20 pm
    Apr 12, 2006 at 10:20 pm
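The merging the poster asks about happens in the sort phase: map output is sorted by key, so equal keys become adjacent, and the framework then presents each distinct key once with an iterator over all of its values. A toy simulation of that contract outside Hadoop (class and method names here are illustrative, not Hadoop API):

```java
import java.util.*;

public class GroupDemo {
    // Simulates how the framework sorts map output by key and then hands
    // the reducer each key once with all of that key's values together.
    public static Map<String, List<Integer>> group(List<Map.Entry<String, Integer>> mapOutput) {
        // Sort by key, as the sort/shuffle phase does.
        mapOutput.sort(Map.Entry.comparingByKey());
        // Walk the sorted run; equal keys are now adjacent, so a single
        // pass collects each key's values into one list.
        Map<String, List<Integer>> grouped = new LinkedHashMap<>();
        for (Map.Entry<String, Integer> e : mapOutput) {
            grouped.computeIfAbsent(e.getKey(), k -> new ArrayList<>()).add(e.getValue());
        }
        return grouped;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        out.add(new AbstractMap.SimpleEntry<>("b", 1));
        out.add(new AbstractMap.SimpleEntry<>("a", 2));
        out.add(new AbstractMap.SimpleEntry<>("b", 3));
        System.out.println(group(out)); // prints {a=[2], b=[1, 3]}
    }
}
```

The real framework streams values from sorted on-disk files rather than building in-memory lists, but the reducer-visible contract is the same: one call per distinct key, with an iterator over that key's values.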
  • Hi Doug, the recent release has fixed all the problems. I think it was more because we changed the current working directory, plus the problem of absolute paths. Now it seems to be proper. ...
    Raghavendra Prabhu
    Apr 8, 2006 at 8:21 am
    Apr 8, 2006 at 8:21 am
  • I am trying to verify something here. I run mapred with the local option. These are trivial issues but need to be addressed at some point. Right now two /tmp directories are created (one in the ...
    Raghavendra Prabhu
    Apr 7, 2006 at 11:21 am
    Apr 7, 2006 at 11:21 am
  • Hi, it would be very helpful if someone could direct me to a page with a high-level diagram of Hadoop. This can help new beginners like me grasp it easily and contribute. Contributing blindly ...
    Raghavendra Prabhu
    Apr 5, 2006 at 12:50 pm
    Apr 5, 2006 at 12:50 pm
  • Hi. I've been trying to find a way to force redistribution of the file chunks so that they're evenly distributed over the cluster, even if it's already replicated sufficiently. We added a new node to ...
    Johan Oskarsson
    Apr 4, 2006 at 8:36 pm
    Apr 4, 2006 at 8:36 pm
  • Hi, I found some very strange behavior. I have a set of jobs that are successfully completed, but there is one MapTask listed as 0.0 completed and one ReduceTask as 0.0 completed. How a job ...
    Stefan Groschupf
    Apr 4, 2006 at 2:33 pm
    Apr 4, 2006 at 2:33 pm
  • Hi, I have submitted the patch for the incorrect map files cleanup. If someone can review and commit it soon, that would be helpful. I also had some other doubts: can two instances of an application which ...
    Raghavendra Prabhu
    Apr 4, 2006 at 5:00 am
    Apr 4, 2006 at 5:00 am
  • Hi, I have attached a patch which cleans up submit_ directories after the submitted job has completed. Can someone with commit access review the patch and commit it? Please let me know whether it ...
    Raghavendra Prabhu
    Apr 3, 2006 at 3:56 pm
    Apr 3, 2006 at 3:56 pm
  • Hi, regarding "static Random r = new Random();" in JobClient: I don't know where to ask this question, on the Nutch or the Hadoop list, but it is a general question. When you run two instances of an application which ...
    Raghavendra Prabhu
    Apr 3, 2006 at 1:08 pm
    Apr 3, 2006 at 1:08 pm
  • \tmp\hadoop\mapred\local: with the latest build, map_ files are not getting deleted after the map output is reduced. Is anyone else facing the same problem? Rgds, Prabhu
    Raghavendra Prabhu
    Apr 3, 2006 at 11:00 am
    Apr 3, 2006 at 11:00 am
  • Release 0.1.0 of Hadoop is now available. The release may be downloaded from: http://www.apache.org/dyn/closer.cgi/lucene/hadoop/ Doug
    Doug Cutting
    Apr 2, 2006 at 7:45 pm
    Apr 2, 2006 at 7:45 pm
Group Navigation
period: Apr 2006
Group Overview
group: common-user@
categories: hadoop
discussions: 23
posts: 43
users: 14
website: hadoop.apache.org...
irc: #hadoop