
151 discussions - 566 posts

  • Hello, I'm currently a student at Arizona State University, Tempe, Arizona, pursuing my master's in Computer Science. I'm currently involved in a research project that makes use of Hadoop to run ...
    Mithila Nagendra
    Nov 19, 2008 at 6:12 pm
    Dec 18, 2008 at 7:27 pm
  • Hi all, I am really struggling with splitting a single file into many files using hadoop and would appreciate any help offered. The input file is 150,000,000 rows long today, but will grow to ...
    Tim robertson
    Nov 27, 2008 at 9:55 am
    Dec 10, 2008 at 3:44 am
  • Hi, I accidentally deleted the root folder in our HDFS, and I have stopped HDFS. Is there any way to recover the files from the secondary namenode? Please help. -Sagar
    Sagar Naik
    Nov 14, 2008 at 6:39 pm
    Feb 11, 2009 at 5:57 am
  • The project I focused on has many modules written in different languages (several modules are hadoop jobs). So I'd like to utilize a common record based data file format for data exchange. XML is not ...
    Zhou, Yunqing
    Nov 2, 2008 at 2:15 am
    Nov 3, 2008 at 7:41 pm
  • From time to time a message pops up on the mailing list about OOM errors for the namenode because of too many files. Most recently there was a 1.7 million file installation that was failing. I know ...
    Dennis Kubes
    Nov 26, 2008 at 12:12 pm
    Dec 2, 2008 at 11:39 pm
  • Hi all, If I want to have an in memory "lookup" Hashmap that is available in my Map class, where is the best place to initialise this please? I have a shapefile with polygons, and I wish to create ...
    Tim robertson
    Nov 25, 2008 at 7:10 pm
    Dec 1, 2008 at 6:42 am
  • I was wondering if it was possible to read the input for a map function from 2 different files: 1st file --- a user-input file from a particular location (path); 2nd file --- a resultant file (has just ...
    Some speed
    Nov 10, 2008 at 5:10 am
    Nov 17, 2008 at 4:58 am
  • Hi everyone, I was wondering whether it is possible to control the placement of the blocks of a file in HDFS. Is it possible to instruct HDFS about which nodes will hold the block replicas? Thanks! ...
    Nov 25, 2008 at 3:00 am
    Nov 26, 2008 at 1:56 am
  • Hello, I'm very sorry to trouble you. I'm developing a MapReduce application, and I can get Log.INFO output in the InputFormat, but in the Mapper or Reducer I can't get anything. And now an error occurred in the ...
    ZhiHong Fu
    Nov 12, 2008 at 2:04 am
    Nov 24, 2008 at 3:56 pm
  • Has anyone configured Apache as a reverse proxy to allow access to your cloud? I'm having trouble doing this. I have a cloud. My datanodes are not visible outside the cloud for security. I'd like to ...
    David Ritch
    Nov 13, 2008 at 8:20 pm
    Nov 22, 2008 at 2:27 am
  • Looking for a way to dynamically terminate a job once the Reporter in a map task hits a threshold. Example: public void map(WritableComparable key, Text values, OutputCollector<Text, Text> output, ...
    Brian MacKay
    Nov 7, 2008 at 8:12 pm
    Nov 10, 2008 at 3:34 pm
  • I'm trying to write a daemon that periodically wakes up and runs map/reduce jobs, but I've had little luck. I've tried different ways (including using Cascading) and I keep arriving at the below ...
    Shahab mehmandoust
    Nov 6, 2008 at 1:10 am
    Nov 6, 2008 at 9:04 pm
  • Hi all, I run a Hadoop cluster on Amazon EC2 servers. I start a cluster of 2 data nodes and then open the job tracker to read status. It shows me 2 data nodes, as expected. It's ok. Then I open the FS ...
    Alexander Aristov
    Nov 19, 2008 at 2:07 pm
    Nov 22, 2008 at 8:30 pm
  • Hi folks, I am looking for some advice on the ways/techniques that people are using to get around namenode failures (both disk and host). We have a small cluster with several jobs scheduled for ...
    Goel, Ankur
    Nov 10, 2008 at 8:25 am
    Nov 12, 2008 at 5:59 pm
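    A common mitigation from this era, sketched here as an assumption rather than the thread's actual answer, is to have the namenode write its image to several directories, one of them on remote storage such as an NFS mount (paths below are illustrative only):

    ```xml
    <!-- hadoop-site.xml: hedged sketch; both directory paths are hypothetical -->
    <property>
      <name>dfs.name.dir</name>
      <value>/local/hadoop/name,/mnt/nfs/hadoop/name</value>
      <description>Comma-separated list of directories where the namenode
      persists the filesystem image; a copy on remote storage survives the
      loss of the namenode's local disk.</description>
    </property>
    ```
    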
  • Dear Hadoop Users and Developers, I was wondering if there's a plan to add "file info cache" in DFSClient? It could eliminate network travelling cost for contacting Namenode and I think it would ...
    Taeho Kang
    Nov 3, 2008 at 6:57 am
    Nov 9, 2008 at 10:15 pm
  • Hi all, We have been using hadoop-0.17.2 for some time now. Since yesterday, we have been seeing the JobTracker failing to respond with an OutOfMemory error very frequently. Things are going fine after ...
    Palleti, Pallavi
    Nov 20, 2008 at 4:43 am
    Dec 7, 2008 at 10:48 pm
  • Hi, I have a jar file which takes input from stdin and writes something to stdout, i.e. when I run java -jar A.jar < input it prints the required output. However, when I run it as a mapper in hadoop ...
    Nov 11, 2008 at 6:54 pm
    Dec 4, 2008 at 7:45 am
  • Hi list, I am kind of new to Hadoop but have some good background. I am seriously considering adopting Hadoop, and especially HDFS first, to be able to store various files (in the low hundreds of thousands ...
    Nov 14, 2008 at 1:19 am
    Nov 14, 2008 at 6:46 pm
  • I am new to Hadoop; I am trying to understand what the efficient method is to read the output file from HDFS and display the result in a simple web application. Thanks
    Nov 1, 2008 at 5:54 pm
    Nov 2, 2008 at 10:26 pm
  • Hi list, I added the property dfs.hosts.exclude to my conf/hadoop-site.xml, then refreshed my cluster with the command bin/hadoop dfsadmin -refreshNodes. It showed that it could only shut down the DataNode ...
    Jeremy Chow
    Nov 26, 2008 at 7:49 am
    Nov 26, 2008 at 4:00 pm
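    For context, a well-formed sketch of the decommission setup this thread describes (the excludes-file path is illustrative, not the poster's):

    ```xml
    <!-- hadoop-site.xml: hedged sketch; the file path is hypothetical -->
    <property>
      <name>dfs.hosts.exclude</name>
      <value>/path/to/conf/excludes</value>
      <description>File listing hostnames of datanodes to decommission;
      re-read when "bin/hadoop dfsadmin -refreshNodes" is run.</description>
    </property>
    ```

    After the refresh, a listed datanode is expected to move through "Decommission in progress" to "Decommissioned" as its blocks are re-replicated, rather than shutting down immediately.
    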
  • Hi all, I want to retrieve the Rack ID of every datanode. How can I do this? I tried using getNetworkLocation() in org.apache.hadoop.hdfs.protocol.DatanodeInfo. I am getting /default-rack as the ...
    Ramya R
    Nov 26, 2008 at 6:40 am
    Nov 26, 2008 at 8:36 am
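    /default-rack is what Hadoop reports when no rack-awareness mapping is configured; a sketch of the relevant property (the script path is illustrative):

    ```xml
    <!-- hadoop-site.xml: hedged sketch; the script path is hypothetical -->
    <property>
      <name>topology.script.file.name</name>
      <value>/path/to/rack-topology.sh</value>
      <description>Script that maps datanode addresses to rack IDs such as
      /dc1/rack1; without it, every node resolves to /default-rack.</description>
    </property>
    ```
    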
  • I am trying to migrate from a 32-bit JVM to a 64-bit JVM for the namenode only. *Setup:* NN - 64 bit; secondary namenode (instance 1) - 64 bit; secondary namenode (instance 2) - 32 bit; datanode - 32 bit. From the ...
    Sagar Naik
    Nov 26, 2008 at 12:00 am
    Nov 26, 2008 at 7:06 am
  • Hello, I wonder if the hadoop shell command ls has changed its output format. Trying hadoop-0.18.2, I got the following output: [root]# hadoop fs -ls / Found 2 items drwxr-xr-x - root supergroup 0 2008-11-21 08:08 /mnt ...
    Alexander Aristov
    Nov 21, 2008 at 2:04 pm
    Nov 25, 2008 at 5:44 am
  • Hi, I'm running the current snapshot (-r709609), doing a simple word count using Python over streaming. I have a relatively modest setup of 17 nodes. I'm getting this exception: ...
    Yuri Pradkin
    Nov 4, 2008 at 11:48 pm
    Nov 21, 2008 at 11:46 pm
  • Some engineers here at Cloudera have been working on a website to report on Hadoop development status, and we're happy to announce that the website is now available! We've written a blog post ...
    Alex Loddengaard
    Nov 20, 2008 at 5:53 pm
    Nov 21, 2008 at 10:42 pm
  • Hi, This is a long mail as I have tried to put in as much details as might help any of the Hadoop dev/users to help us out. The gist is this: We have a long running Hadoop system (masters not ...
    Abhijit Bagri
    Nov 15, 2008 at 2:15 pm
    Nov 21, 2008 at 10:38 am
  • Hello, If my understanding is correct, the combiner will read in values for a given key, process it, output it and then **all** values for a key are given to the reducer. Then it ought to be possible ...
    Saptarshi Guha
    Nov 16, 2008 at 10:19 pm
    Nov 20, 2008 at 9:13 pm
  • I've got a grid which has been up and running for some time. It's been using a 32-bit JVM. I am hitting the memory wall within the NameNode and need to specify a max heap size of 4G. Is it possible to ...
    C G
    Nov 7, 2008 at 4:17 am
    Nov 12, 2008 at 12:43 am
  • Hi All We're thinking of setting up a Hadoop cluster which will be used to create a prototype system for analyzing telecom data. The wiki page on machine scaling ...
    Arijit Mukherjee
    Nov 4, 2008 at 10:17 am
    Nov 4, 2008 at 8:18 pm
  • I have an app that runs for a long time with no problems, but when I signal it to shut down, I get errors like this: java.io.IOException: Filesystem closed at ...
    Bryan Duxbury
    Nov 26, 2008 at 12:48 am
    Nov 27, 2008 at 6:30 am
  • Hi, Could you please sanity check this: in hadoop-site.xml I add: <property> <name>mapred.child.java.opts</name> <value>-Xmx1G</value> <description>Increasing the size of the heap to allow for large in ...
    Tim robertson
    Nov 26, 2008 at 11:54 am
    Nov 26, 2008 at 4:23 pm
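    For reference, a well-formed sketch of the hadoop-site.xml property this thread is sanity-checking (the -Xmx1G value is the poster's own; the description text here is an assumption):

    ```xml
    <property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx1G</value>
      <description>JVM options passed to each task child process; raising
      -Xmx gives every map/reduce task a larger heap, e.g. for in-memory
      lookup structures.</description>
    </property>
    ```
    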
  • Hello, Is there an easy way to get Reduce Output Bytes? Thanks, Lohit
    Nov 25, 2008 at 6:29 am
    Nov 25, 2008 at 3:58 pm
  • *Problem:* The "ls" command is taking noticeable time to respond. *System:* I have about 1.6 million files, and the namenode heap is pretty much full (2400MB). I have configured dfs.handler.count to 100, and ...
    Sagar Naik
    Nov 20, 2008 at 5:49 am
    Nov 20, 2008 at 8:18 pm
  • Hi, I am writing a binary search tree on Hadoop, and for this I need to use NLineInputFormat. I'll read n lines at a time, convert the numbers in each line from string to int, and then insert ...
    Rahul Tenany
    Nov 15, 2008 at 1:07 pm
    Nov 20, 2008 at 4:46 am
  • Hi! We would like to run a delete script that deletes all files older than x days that are stored in lib l in HDFS; what is the best way of doing that? Regards, Erik
    Erik Holstad
    Nov 15, 2008 at 1:08 am
    Nov 17, 2008 at 4:27 pm
  • Hey all, I noticed that the maximum throttle for the datanode block scanner is hardcoded at 8MB/s. I think this is insufficient; on a fully loaded Sun Thumper, a full scan at 8MB/s would take ...
    Brian Bockelman
    Nov 13, 2008 at 4:54 am
    Nov 14, 2008 at 10:30 am
  • Hi, Is there any utility for Hadoop files which works the same as RandomAccessFile in Java? Thanks, Wasim
    Wasim Bari
    Nov 13, 2008 at 7:40 pm
    Nov 13, 2008 at 11:12 pm
  • Hello all, We are planning to host a Hadoop Beijing meeting next Sunday (23rd of Nov.). We now welcome speakers and participants! If you are interested in cloud computing topics and you can join us ...
    永强 何
    Nov 12, 2008 at 6:41 am
    Nov 13, 2008 at 2:19 am
  • Hello! I'd like to ask for your help with a libhdfs-related problem. I'm trying to perform HDFS tests from C by using the libhdfs API. I created a test program that measures the creation times of 1MB, ...
    Tamás Szokol
    Nov 6, 2008 at 5:32 pm
    Nov 7, 2008 at 11:43 pm
  • Hello -- I am testing fuse-dfs on a cluster running Hadoop 0.18.1. I can get reads to work properly, but when I try write operations, I get the following: % ./fuse_dfs_wrapper.sh --debug ...
    Brian Karlak
    Nov 7, 2008 at 3:29 pm
    Nov 7, 2008 at 11:31 pm
  • Is it possible to use Hadoop over the Internet? frances.x10hosting.com /// My web page ///
    Francesc Bruguera
    Nov 6, 2008 at 7:48 pm
    Nov 7, 2008 at 5:45 am
  • Hi, I am trying to use hadoop 0.18.1. After I start Hadoop, I am able to see the namenode running on the master, but the datanode on the client machine is unable to connect to the namenode. I use 2 ...
    Srikanth Bondalapati
    Nov 4, 2008 at 5:12 pm
    Nov 5, 2008 at 4:33 pm
  • Dear Hadoop Users and Developers, I have a requirement to monitor the hadoop cluster by using X-Trace. I found these patches on http://issues.apache.org/jira/browse/HADOOP-4049, but when I try to ...
    V Schnabel
    Nov 3, 2008 at 12:55 pm
    Nov 4, 2008 at 4:14 pm
  • Hi, Please suggest a way out. My interest is to bypass ssh and upload files from the local filesystem to HDFS without the use of the ssh service. Regards.
    Nov 27, 2008 at 5:36 pm
    Nov 28, 2008 at 5:25 pm
  • Hi, I am getting "Check sum ok was sent" errors when I am using hadoop. Can someone please let me know why this error occurs and how to avoid it? It was running perfectly fine when I used ...
    Palleti, Pallavi
    Nov 27, 2008 at 6:10 pm
    Nov 28, 2008 at 6:04 am
  • Hi all, Can someone please guide me on how to get a directory listing of files on HDFS using the Java API (0.19.0)? Regards, Shane
    Shane Butler
    Nov 26, 2008 at 4:05 am
    Nov 26, 2008 at 11:21 pm
  • Hi all, I am testing the s3n file system facilities and trying to copy from HDFS to S3 in the original format, and I get the following errors: 08/11/24 05:04:49 INFO mapred.JobClient: Running job: job_200811240437_0004 ...
    Alexander Aristov
    Nov 24, 2008 at 10:12 am
    Nov 26, 2008 at 10:53 pm
  • Dear all, Does anyone know how to integrate Hadoop into web applications? I want to start a Hadoop job from a Java Servlet (in a web server servlet container), then get the result and send it back ...
    Nov 24, 2008 at 1:41 am
    Nov 25, 2008 at 3:18 am
  • Hi all, I am running an MR job which is scanning 130M records and then trying to group them into around 64,000 files. The Map does the grouping of the records by determining the key, and then I use a ...
    Tim robertson
    Nov 24, 2008 at 6:22 am
    Nov 24, 2008 at 7:55 am
  • Hi, My map job has some user defined counters, these are displayed correctly after the job is finished, but while the job is running they only show up intermittently on the jobdetails.jsp page. ...
    Arthur van Hoff
    Nov 21, 2008 at 1:25 am
    Nov 22, 2008 at 10:09 pm
Group Navigation
Period: Nov 2008
Group Overview
Group: common-user @

174 users for November 2008

Alex Loddengaard: 25 posts, Lohit: 18 posts, Tim robertson: 18 posts, Sagar Naik: 16 posts, Brian Bockelman: 15 posts, Steve Loughran: 15 posts, Mithila Nagendra: 13 posts, Pete Wyckoff: 13 posts, Alexander Aristov: 12 posts, Owen O'Malley: 12 posts, Amareshwari Sriramadasu: 11 posts, Raghu Angadi: 11 posts, Dhruba Borthakur: 9 posts, Aaron Kimball: 8 posts, Allen Wittenauer: 8 posts, Amar Kamat: 8 posts, Ricky Ho: 8 posts, Saptarshi Guha: 8 posts, ZhiHong Fu: 8 posts, Bryan Duxbury: 7 posts