104 discussions - 362 posts

  • Hi, We have an application which generates SQL queries and connects to an RDBMS using connectors like JDBC for MySQL. Now if we generate HQL using our application, is there any way to connect to ...
    Sandeep Reddy P
    Jul 5, 2012 at 3:03 pm
    Jul 5, 2012 at 8:11 pm
  • Hi All, I am getting a problem where the job runs in the LocalJobRunner rather than in the cluster environment, and when I run the job I am not able to see the job id in the Resource Manager UI. Can ...
    abhiTowson cal
    Jul 29, 2012 at 6:31 pm
    Jul 29, 2012 at 10:48 pm
  • Hi, I'm trying to load data into HDFS from the local Linux file system using Java code from a Windows machine, but I'm getting the error java.lang.IllegalArgumentException: Wrong FS ...
    Sandeep Reddy P
    Jul 18, 2012 at 3:33 pm
    Jul 30, 2012 at 8:24 pm
  • I am writing a sample application to analyze some log files of webpage accesses. Basically, the log files record which products were accessed, and on what date. I want to write a MapReduce program ...
    Shailesh Samudrala
    Jul 3, 2012 at 10:24 pm
    Jul 5, 2012 at 7:40 am
  • Hi All, I have a Hadoop 2.0 alpha (CDH4) Hadoop/HBase cluster running on CentOS 6.0. The cluster has 4 admin nodes and 8 data nodes. I have the RM and History Server running on one machine. RM web ...
    Anil gupta
    Jul 27, 2012 at 6:24 pm
    Aug 2, 2012 at 11:26 pm
  • I set the following params to be false in my pig script (0.10.0) SET mapred.map.tasks.speculative.execution false; SET mapred.reduce.tasks.speculative.execution false; I also verified in the ...
    Jul 12, 2012 at 3:47 am
    Jul 12, 2012 at 6:40 am
  • Hi User list, While I am trying to remove the Hadoop version, I am getting the following error. root@md-trngpoc1:/usr/local/hadoop_dir# sudo apt-get remove hadoop* Reading package lists... Done Building ...
    prabhu K
    Jul 6, 2012 at 8:17 am
    Jul 6, 2012 at 1:08 pm
  • Hi all, I am trying to set up a one-way cross-realm trust between an MIT KDC and an Active Directory server, and up to now I have not succeeded. I hope someone on this list will be able to help ...
    Ivan Frain
    Jul 25, 2012 at 9:30 am
    Oct 15, 2012 at 1:16 pm
  • Hey guys, I have a cluster with 11 nodes (1 NN and 10 DNs) which is running and working. However my datanodes keep having the same errors, over and over. I googled the problems and tried different ...
    Pablo Musa
    Jul 20, 2012 at 1:07 pm
    Jul 23, 2012 at 2:46 pm
  • Hi, Is it possible to implement triggers, listeners, or observers in the Hadoop filesystem, so that some change in the filesystem notifies a client application? -- Thanks, sandeep
    Sandeep Reddy P
    Jul 26, 2012 at 2:09 am
    Dec 1, 2012 at 7:40 am
  • Re-posting as I haven't got a solution yet. Sorry for spamming. I won't be able to proceed in my code until I get a JSON response using AppMaster REST URL. :( Thanks, Prajakta
    Prajakta Kalmegh
    Jul 6, 2012 at 8:30 am
    Jul 25, 2012 at 8:09 pm
  • Hi, I am a complete noob with Hadoop and MapReduce and I have a question that is probably silly, but I still don't know the answer. For the purposes of discussion I'll assume that I'm using a ...
    Peter Marron
    Jul 23, 2012 at 2:25 pm
    Jul 23, 2012 at 11:06 pm
  • Hi, The security documentation specifies how to test a secure cluster by using kinit, thus adding the Kerberos principal TGT to the ticket cache, which the hadoop client code uses to acquire ...
    Tony Dean
    Jul 1, 2012 at 5:46 pm
    Jul 2, 2012 at 8:46 pm
  • Hi, I just set up a 2 node POC cluster and I am currently having an issue with it. I ran a wordcount MR test on my cluster to see if it was working and noticed that the Web ui at localhost:50030 ...
    Barry, Sean F
    Jul 26, 2012 at 11:20 pm
    Aug 1, 2012 at 5:41 pm
  • Hi Robert I figured out the problem just now. To avoid the below error, I had to set the 'hadoop.http.staticuser.user' property in core-site.xml (defaults to dr.who). I can now get runtime data from ...
    Prajakta Kalmegh
    Jul 9, 2012 at 11:08 am
    Jul 27, 2012 at 4:09 pm
  • I am out of the office until 07/25/2012. I am out of office. For HAMSTER related things, you can contact Jason(Deng Peng Zhou/China/IBM) For CFM related things, you can contact Daniel(Liang SH ...
    Yuan Jin
    Jul 23, 2012 at 8:15 pm
    Jul 24, 2012 at 2:45 am
  • Hello I'm new to Hadoop and I'm trying to do something I *think* should be easy but having some trouble. Here's the details. 1. I'm running Hadoop version 1.0.2 2. I have a 2 Node Hadoop Cluster up ...
    Corbett Martin
    Jul 18, 2012 at 4:52 pm
    Jul 19, 2012 at 1:02 am
  • Hi, I can't get HDFS to leave safe mode automatically. Here is what I did: -- there was a dead node -- I stopped dfs -- I restarted dfs -- Safe mode wouldn't leave automatically I am using ...
    Juan Pino
    Jul 13, 2012 at 1:36 pm
    Jul 18, 2012 at 3:47 pm
  • Hi all, I am trying to start the NodeManager but it does not start. I have installed CDH4 and YARN; all datanodes are running, and the Resource Manager is also running. When I check the log files, it says CONNECTION ...
    abhiTowson cal
    Jul 28, 2012 at 3:16 am
    Jul 29, 2012 at 12:19 am
  • If I set my reducer output to the MapFile output format and the job has, say, 100 reducers, will the output generate 100 different index files (one for each reducer) or one index file for all the ...
    Mike S
    Jul 23, 2012 at 8:10 pm
    Jul 27, 2012 at 10:08 pm
  • Hello, I'm trying to trigger a Mahout job from inside my Java application (running in Eclipse), and get it running on my cluster. I have a main class that simply contains: String[] args = new ...
    Steve Armstrong
    Jul 26, 2012 at 11:19 pm
    Jul 27, 2012 at 11:37 am
  • Does anyone know about the feature of using multiple threads in a map task or reduce task? Is it a good idea to use multithreading in a map task?
    Jul 26, 2012 at 2:57 am
    Jul 26, 2012 at 1:36 pm
  • Hi, just a short question. Is there any way to figure out the physical storage location of a given block? I don't mean just a list of hostnames (which I know how to obtain), but actually the file ...
    Jul 25, 2012 at 9:17 pm
    Jul 25, 2012 at 10:52 pm
  • Strictly from speed and performance perspective, is Avro as fast as protocol buffer?
    Mike S
    Jul 16, 2012 at 8:50 pm
    Jul 20, 2012 at 10:10 pm
  • Hi users, I have installed Hadoop version 1.0.3 and completed the single-node setup. When I then run the start-all.sh script, I am getting the following output ...
    prabhu K
    Jul 9, 2012 at 11:59 am
    Jul 9, 2012 at 2:28 pm
  • Hi guys, I am sorry to bother you, but I have a cluster already configured and running with the following packages (cdh3): hadoop-0.20.noarch 0.20.2+923.256-1 hadoop-hbase.noarch 0.90.6+84.29-1 ...
    Pablo Musa
    Jul 3, 2012 at 8:18 pm
    Jul 5, 2012 at 5:35 pm
  • Hello, I have downloaded hadoop_1.0.3-1_x86_64.deb from the Hadoop official website, and installed it with root privileges using the command dpkg -i hadoop_1.0.3-1_x86_64.deb. But there is an error: chown ...
    Ying Huang
    Jul 4, 2012 at 3:03 am
    Jul 4, 2012 at 4:55 am
  • I am seeing these exceptions; does anyone know what they might be caused by? A case of a corrupt file? java.io.IOException: too many length or distance symbols at ...
    Prashant Kommireddi
    Jul 20, 2012 at 8:36 pm
    Jul 29, 2012 at 7:51 pm
  • I'm plagued with this error: Retrying connect to server: localhost/ I'm trying to set up hadoop on a new machine, just a basic pseudo-distributed setup. I've done this quite a few ...
    Keith Wiley
    Jul 27, 2012 at 6:23 pm
    Jul 27, 2012 at 8:54 pm
  • Hi all, I installed Hadoop 1.0.3 and am running it as a single node cluster. I noticed that start-daemon.sh only starts Namenode, Secondary Namenode and the JobTracker daemon. Datanode and ...
    Dinesh Joshi
    Jul 27, 2012 at 9:23 am
    Jul 27, 2012 at 12:19 pm
  • Hi guys : I want my tasks to end/fail, but I don't want to kill my entire hadoop job. I have a hadoop job that runs 5 hadoop jobs in a row. I'm on the last of those sub-jobs, and want to fail all ...
    Jay vyas
    Jul 20, 2012 at 9:18 pm
    Jul 21, 2012 at 3:39 am
  • How do I manage concurrency in Hadoop like we do in Teradata? We need read and write locks when the same table is simultaneously hit with a read query and a write query.
    Saubhagya dey
    Jul 18, 2012 at 3:39 pm
    Jul 19, 2012 at 3:27 am
  • Hi All, I was wondering if anyone could help me figure out what's going wrong in my five node Hadoop cluster, please? It consists of: 1. NameNode hduser@namenode:/usr/local/hadoop$ jps 13049 DataNode ...
    Ronan Lehane
    Jul 16, 2012 at 6:35 pm
    Jul 17, 2012 at 7:28 pm
  • Hi, I have done this setup numerous times, but this time I did it after a break. I managed to get the cluster up and running fine, but when I do hadoop dfs -ls / it actually shows me the contents of the Linux file ...
    Nitin Pawar
    Jul 13, 2012 at 1:11 pm
    Jul 16, 2012 at 1:45 pm
  • To debug a specific file, I need to run Hadoop in Eclipse, and Eclipse keeps throwing the Too Many Open Files exception. I followed the posts out there to increase the number of open files per process ...
    Mike S
    Jul 11, 2012 at 9:34 pm
    Jul 12, 2012 at 2:55 pm
  • Hi all, If the job is failing because of some bad records, how would I know which records are bad? Can I put them in a log file and skip those records? Regards Abhi Sent from my iPhone
    Jul 7, 2012 at 8:42 pm
    Jul 7, 2012 at 10:55 pm
  • I have one MBP with 10.7.4 and one laptop with Ubuntu 12.04. Is it possible to set up a hadoop cluster by such mixed environment? Best Regards, -- Welcome to my ET Blog http://www.jdxyw.com
    Yongwei Xing
    Jul 6, 2012 at 8:20 am
    Jul 6, 2012 at 3:33 pm
  • Hi, The input of my map reduce is a binary file with no record begin and end markers. The only thing is that each record is a fixed 180 bytes in size in the binary file. How do I make Hadoop properly ...
    MJ Sam
    Jul 5, 2012 at 6:55 pm
    Jul 6, 2012 at 1:07 am
  • I am wondering what's the right way to go about designing reading input and output where file format may change over period. For instance we might start with "field1,field2,field3" but at some point ...
    Mohit Anchlia
    Jul 2, 2012 at 9:10 pm
    Jul 3, 2012 at 4:40 am
  • Hi, Is there a way to get around the 1KB limitation of the hadoop fs -tail command (http://hadoop.apache.org/common/docs/r0.20.0/hdfs_shell.html#tail)? In my application some of the records can have ...
    Sukhendu Chakraborty
    Jul 16, 2012 at 11:50 pm
    Sep 8, 2012 at 7:35 pm
  • "hadoop.tmp.dir" points to the directory on local disk to store intermediate task related data. It's currently mounted to "/tmp/hadoop" for me. Some of my jobs are running and Filesystem on which ...
    Abhay Ratnaparkhi
    Jul 25, 2012 at 5:14 pm
    Jul 27, 2012 at 5:18 pm
  • Hello. I have a problem with the filesystem closing. The filesystem was closed while the Hive query was running. It is a 'select' query and the data size is about 1TB. I'm using hadoop-0.20.2 and ...
    Jul 10, 2012 at 4:29 pm
    Jul 26, 2012 at 1:28 pm
  • Hi, I was planning to use DataJoin jar (located in $HADOOP_INSTALL/contrib/datajoin) for reduce-side join (version 1.0.3). It looks like DataJoinMapperBase implements Mapper interface (according to ...
    Abhinav M Kulkarni
    Jul 23, 2012 at 5:22 am
    Jul 25, 2012 at 8:28 pm
  • Since FileSystem is a Closeable, I would expect code using it to be like this: FileSystem fs = path.getFileSystem(conf); try { // do something with fs, such as read from the path } finally { ...
    Koert Kuipers
    Jul 24, 2012 at 2:35 pm
    Jul 24, 2012 at 5:51 pm
  • I'm curious about the relationship between the namenode/job/task trackers and the machine's web server? Do the former require the latter? Does successful connection to the trackers imply that the ...
    Keith Wiley
    Jul 20, 2012 at 10:28 pm
    Jul 23, 2012 at 5:33 am
  • Hadoop newbie here... Trying to make a REST call from curl but no matter what I try I'm always getting this exception: curl -i -X PUT ...
    Corbett Martin
    Jul 18, 2012 at 8:48 pm
    Jul 18, 2012 at 9:32 pm
  • Hi all, I have a Hadoop cluster which uses Samba to map an Active Directory domain to my CentOS 5.7 Hadoop cluster. However, I notice a strange mismatch with groups. Does anyone have any debugging ...
    Clay B.
    Jul 16, 2012 at 7:06 pm
    Jul 16, 2012 at 8:44 pm
  • Hi Users, Can anyone please provide me with in-depth Hadoop administration web links and PPTs? Thanks, Prabhu.
    prabhu K
    Jul 15, 2012 at 12:18 pm
    Jul 15, 2012 at 1:36 pm
  • Hi guys : What's the idiomatic way to iterate through the k/v pairs in a text file? I've been playing with almost everything with SequenceFiles and almost forgot :) my text output actually has ...
    Jay Vyas
    Jul 14, 2012 at 12:30 am
    Jul 14, 2012 at 2:19 am
  • Hi all, Does CDH4 have an append option? Regards Abhi
    abhiTowson cal
    Jul 13, 2012 at 8:20 pm
    Jul 13, 2012 at 8:30 pm
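For the speculative-execution thread above: the Pig SET statements in the post map directly onto the classic (pre-YARN) MapReduce job properties, so the same effect can be configured cluster-wide or per-job. A minimal sketch of the equivalent mapred-site.xml entries, using the property names taken from the post itself:

```xml
<!-- mapred-site.xml: disable speculative task attempts (MR1-era property names) -->
<property>
  <name>mapred.map.tasks.speculative.execution</name>
  <value>false</value>
</property>
<property>
  <name>mapred.reduce.tasks.speculative.execution</name>
  <value>false</value>
</property>
```

These can also be set per-job on the job's Configuration; a setting in the Pig script only affects jobs launched by that script.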
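For the "Wrong FS" thread above: that IllegalArgumentException is typically raised when the scheme and authority of the Path being used do not match the default filesystem the client's Configuration resolves, which commonly happens when the remote (here, Windows) client does not carry the cluster's core-site.xml. A hedged sketch of the client-side setting, with a purely hypothetical namenode host and port:

```xml
<!-- core-site.xml on the client; namenode.example.com:8020 is a placeholder -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode.example.com:8020</value>
</property>
```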
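For the fixed-size binary record thread above: the core of any solution is slicing the stream into 180-byte chunks and ensuring input splits start on a multiple of the record size (newer Hadoop releases ship a FixedLengthInputFormat for exactly this case). A self-contained sketch of the slicing a custom RecordReader would perform; the class and method names here are illustrative, not Hadoop APIs:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: splits a buffer of back-to-back fixed-size records
// into individual records -- the per-split work a custom RecordReader does.
public class FixedRecords {
    public static List<byte[]> split(byte[] data, int recordSize) {
        List<byte[]> records = new ArrayList<>();
        // Walk the buffer in recordSize steps; a trailing partial record
        // (which a correctly aligned split should never contain) is ignored.
        for (int off = 0; off + recordSize <= data.length; off += recordSize) {
            byte[] rec = new byte[recordSize];
            System.arraycopy(data, off, rec, 0, recordSize);
            records.add(rec);
        }
        return records;
    }
}
```

In a real job the InputFormat must also round split boundaries to multiples of 180 bytes, otherwise a mapper can start mid-record.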
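For the hadoop fs -tail thread above: the shell command's fixed 1 KB window is a convenience, and the same seek-then-read idea works with any seekable stream (on HDFS, an FSDataInputStream), so an application can read as large a tail as it wants. A local-filesystem sketch of that idea using plain java.io:

```java
import java.io.IOException;
import java.io.RandomAccessFile;

// Read up to the last maxBytes of a file by seeking near the end first --
// the same idea as `hadoop fs -tail`, but without the fixed 1 KB window.
public class Tail {
    public static byte[] lastBytes(String path, long maxBytes) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
            long start = Math.max(0, raf.length() - maxBytes);
            raf.seek(start);
            byte[] buf = new byte[(int) (raf.length() - start)];
            raf.readFully(buf);
            return buf;
        }
    }
}
```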
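For the FileSystem/Closeable thread above: the try/finally shape quoted in the post is what Java 7's try-with-resources expresses directly, with the caveat the thread is really about: path.getFileSystem(conf) normally returns a cached FileSystem instance shared JVM-wide, so closing it can break other callers. A generic sketch of the close-on-exit pattern with a plain java.io stream the caller does own:

```java
import java.io.IOException;
import java.io.InputStream;

// try-with-resources closes the stream even if the body throws -- the same
// guarantee the try/finally in the post is after. This is safe for a stream
// you own; Hadoop's cached, shared FileSystem instances are the exception.
public class ReadAll {
    public static int countBytes(InputStream in) throws IOException {
        try (InputStream s = in) {
            int n = 0;
            while (s.read() != -1) n++;
            return n;
        }
    }
}
```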
Period: Jul 2012
Group Overview
Group: common-user @

125 users for July 2012

Harsh J: 41 posts, abhiTowson cal: 14 posts, Sandeep Reddy P: 14 posts, Anil Gupta: 12 posts, prabhu K: 10 posts, MJ Sam: 9 posts, Prajakta Kalmegh: 9 posts, Robert Evans: 9 posts, Bejoy KS: 7 posts, Nitin Pawar: 7 posts, Edward Capriolo: 6 posts, Jay Vyas: 6 posts, Michael Segel: 6 posts, Pablo Musa: 6 posts, Syed kather: 6 posts, Tony Dean: 6 posts, Yang: 6 posts, Chen He: 5 posts, Corbett Martin: 5 posts, Abhinav M Kulkarni: 4 posts