hdfs-user @ hadoop.apache.org - January 2011

32 discussions - 101 posts

  • I have a very general question on the usefulness of HDFS for purposes other than running distributed compute jobs for Hadoop. Hadoop and HDFS seem very popular these days, but the use of HDFS for ...
    Nathan Rutman
    Jan 25, 2011 at 8:37 pm
    Feb 5, 2011 at 6:52 am
  • We were restarting the namenode and datanode processes on our cluster (due to changing some configuration options); however, the namenode failed to restart with the error I've pasted below. (If it ...
    Adam Phelps
    Jan 12, 2011 at 6:44 pm
    Feb 23, 2011 at 10:30 pm
  • Dear all, I want to know if there is any class or any way to access the file list and metadata from a remote HDFS namenode. For example, there are two hadoop instances, which means two namenodes (nn1 ... [see the listing sketch after this list]
    Simon
    Jan 11, 2011 at 7:37 pm
    Jan 12, 2011 at 4:42 am
  • I see that there was a thread on this in December, but I can't retrieve it to reply properly, oh well. So, I have a 30 node cluster (plus separate namenode, jobtracker, etc). Each is a 12 disk ...
    Jonathan Disher
    Jan 3, 2011 at 3:20 am
    Jan 4, 2011 at 5:34 pm
  • I'm getting the below exception on my secondary namenode. As far as I can tell the edits file isn't being reconciled as it should be (i.e. edits.NEW continues to grow) on the namenode. I've searched ...
    Tyler Coffin
    Jan 5, 2011 at 4:33 pm
    Jan 7, 2011 at 4:18 pm
  • Hi guys, I have a small cluster where each machine has two NICs: one is configured with an external IP and the other with an internal IP. Right now all the machines are communicating with each ...
    Felix gao
    Jan 25, 2011 at 10:25 pm
    Feb 1, 2011 at 8:22 pm
  • In case others are interested, I ran a comparison of TestDFSIO on HDFS vs Lustre. This is on an 8-node Infiniband-connected cluster. For the Lustre test, we replaced the HTTP transfer during the ...
    Nathan Rutman
    Jan 27, 2011 at 9:44 pm
    Jan 29, 2011 at 6:34 am
  • Moving discussion to the hdfs-user mailing list: hdfs-user@hadoop.apache.org It would help to know what caused the RemoteException to be thrown. -- Harsh J www.harshj.com
    Harsh J
    Jan 29, 2011 at 5:54 am
    Jan 30, 2011 at 12:43 am
  • Hi, I have gone through the file deletion flow and learned that the Replication Monitor is responsible for file deletions, and these configurations will affect the block deletion ...
    Sravankumar
    Jan 13, 2011 at 5:33 pm
    Jan 17, 2011 at 5:41 pm
  • Hi all, I have created a Hadoop cluster on 2 hosts. When I enter "start-dfs.sh" on the namenode host, the console shows this log: starting namenode, logging to ...
    姜晓东
    Jan 13, 2011 at 3:29 am
    Jan 14, 2011 at 10:06 am
  • Hi, I'm planning a YouTube-like online video site and looking for a suitable file system. The high performance and reliability of HDFS make it seem like a great candidate. But somebody told me that ...
    KevinKuei
    Jan 6, 2011 at 4:05 am
    Jan 6, 2011 at 4:02 pm
  • Hi, Our current cluster runs with 22 data nodes, each with 4TB. We should be installing new data nodes on this existing cluster, but each will have 8TB of storage capacity. I am wondering how will ...
    David Ginzburg
    Jan 20, 2011 at 8:42 am
    Jan 23, 2011 at 1:09 pm
  • I am seeing this log on the screen when running from my client (hadoop -copyFromLocal) 11/01/04 06:49:24 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.ConnectException: ...
    Hiller, Dean (Contractor)
    Jan 4, 2011 at 8:56 pm
    Jan 4, 2011 at 11:45 pm
  • Hello Hadoopers, I need to distcp data across two clusters. For security reasons I cannot use hdfs-based distcp. HFTP-based distcp is failing with the following IOException. Stack trace: Copy failed: ...
    Ravi Phulari
    Jan 4, 2011 at 10:38 pm
    Jan 4, 2011 at 10:58 pm
  • I'd like to understand how HDFS handles DataNode failure gracefully. Let's suppose a replication factor of 3 is used in HDFS for this discussion. After 'DataStreamer' receives a list of DataNodes A, ...
    Sean Bigdatafun
    Jan 3, 2011 at 8:50 am
    Jan 4, 2011 at 5:57 am
  • GZIP is not splittable. Does that mean a GZIP block-compressed SequenceFile can't take advantage of MR parallelism? How do I control the size of the block to be compressed in a SequenceFile? [see the SequenceFile sketch after this list] -- Sean
    Sean Bigdatafun
    Jan 31, 2011 at 8:26 am
    Jan 31, 2011 at 5:12 pm
  • Is there a tool similar to rsync for HDFS? I would like to check ctime and size (the default behavior of rsync) and then sync if necessary. This would be a nice feature to add in the dfs ... [see the comparison sketch after this list]
    Mag Gam
    Jan 12, 2011 at 2:33 am
    Jan 12, 2011 at 7:58 pm
  • Hi, I found out that: https://github.com/cloudera/hue/blob/master/desktop/libs/hadoop/src/hadoop/fs/hadoopfs.py can be used to write data directly to HDFS without writing to a local filesystem but I ... [see the direct-write sketch after this list]
    Mapred Learn
    Jan 12, 2011 at 10:09 am
    Jan 12, 2011 at 4:12 pm
  • No, he meant to move the discussion to the hdfs-user@hadoop.apache.org list for HDFS queries. Sending to hdfs-user, bcc'ing to general. [Is this the right way?] -- Harsh J www.harshj.com
    Harsh J
    Jan 6, 2011 at 12:44 pm
    Jan 6, 2011 at 12:57 pm
  • Please submit user queries to the appropriate user mailing list and not development ones. Moving this question to the hdfs-user list at hdfs-user@hadoop.apache.org -- Harsh J www.harshj.com
    Harsh J
    Jan 29, 2011 at 5:05 pm
    Jan 29, 2011 at 5:05 pm
  • Hey hdfs gurus - One of my clusters is going through disk upgrades and not all machines have a homogeneous disk layout during the transition period. At first I started looking into auto-generating ...
    Travis Crawford
    Jan 27, 2011 at 10:55 pm
    Jan 27, 2011 at 10:55 pm
  • What is the default timeout value to detect a dead node? I would like to decrease this if possible. Rita [see the timeout sketch after this list]
    Rita
    Jan 26, 2011 at 11:20 pm
    Jan 26, 2011 at 11:20 pm
  • Dear all: If the namenode and secondary namenode both crash and the metadata & edit log no longer exist, is there any solution or process that can recover or rebuild the metadata for the files stored in HDFS? Best ...
    Yhuangq
    Jan 26, 2011 at 5:35 am
    Jan 26, 2011 at 5:35 am
  • We are running hadoop-0.20.1. I did not set this cluster up, and the person who did is unavailable, so I apologize for any of the following that is unclear. We would like to (re)start a secondary ...
    Charlie w
    Jan 22, 2011 at 12:37 am
    Jan 22, 2011 at 12:37 am
  • Hi Guys, I'm copying a big file (9GB) to HDFS using the command line interface: zcat wpc_ALL_200910.log.gz | dfs -copyFromLocal - /user/cdh-hadoop/mscdata/wpc_ALL_200910.log I'm getting the ... [see the direct-write sketch after this list]
    Charles Gonçalves
    Jan 19, 2011 at 1:06 am
    Jan 19, 2011 at 1:06 am
  • Hi, I am using hadoop-0.20.2 and hbase-0.20.6. HBase Region Servers shut down because of the following error in file/block creation. The following is the sequence of events related to a ...
    Charan kumar
    Jan 13, 2011 at 8:30 pm
    Jan 13, 2011 at 8:30 pm
  • Hello Friends, I am seeing Hadoop log timestamps & file timestamps that are not the same as the system time. I found that this problem was discussed on the mailing list earlier but there was no solution posted. ...
    Ravi Phulari
    Jan 11, 2011 at 7:24 am
    Jan 11, 2011 at 7:24 am
  • Hello Friends, I am seeing Hadoop log timestamps & file timestamps that are not the same as the system time. I found that this problem was discussed on the mailing list earlier but there was no solution posted. ...
    Ravi Phulari
    Jan 10, 2011 at 10:02 pm
    Jan 10, 2011 at 10:02 pm
  • Well, now the decommission is still running 12 hours later. I only have 1.8 GB in HDFS and only .06+.25 needs to be moved. Should this really be taking more than 12 hours? Here is the report: Name: ...
    Hiller, Dean (Contractor)
    Jan 4, 2011 at 1:13 pm
    Jan 4, 2011 at 1:13 pm
  • Thanks, Dean ...
    Hiller, Dean (Contractor)
    Jan 4, 2011 at 1:29 am
    Jan 4, 2011 at 1:29 am
  • So I edited /etc/hosts with the ugly 127.0.0.1 <FQDN> <hostname> and nodes now decommission. This is horrible from a node-addition perspective. We want to quickly add a box, upload the same image, turn ...
    Hiller, Dean (Contractor)
    Jan 4, 2011 at 1:23 am
    Jan 4, 2011 at 1:23 am
  • Luckily I am in dev so not a biggie, but the datanode seems to be reading from /etc/hosts (i.e. Java calls to InetAddress.getLocalHost return 127.0.0.1 instead of the IP) when displaying the name of the ... [see the getLocalHost sketch after this list]
    Hiller, Dean (Contractor)
    Jan 4, 2011 at 1:03 am
    Jan 4, 2011 at 1:03 am
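
A few sketches for recurring questions in the threads above follow. They are illustrative only; every host name, port, and path in them is a placeholder.

For the remote-namenode listing question, one option is to open a FileSystem against the remote namenode's URI and read names and metadata with listStatus. A minimal sketch, assuming the remote RPC address is hdfs://nn2.example.com:8020:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class RemoteListing {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Placeholder host/port: point the URI at the remote namenode (nn2).
            FileSystem remoteFs = FileSystem.get(URI.create("hdfs://nn2.example.com:8020/"), conf);
            // listStatus returns each entry's path plus metadata (length, modification time, owner, ...).
            for (FileStatus status : remoteFs.listStatus(new Path("/"))) {
                System.out.println(status.getPath() + "\t" + status.getLen()
                        + "\t" + status.getModificationTime());
            }
        }
    }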
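
For the GZIP/SequenceFile question, a sketch of writing a block-compressed SequenceFile. io.seqfile.compress.blocksize controls how much key/value data is buffered before each compressed block is flushed; the output path is a placeholder:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.compress.GzipCodec;

    public class BlockCompressedWriter {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Bytes of key/value data buffered before a compressed block is written
            // (the default is about 1 MB); larger blocks compress better but use more memory.
            conf.setInt("io.seqfile.compress.blocksize", 1000000);
            FileSystem fs = FileSystem.get(conf);
            SequenceFile.Writer writer = SequenceFile.createWriter(
                    fs, conf, new Path("/tmp/example.seq"),      // placeholder path
                    LongWritable.class, Text.class,
                    SequenceFile.CompressionType.BLOCK, new GzipCodec());
            try {
                writer.append(new LongWritable(1L), new Text("record"));
            } finally {
                writer.close();
            }
        }
    }

Because compression is applied per block inside the SequenceFile, the file remains splittable at its sync markers even though a raw .gz file is not.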
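
For the rsync-for-HDFS thread, there is no built-in equivalent, but a size/mtime comparison in the spirit of rsync's default check can be sketched with FileStatus (HDFS exposes modification time rather than ctime). Both namenode URIs and the path are placeholders:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FileUtil;
    import org.apache.hadoop.fs.Path;

    public class NeedsSync {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem src = FileSystem.get(URI.create("hdfs://nn1.example.com:8020/"), conf);
            FileSystem dst = FileSystem.get(URI.create("hdfs://nn2.example.com:8020/"), conf);
            Path p = new Path("/user/example/data.log");         // placeholder path

            boolean copy;
            if (!dst.exists(p)) {
                copy = true;                                     // missing on the destination
            } else {
                FileStatus s = src.getFileStatus(p);
                FileStatus d = dst.getFileStatus(p);
                // Size differs or the source copy is newer.
                copy = s.getLen() != d.getLen()
                        || s.getModificationTime() > d.getModificationTime();
            }
            if (copy) {
                // Single-file copy; distcp is the scalable option for whole trees.
                FileUtil.copy(src, p, dst, p, false, conf);
            }
        }
    }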
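
For the threads about writing data to HDFS without staging it on the local filesystem (the hue hadoopfs.py question and the zcat | copyFromLocal pipeline), the same effect is available from Java by opening an output stream on the target path and piping stdin into it. The path is a placeholder:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class StdinToHdfs {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            // Data goes straight from stdin into the HDFS write pipeline; nothing
            // is written to the local filesystem first.
            FSDataOutputStream out = fs.create(new Path("/user/example/streamed.dat")); // placeholder
            IOUtils.copyBytes(System.in, out, conf, true);       // 'true' closes both streams at the end
        }
    }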
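
For the dead-node timeout question, the namenode declares a datanode dead after 2 * heartbeat recheck interval + 10 * heartbeat interval, which is 10.5 minutes with the defaults (5-minute recheck, 3-second heartbeat). A sketch of the arithmetic; the property keys below are the 0.20-era spellings and should be checked against your release (newer releases use dfs.namenode.heartbeat.recheck-interval):

    import org.apache.hadoop.conf.Configuration;

    public class DeadNodeTimeout {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Key names are assumptions for 0.20-era releases; verify against your version.
            long recheckMs  = conf.getLong("heartbeat.recheck.interval", 5 * 60 * 1000L); // milliseconds
            long heartbeatS = conf.getLong("dfs.heartbeat.interval", 3L);                 // seconds
            long timeoutMs  = 2 * recheckMs + 10 * heartbeatS * 1000L;
            System.out.println("Dead-node detection timeout: " + (timeoutMs / 1000) + " s"); // 630 s by default
        }
    }

Lowering either value shortens detection time but makes it more likely that a briefly unresponsive node is declared dead and its blocks re-replicated unnecessarily.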
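
For the /etc/hosts threads, where the datanode reports 127.0.0.1 because InetAddress.getLocalHost resolves through the loopback entry, a quick check of what Java resolves on a given box:

    import java.net.InetAddress;

    public class LocalHostCheck {
        public static void main(String[] args) throws Exception {
            // Resolves the local hostname the same way the daemons do at startup.
            InetAddress addr = InetAddress.getLocalHost();
            System.out.println("hostname: " + addr.getHostName());
            System.out.println("address:  " + addr.getHostAddress());
        }
    }

If this prints 127.0.0.1, the hostname is mapped to loopback in /etc/hosts (or by the resolver), and that is the address the daemons will report.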

Group Overview
group: hdfs-user@hadoop.apache.org
categories: hadoop
discussions: 32
posts: 101
users: 40
website: hadoop.apache.org...
irc: #hadoop

Top posters for January 2011 (40 users total):

Nathan Rutman: 7 posts
Harsh J: 6 posts
Hiller, Dean (Contractor): 6 posts
Ravi Phulari: 5 posts
Todd Lipcon: 5 posts
Stu24mail: 4 posts
Adam Phelps: 4 posts
Gerrit Jansen van Vuuren: 4 posts
Simon: 4 posts
Allen Wittenauer: 3 posts
Ayon Sinha: 3 posts
Dhruba Borthakur: 3 posts
Eli Collins: 3 posts
Friso van Vollenhoven: 3 posts
Jonathan Disher: 3 posts
Mag Gam: 3 posts
Rita: 3 posts
Sean Bigdatafun: 3 posts
Tyler Coffin: 3 posts
David Ginzburg: 2 posts