
18 discussions - 49 posts

  • Had a really peculiar thing happen today: a file that a job of mine created on HDFS seems to have disappeared, and I'm scratching my head as to how this could have happened without any errors getting ...
    David Rosenstrauch
    Nov 11, 2010 at 5:31 pm
    Nov 18, 2010 at 6:58 pm
  • Hi all, After reading the appenddesign3.pdf in HDFS-256, and looking at the BlockReceiver.java code in 0.21.0, I am confused by the following. The document says that: *For each packet, a DataNode in ...
    Thanh Do
    Nov 11, 2010 at 4:26 am
    Nov 11, 2010 at 7:21 pm
  • Hi all, I checked the source code of Trash; it seems it will periodically remove files under ${USER_HOME}/.Trash, but I did not find the source that puts the files under ${USER_HOME}/.Trash when a user ...
    Jeff Zhang
    Nov 25, 2010 at 9:25 am
    Nov 30, 2010 at 5:11 am
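A likely answer to the question above: in Hadoop of this era, the move into `.Trash` happens on the client side, in the shell's delete path (`org.apache.hadoop.fs.Trash.moveToTrash`), not in the NameNode, which is why it is easy to miss when reading the server code. A minimal sketch with hypothetical paths, assuming `fs.trash.interval` is set above 0 in core-site.xml (otherwise `-rm` deletes immediately):

```shell
# Delete a file; the shell client moves it into the user's trash
# directory rather than removing it from the namespace outright:
hadoop fs -rm /user/jeff/data.txt

# The trashed copy can be inspected (and restored with -mv) until the
# trash checkpoint expires:
hadoop fs -ls /user/jeff/.Trash/Current
```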
  • hi guys, We plan to use Hadoop HDFS as the storage for lots of little files. According to the documentation, it is recommended to use Hadoop Archive to compress those little files to get better ...
    Jason Ji
    Nov 24, 2010 at 5:52 pm
    Nov 25, 2010 at 5:12 am
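For reference, a sketch of the Hadoop Archive workflow the poster is describing, with hypothetical paths and the 0.21-era command syntax. A `.har` packs many small files into a few large ones, so the NameNode tracks far fewer blocks and entries:

```shell
# Pack everything under /user/jason/input into files.har,
# written to /user/jason/archives (runs as a MapReduce job):
hadoop archive -archiveName files.har -p /user/jason/input /user/jason/archives

# The archive is read back through the har:// filesystem scheme:
hadoop fs -ls har:///user/jason/archives/files.har
```

Note that HAR files are read-only once created; adding files means building a new archive.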
  • Hi all, Can somebody let me know what this parameter is used for: dfs.datanode.transferTo.allowed? It is not in the default config, and maxChunksPerPacket depends on it. Thanks so much. Thanh
    Thanh Do
    Nov 14, 2010 at 5:31 pm
    Nov 19, 2010 at 11:22 pm
  • Hello I am not sure where to find the answers to questions such as below. Any pointer or answer would be appreciated. I would like to know some details such as how many heartbeats are sent per second ...
    Fatemeh Panahi
    Nov 15, 2010 at 9:26 pm
    Nov 15, 2010 at 9:35 pm
  • Hi, I was looking at the test cases for HDFS and found the following test - org.apache.hadoop.hdfs.TestSetTimes.testTimes Is this true? System.out.println("Creating testdir1 and ...
    Vivekanand Vellanki
    Nov 11, 2010 at 8:44 am
    Nov 11, 2010 at 12:55 pm
  • Hi all, When a datanode receives a block, it writes the block into two streams on disk: - the data stream (dataOut) - the checksum stream (checksumOut) While the checksumOut is created with ...
    Thanh Do
    Nov 4, 2010 at 9:58 pm
    Nov 6, 2010 at 3:15 pm
  • Hi! I'm having some trouble with Map/Reduce jobs failing due to HDFS errors. I've been digging around the logs trying to figure out what's happening, and I see the following in the datanode logs: ...
    Erik Forsberg
    Nov 24, 2010 at 9:30 am
    Dec 3, 2010 at 7:40 am
  • Hi list, We're considering providing our users with FTP and WebDAV interfaces (with software provided here: http://www.hadoop.iponweb.net/). These both support user accounts, so we'll be able to ...
    Evert Lammerts
    Nov 25, 2010 at 7:18 pm
    Nov 27, 2010 at 11:30 am
  • Hi all, Are there any benchmarks for HDFS available (measuring read/write throughput, latency and such)? It would be great if somebody could point me to any source. Thanks much Thanh
    Thanh Do
    Nov 23, 2010 at 1:24 am
    Nov 23, 2010 at 2:06 pm
  • Hi list, If I open up port 50070 on my namenode to the world, in order to provide the web interface to HDFS, will the /fsck namespace be available to arbitrary people? Cheers, Evert Lammerts ...
    Evert Lammerts
    Nov 19, 2010 at 10:39 am
    Nov 19, 2010 at 5:34 pm
  • Currently in my mini cluster I have one active and one backup NameNode. Whenever I need the backup NameNode to become the active/regular NameNode, I shut it down and restart it in active mode. As far as I understand ...
    Ozcan ILIKHAN
    Nov 18, 2010 at 3:20 am
    Nov 18, 2010 at 10:30 am
  • I have an Amazon cluster which is using HDFS (not S3). Is it possible to use distcp to copy files from an HDFS running on Amazon to another cluster? The other cluster is not running on Amazon. It ...
    Robert Goodman
    Nov 16, 2010 at 5:39 pm
    Nov 16, 2010 at 10:45 pm
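In principle yes: distcp only needs network reachability between the clusters. A sketch with hypothetical hostnames, following the common advice of the era: when the two clusters may run different Hadoop versions, read the source over HFTP (read-only and version-tolerant) and run the distcp job on the destination cluster:

```shell
# Source read over HFTP (namenode HTTP port, typically 50070),
# destination written over native HDFS RPC:
hadoop distcp hftp://ec2-namenode.example.com:50070/data \
              hdfs://local-namenode:8020/data
```

If both clusters run the same Hadoop version, `hdfs://` can be used on both sides. The practical obstacle with EC2 is usually firewalling: the destination cluster's task nodes must be able to reach the source namenode and every source datanode.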
  • We have a corrupted file which has only one block. It turns out that all checksum files of the replicas are corrupted... but the data files are OK... How to recover this file? I can think of trying ...
    Thanh Do
    Nov 25, 2010 at 2:35 am
    Nov 25, 2010 at 2:35 am
  • Hi Everyone, What options do I have to do something like a md5sum checksum on files/directories living on HDFS? We are making backups of files on hdfs and I'd like to do something similar to the ...
    Scott Golby
    Nov 19, 2010 at 5:36 pm
    Nov 19, 2010 at 5:36 pm
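One portable approach, sketched below with a hypothetical path: stream the file out of HDFS and hash it locally. Note that HDFS's own `FileSystem#getFileChecksum` returns an MD5-of-CRC32 composite, which is not comparable to a plain `md5sum` of the file bytes, so streaming is the simplest way to match checksums taken on local backups:

```shell
# Reads the whole file over the network, so this is expensive
# for large files, but the digest matches a local md5sum:
hadoop fs -cat /backups/part-00000 | md5sum
```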
  • Hello, I want to merge all files from output to one text file. I looked into http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/fs/FileUtil.html and found the copyMerge method. I am trying ...
    Pavel Nuzhdin
    Nov 7, 2010 at 4:50 pm
    Nov 7, 2010 at 4:50 pm
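Besides `FileUtil.copyMerge` from the Java API, the shell exposes the same merge as `getmerge`, which is often the quicker route. A sketch with hypothetical paths:

```shell
# Concatenate every file under the job output directory, in listing
# order, into a single local file:
hadoop fs -getmerge /user/pavel/job-output merged.txt

# If the merged result should live in HDFS rather than locally,
# push it back up afterwards:
hadoop fs -put merged.txt /user/pavel/merged.txt
```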
  • Hi all, Should dfs.datanode.max.xceivers be set above the maximum number of concurrent writers to that datanode? Say I expect a datanode in my system to receive 10 writers at the same time, should I set this ...
    Thanh Do
    Nov 5, 2010 at 6:19 pm
    Nov 5, 2010 at 6:19 pm
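For context on the question above: this setting caps the number of DataXceiver threads on a datanode, and both reads and writes consume one each (a write pipeline holds its thread for the life of the block), so it must comfortably exceed peak concurrent streams, not just writers. A sketch of an hdfs-site.xml entry with an assumed, commonly recommended value of the era; note that in 0.20-era releases the key was actually spelled with the transposed "ie", `dfs.datanode.max.xcievers`:

```xml
<!-- Each concurrent block read or write on a datanode consumes one
     xceiver thread; the value here (4096) is an illustrative setting,
     not a prescription. -->
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
```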
Group Overview
group: hdfs-user
categories: hadoop
discussions: 18
posts: 49
users: 24
website: hadoop.apache.org...
irc: #hadoop

24 users for November 2010

Thanh Do: 11 posts
David Rosenstrauch: 7 posts
Todd Lipcon: 5 posts
Evert Lammerts: 3 posts
Jeff Zhang: 2 posts
Ozcan ILIKHAN: 2 posts
Vivekanand Vellanki: 2 posts
Boris Shkolnik: 1 post
Christian Baun: 1 post
Eli Collins: 1 post
Erik Forsberg: 1 post
Fatemeh Panahi: 1 post
Gerrit Jansen van Vuuren: 1 post
Harsh J: 1 post
Jakob Homan: 1 post
Jason Ji: 1 post
Kiss Tibor: 1 post
Konstantin Shvachko: 1 post
Pavel Nuzhdin: 1 post
Philip Zeyliger: 1 post