Are you using Hue 2.0?

There was an upload bug, fixed in Hue 2.1/CDH4.1, that could cause a similar
problem: the upload was creating too many blocks, even for small files (so
not a disk-space problem but a number-of-blocks problem).
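
If you want to verify that this is what is happening, one way (the path below
is just an example) is to run fsck on one of the files you uploaded and look
at how many blocks it reports:

----
hdfs fsck /user/cloudera/myfile.csv -files -blocks
----

With the default block size a ~40 MB file should normally occupy a single
block, so a long list of tiny blocks for such a file would point to this
upload bug rather than to actual disk space.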

Romain


On Sat, Jan 26, 2013 at 12:18 PM, Frederick Tucker wrote:
This is an old post, I know, but I'm having the same issue.

On Thursday, August 30, 2012 8:32:13 AM UTC-4, jrask wrote:

Hi, I recently downloaded and installed CDH4, which I run in VirtualBox on my
Mac.

I am able to run the examples, experiment with them, and make them use my own
files.

BUT... everything works fine until I try to upload larger files (roughly
> 40 MB); then I get this error in
logs/hadoop-hdfs-datanode-localhost.localdomain.log.

------

2012-08-30 10:48:09,023 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-610054326-127.0.0.1-1340249102664:blk_1714736346331226743_7956 received exception org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: Insufficient space for appending to FinalizedReplica, blk_1714736346331226743_7956, FINALIZED
  getNumBytes()      = 35389190
  getBytesOnDisk()   = 35389190
  getVisibleLength() = 35389190
  getVolume()        = /var/lib/hadoop-hdfs/cache/hdfs/dfs/data/current
  getBlockFile()     = /var/lib/hadoop-hdfs/cache/hdfs/dfs/data/current/BP-610054326-127.0.0.1-1340249102664/current/finalized/subdir48/blk_1714736346331226743
  unlinked           = true
2012-08-30 10:48:09,023 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream
java.io.EOFException: Premature EOF: no length prefix available
    at org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:162)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1045)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:943)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
2012-08-30 10:48:09,023 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception
java.lang.NullPointerException
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:510)
2012-08-30 10:48:09,024 ERROR org.apache.hadoop.hdfs.DFSClient: Failed to close file /tmp/hue-uploads/tmp.10.0.100.49.17221047615114504826
java.io.IOException: All datanodes 127.0.0.1:50010 are bad. Aborting...
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:911)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)
2012-08-30 10:48:09,025 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hdfs (auth:SIMPLE) via hue (auth:SIMPLE) cause:java.io.IOException: All datanodes 127.0.0.1:50010 are bad. Aborting...
2012-08-30 10:48:09,088 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: localhost.localdomain:50010:DataXceiver error processing WRITE_BLOCK operation src: /127.0.0.1:41303 dest: /127.0.0.1:50010
org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: Insufficient space for appending to FinalizedReplica, blk_1714736346331226743_7956, FINALIZED
  getNumBytes()      = 35389190
  getBytesOnDisk()   = 35389190
  getVisibleLength() = 35389190
  getVolume()        = /var/lib/hadoop-hdfs/cache/hdfs/dfs/data/current
  getBlockFile()     = /var/lib/hadoop-hdfs/cache/hdfs/dfs/data/current/BP-610054326-127.0.0.1-1340249102664/current/finalized/subdir48/blk_1714736346331226743
  unlinked           = true
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:517)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:491)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:87)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:164)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:365)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:98)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:66)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:189)
    at java.lang.Thread.run(Thread.java:662)
-----

I have been trying to figure out what "Insufficient space for appending to FinalizedReplica" means, but I am unable to find any information about it. I assume it has to do with disk space...?

So the next thing was to run df -h, but there seems to be plenty of disk space left.
--
[cloudera@localhost ~]$ df -h
Filesystem   Size  Used  Avail  Use%  Mounted on
rootfs       9.4G  3.8G   5.6G   41%  /
/dev/root    9.4G  3.8G   5.6G   41%  /
/dev         1.7G   92K   1.7G    1%  /dev
/dev/sda2     95M   17M    77M   18%  /boot
tmpfs        1.7G     0   1.7G    0%  /dev/shm
--
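
(I also wondered whether the datanode reserves part of this space via
dfs.datanode.du.reserved, which df would not show. As far as I understand, the
setting can be looked up in the usual CDH config location, e.g.:

----
grep -A1 du.reserved /etc/hadoop/conf/hdfs-site.xml
----

but I am not sure whether that is relevant here.)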


The next thing I did was run hdfs dfsadmin -report:

----
[cloudera@localhost ~]$ hdfs dfsadmin -report
Configured Capacity: 10079059968 (9.39 GB)
Present Capacity: 6422970368 (5.98 GB)
DFS Remaining: 5979701248 (5.57 GB)
DFS Used: 443269120 (422.73 MB)
DFS Used%: 6.9%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)

Live datanodes:
Name: 127.0.0.1:50010 (localhost.localdomain)
Hostname: localhost.localdomain
Decommission Status : Normal
Configured Capacity: 10079059968 (9.39 GB)
DFS Used: 443269120 (422.73 MB)
Non DFS Used: 3656089600 (3.4 GB)
DFS Remaining: 5979701248 (5.57 GB)
DFS Used%: 4.4%
DFS Remaining%: 59.33%
Last contact: Thu Aug 30 08:16:24 EDT 2012
----

The next issue is that once I have hit this error, I am unable to upload any
file, no matter how small it is. All further attempts to upload files result
in the following error.

2012-08-30 08:28:53,083 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception
java.io.IOException: File /tmp/hue-uploads/tmp.10.0.100.49.13336237847498719520 could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1256)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1977)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:470)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:292)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:42602)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)

    at org.apache.hadoop.ipc.Client.call(Client.java:1161)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:184)
    at $Proxy63.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:84)
    at $Proxy63.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:285)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1104)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:979)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:455)
2012-08-30 08:28:53,085 ERROR org.apache.hadoop.hdfs.DFSClient: Failed to close file /tmp/hue-uploads/tmp.10.0.100.49.13336237847498719520
java.io.IOException: File /tmp/hue-uploads/tmp.10.0.100.49.13336237847498719520 could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1256)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1977)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:470)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:292)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callB


Restarting the datanode restores the functionality again, except that it is
still not possible to upload large files.
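
(By "restart" I mean restarting the hadoop-hdfs-datanode service, e.g. with
the packaged init script on the CDH VM:

----
sudo service hadoop-hdfs-datanode restart
----

in case the exact restart method matters.)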

Please, any ideas would be highly appreciated!

Kind regards, Johan
