Hello, we were importing several TB of data overnight and it seems one of the loads failed. We're running Hadoop 0.18.3 on a 6-node cluster; all nodes are dual quad-core with 6 GB of RAM. We were using hadoop dfs -put to load the data from both the namenode server and the secondary namenode server in parallel. Space is not the issue, as we still have many terabytes remaining.

The load from the namenode is still going; the load from the secondary namenode failed.

This is the error we got:

dfs.DFSClient: Exception in createBlockOutputStream java.net.SocketTimeoutException: 69000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/x.x.x.x:55748 remote=/x.x.x.x:50010]
09/08/19 07:50:45 INFO dfs.DFSClient: Abandoning block blk_8258931159385721568_6046
09/08/19 07:50:59 INFO dfs.DFSClient: Waiting to find target node: x.x.x.x:50010
09/08/19 07:55:19 INFO dfs.DFSClient: Exception in createBlockOutputStream java.net.SocketTimeoutException: 69000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/x.x.x.x:47409 remote=/x.x.x.x:50010]
09/08/19 07:55:19 INFO dfs.DFSClient: Abandoning block blk_-6648842835159477749_6046
09/08/19 07:55:41 INFO dfs.DFSClient: Waiting to find target node: x.x.x.x:50010


I thought maybe whatever configuration value is set to 69000 was too low, but nothing in hadoop-site.xml or hadoop-default.xml uses a value of 69000.
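For what it's worth, one place a 69-second figure could plausibly come from (an assumption on my part based on the 0.18-era DFSClient source, not anything in the config files) is a base socket timeout plus a fixed per-datanode extension for each node in the write pipeline:

```python
# Hypothetical reconstruction of where 69000 ms might come from; the
# constants below are assumed from the 0.18-era DFSClient, not read
# from hadoop-site.xml or hadoop-default.xml.
BASE_TIMEOUT_MS = 60_000       # default dfs.socket.timeout
PER_NODE_EXTENSION_MS = 3_000  # assumed extension per datanode in the pipeline
PIPELINE_NODES = 3             # default replication factor

timeout_ms = BASE_TIMEOUT_MS + PER_NODE_EXTENSION_MS * PIPELINE_NODES
print(timeout_ms)  # 69000, matching the value in the log
```

That would explain why the literal value 69000 never appears in any config file: it would be computed at runtime from the base timeout and the pipeline length.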

Can anyone shed some light on this?
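In case it helps, if the 69 seconds does trace back to the socket timeout, the base value could presumably be raised in hadoop-site.xml. The property name below (dfs.socket.timeout) is the 0.18-era name as I understand it, and the value is purely illustrative, not a tested recommendation:

```xml
<!-- hadoop-site.xml: illustrative override; property name assumed from
     the 0.18-era defaults, value chosen arbitrarily for the example -->
<property>
  <name>dfs.socket.timeout</name>
  <value>180000</value> <!-- 3 minutes, in milliseconds -->
</property>
```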

thanks,
M

Discussion Overview
group: common-user @
categories: hadoop
posted: Aug 19, '09 at 6:04p
active: Aug 20, '09 at 7:09a
posts: 2
users: 2
website: hadoop.apache.org...
irc: #hadoop

2 users in discussion: Mayuran Yogarajah (1 post), Raghu Angadi (1 post)
