I have set up Hadoop on an OpenSuse 11.2 VM using VirtualBox. I ran the Hadoop examples in standalone mode successfully.
Now I want to run in distributed mode using 2 nodes.
Hadoop starts fine and jps lists all the daemons. But when I try to put any file or run any example, I get an error. For example:

hadoop@master:~/hadoop> ./bin/hadoop dfs -copyFromLocal ./input inputsample
10/04/17 14:42:46 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Operation not supported
10/04/17 14:42:46 INFO hdfs.DFSClient: Abandoning block blk_8951413748418693186_1080
....
10/04/17 14:43:04 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
10/04/17 14:43:04 INFO hdfs.DFSClient: Abandoning block blk_838428157309440632_1081
10/04/17 14:43:10 WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2845)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)

10/04/17 14:43:10 WARN hdfs.DFSClient: Error Recovery for block blk_838428157309440632_1081 bad datanode[0] nodes == null
10/04/17 14:43:10 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/hadoop/inputsample/check" - Aborting...
copyFromLocal: Protocol not available
10/04/17 14:43:10 ERROR hdfs.DFSClient: Exception closing file /user/hadoop/inputsample/check : java.net.SocketException: Protocol not available
java.net.SocketException: Protocol not available
at sun.nio.ch.Net.getIntOption0(Native Method)
at sun.nio.ch.Net.getIntOption(Net.java:178)
at sun.nio.ch.SocketChannelImpl$1.getInt(SocketChannelImpl.java:419)
at sun.nio.ch.SocketOptsImpl.getInt(SocketOptsImpl.java:60)
at sun.nio.ch.SocketOptsImpl.sendBufferSize(SocketOptsImpl.java:156)
at sun.nio.ch.SocketOptsImpl$IP$TCP.sendBufferSize(SocketOptsImpl.java:286)
at sun.nio.ch.OptionAdaptor.getSendBufferSize(OptionAdaptor.java:129)
at sun.nio.ch.SocketAdaptor.getSendBufferSize(SocketAdaptor.java:328)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2873)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2826)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)


I can see the files on HDFS through the web interface, but they are empty.
Any suggestions on how I can get past this?
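
A first sanity check in a setup like this (a general suggestion, not something tried in the thread) is to confirm that both datanodes actually registered with the namenode:

hadoop@master:~/hadoop> ./bin/hadoop dfsadmin -report

If the report shows zero live datanodes, the problem is node registration or networking rather than the block-write path shown in the log above.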


  • Manas.tomar at Apr 21, 2010 at 6:40 am
    [repost of the original message above]
  • Steve Loughran at Apr 21, 2010 at 9:32 am

    manas.tomar wrote:
    [...]
    That is a very low-level socket error; I would file a bug report on
    Hadoop and include all machine details, as there is something very odd
    about your underlying machine or network stack that is stopping Hadoop
    from tweaking TCP buffer sizes.
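
Since the stack trace bottoms out in SocketAdaptor.getSendBufferSize, the failing call can be exercised outside Hadoop. Below is a minimal standalone sketch (the BufferSizeCheck class name and the host/port arguments are hypothetical; point it at the namenode or a datanode address) that runs the same NIO call path:

    import java.net.InetSocketAddress;
    import java.nio.channels.SocketChannel;

    // Reproduces the call path from the stack trace above: DFSClient opens
    // an NIO SocketChannel and then queries the socket's send buffer size,
    // which is where "Protocol not available" is thrown.
    public class BufferSizeCheck {
        public static void main(String[] args) throws Exception {
            SocketChannel channel = SocketChannel.open();
            channel.connect(new InetSocketAddress(args[0], Integer.parseInt(args[1])));
            // Same operation as SocketAdaptor.getSendBufferSize in the trace;
            // if the JVM or guest network stack is at fault, it fails here too.
            System.out.println("send buffer size = " + channel.socket().getSendBufferSize());
            channel.close();
        }
    }

If this small test throws the same SocketException on the guest VM, the fault lies in the JVM or the VirtualBox network stack rather than in Hadoop, which is worth noting in any bug report.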
  • Manas.tomar at Apr 22, 2010 at 9:59 am
    ---- On Wed, 21 Apr 2010 15:01:17 +0530 Steve Loughran wrote ----
    [...]
    Thanks.
    Any suggestions on how to narrow down the cause? I want to know whether
    it is Hadoop or my network configuration, i.e. OpenSuse, VirtualBox, or
    Vista, before I file a bug report.
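
One commonly reported trigger for "java.net.SocketException: Protocol not available" in Hadoop clusters is IPv6 in the guest network stack; that is an assumption here, since the thread itself reaches no conclusion. A way to separate the suspects is to force the JVM onto IPv4 and rerun the failing operation, or the standalone check sketched above (master and 9000 are placeholder namenode coordinates):

    java -Djava.net.preferIPv4Stack=true BufferSizeCheck master 9000

For Hadoop itself, the same property can be added to HADOOP_OPTS in conf/hadoop-env.sh. If the error disappears with IPv4 forced, the cause is the OpenSuse/VirtualBox IPv6 stack rather than Hadoop.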
