FAQ
Hi all

I ran into a problem while trying to balance the blocks of a particular HDFS
directory across the cluster. For example, I have a directory "/user/xxx/"
containing 100 blocks, and I want to spread them evenly across my 5-node
cluster so that each node ends up with 40 block replicas (replication factor
2). The problem is transferring a block from one datanode to another. I
followed the Balancer's method, but my code always hangs waiting for the
response from the destination datanode. The code is attached:
.................................................

Socket sock = new Socket();
DataOutputStream out = null;
DataInputStream in = null;

try {
    // Connect to the destination (target) datanode.
    sock.connect(NetUtils.createSocketAddr(target.getName()),
                 HdfsConstants.READ_TIMEOUT);
    sock.setKeepAlive(true);
    System.out.println(sock.isConnected());

    out = new DataOutputStream(new BufferedOutputStream(
        sock.getOutputStream(), FSConstants.BUFFER_SIZE));

    // Send an OP_REPLACE_BLOCK request, following the Balancer's protocol:
    // version, opcode, block id, generation stamp, source storage ID,
    // and the DatanodeInfo of the datanode the target should copy from.
    out.writeShort(DataTransferProtocol.DATA_TRANSFER_VERSION);
    out.writeByte(DataTransferProtocol.OP_REPLACE_BLOCK);
    out.writeLong(block2move.getBlockId());
    out.writeLong(block2move.getGenerationStamp());
    Text.writeString(out, source.getStorageID());
    System.out.println("Ready to move");
    source.write(out);   // here the source also acts as the copy source
    System.out.println("Write to output Stream");
    out.flush();
    System.out.println("out has been flushed!");

    in = new DataInputStream(new BufferedInputStream(
        sock.getInputStream(), FSConstants.BUFFER_SIZE));

    // It stops here, waiting for the destination datanode's response.
    short status = in.readShort();
    System.out.println("Got the response from input stream! " + status);
    if (status != DataTransferProtocol.OP_STATUS_SUCCESS) {
        throw new IOException("block move failed\t" + status);
    }
} catch (IOException e) {
    LOG.warn("Error moving block " + block2move.getBlockId() +
             " from " + source.getName() + " to " + target.getName() +
             " through " + source.getName() + ": " + e.toString());
} finally {
    IOUtils.closeStream(out);
    IOUtils.closeStream(in);
    IOUtils.closeSocket(sock);
}
..........................................

Any reply will be appreciated. Thank you in advance!

Chen
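
[Editor's note] A guess at the cause (my own reading, not confirmed in the thread):
with OP_REPLACE_BLOCK the target datanode first copies the block from the datanode
whose DatanodeInfo is written after the storage ID, and only sends the status back
once that copy finishes, so if the copy never starts, the readShort() above blocks
forever because the socket has no read timeout. A minimal, hypothetical tweak that
at least turns the hang into an exception:

.................................................

// Hypothetical addition, placed right after sock.connect(...): bound
// blocking reads so in.readShort() cannot wait indefinitely.
sock.setSoTimeout(HdfsConstants.READ_TIMEOUT);

.................................................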


  • Icebergs at Mar 4, 2011 at 5:15 am
    Try this command; maybe it will help:
    hadoop fs -setrep -R -w 2 xx
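
    For example (my own extrapolation of the suggestion, not spelled out in the
    reply): temporarily raising the replication factor and then lowering it again
    makes the NameNode create replicas on other datanodes and then trim the
    excess, which can spread a directory's blocks around without any custom
    block-transfer code. On the 5-node cluster from the question, that might
    look like:

    hadoop fs -setrep -R -w 5 /user/xxx    (replicate every block to all 5 nodes)
    hadoop fs -setrep -R -w 2 /user/xxx    (drop back to 2 replicas per block)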

  • He Chen at Mar 4, 2011 at 3:28 pm
    Thank you very much, Icebergs.

    I rewrote the balancer. Now, given a directory such as "/user/foo/", I can
    balance the blocks under this directory evenly across every node in the cluster.

    Best wishes!

    Chen
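
    [Editor's note] Chen's rewritten balancer is not posted, but a minimal sketch
    of its first step (hypothetical code, not Chen's implementation) could use the
    public FileSystem/BlockLocation API to count how many block replicas of the
    directory each datanode holds, and then pick overloaded nodes as sources and
    underloaded nodes as targets for OP_REPLACE_BLOCK requests like the one in the
    original post:

    .................................................
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class DirBlockSurvey {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            // Count replicas per datanode for the blocks under /user/foo
            // (directory name taken from the message above).
            Map<String, Integer> replicasPerNode = new HashMap<String, Integer>();
            for (FileStatus f : fs.listStatus(new Path("/user/foo"))) {
                if (f.isDir()) continue;                 // a real tool would recurse
                BlockLocation[] locs = fs.getFileBlockLocations(f, 0, f.getLen());
                for (BlockLocation loc : locs) {
                    for (String host : loc.getHosts()) { // one count per replica
                        Integer n = replicasPerNode.get(host);
                        replicasPerNode.put(host, n == null ? 1 : n + 1);
                    }
                }
            }
            // Nodes well above the per-node average are candidate move sources,
            // nodes well below it are candidate targets.
            System.out.println(replicasPerNode);
        }
    }
    .................................................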
