FAQ
Not considering replication: if I use the following command from a Hadoop client
outside the cluster (the client is not a datanode):

hadoop dfs -put <localfilename> hdfs://<datanode ip>:50010/<filename>

can I make HDFS place the first block of the file on that specific
datanode?

I tried to do that and I got this error:

put: Call to /xxx.xxx.xxx.xxx(ip of my datanode):50010 failed on local
exception: java.io.EOFException

Any help is greatly appreciated.

--
View this message in context: http://old.nabble.com/Error-when-Using-URI-in--put-command-tp32104146p32104146.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.


  • Rajiv Chittajallu at Jul 21, 2011 at 3:48 am
    The fs URI should be hdfs://<namenode>:<namenode RPC port>/ .


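    A corrected invocation following Rajiv's note, as a sketch (host, port, and paths below are placeholders; 8020 is a common default for the NameNode RPC port, but check fs.default.name in your cluster's core-site.xml):

    ```shell
    # Point the URI at the NameNode's RPC endpoint, not a DataNode's
    # data-transfer port (50010). Host, port, and paths are placeholders.
    hadoop dfs -put localfile.txt hdfs://namenode.example.com:8020/user/cheny/localfile.txt

    # If the client's core-site.xml already names the cluster's default
    # filesystem, the URI prefix can be dropped entirely:
    hadoop dfs -put localfile.txt /user/cheny/localfile.txt
    ```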
  • Harsh J at Jul 21, 2011 at 3:55 am
    Cheny,
    On Thu, Jul 21, 2011 at 7:04 AM, Cheny wrote:
    > Can I make HDFS to locate the first block of the file on that specific
    > datanode?
    No. There is no way to induce this unless you upload from the DN machine itself.

    --
    Harsh J
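    Harsh's answer follows from HDFS's default block placement: the first replica of each block is written to the client's own node when that client runs on a datanode. A workaround sketch, assuming shell access to a datanode host (hostnames, user, and paths are placeholders):

    ```shell
    # Stage the file on the target DataNode host, then run the upload from
    # there, so the first replica of each block lands on that node.
    scp localfile.txt hdfsuser@datanode1.example.com:/tmp/
    ssh hdfsuser@datanode1.example.com \
        'hadoop dfs -put /tmp/localfile.txt /user/cheny/localfile.txt'
    ```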
  • Uma Mahesh at Jul 21, 2011 at 10:34 am
    Hi Cheny,
    When creating a file, the client must first talk to the NameNode. Because
    you gave the destination as a full URI containing the DataNode's IP and
    port, the client treats that address as a NameNode RPC endpoint and tries
    to connect to it. Port 50010 speaks the DataNode data-transfer protocol,
    not NameNode RPC, so the call fails with the EOFException you saw.

    Absolute paths in DFS take the form hdfs://<NN_IP>:<NN_Port>/<filename>;
    a URI with a different authority is treated as a different filesystem.

    Regards,
    Uma


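    One way to confirm which authority the client should use, sketched below (host and port are placeholders; the property name fs.default.name applies to Hadoop 0.20/1.x-era releases):

    ```shell
    # Read the default filesystem URI from the cluster's client config.
    grep -A 1 'fs.default.name' $HADOOP_HOME/conf/core-site.xml

    # Sanity-check the endpoint by listing the root through an explicit URI;
    # if this works, the same authority will work for -put.
    hadoop dfs -ls hdfs://namenode.example.com:8020/
    ```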
  • Eric Payne at Jul 22, 2011 at 3:01 pm
    Hi Cheny,

    I'm pretty sure you should provide the namenode's IP and the namenode's port.

    Something more like this:

    hadoop dfs -put <localfilename> hdfs://<namenode ip>:8020/<filename>

    -Eric

Discussion Overview
group: common-user
categories: hadoop
posted: Jul 21, '11 at 1:34a
active: Jul 22, '11 at 3:01p
posts: 5
users: 5
website: hadoop.apache.org...
irc: #hadoop
