Dear all,
I am running Cloudera Manager 4 on a 2-node cluster. When I try to
start HttpFS, the following errors are shown:
July 5 2012 11:47 AM ERROR org.apache.hadoop.hdfs.DFSClient

Failed to close file /tmp/hue-uploads/tmp.192.168.10.55.4753504131917571218
java.io.IOException: Failed to add a datanode. User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT. (Nodes: current=[192.168.10.15:50010, 192.168.10.85:50010], original=[192.168.10.15:50010, 192.168.10.85:50010])
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:778)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:838)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:934)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)

July 5 2012 11:47 AM ERROR org.apache.hadoop.security.UserGroupInformation

PriviledgedActionException as:hadoop (auth:PROXY) via httpfs (auth:SIMPLE) cause:java.io.IOException: Failed to add a datanode. User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT. (Nodes: current=[192.168.10.15:50010, 192.168.10.85:50010], original=[192.168.10.15:50010, 192.168.10.85:50010])

July 5 2012 11:47 AM WARN httpfsaudit

FAILED [POST:/v1/tmp/hue-uploads/tmp.192.168.10.55.4753504131917571218] response [Internal Server Error] Failed to add a datanode. User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT. (Nodes: current=[192.168.10.15:50010, 192.168.10.85:50010], original=[192.168.10.15:50010, 192.168.10.85:50010])

July 5 2012 11:47 AM WARN
org.apache.hadoop.fs.http.server.HttpFSExceptionProvider

[POST:/v1/tmp/hue-uploads/tmp.192.168.10.55.4753504131917571218] response [Internal Server Error] Failed to add a datanode. User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT. (Nodes: current=[192.168.10.15:50010, 192.168.10.85:50010], original=[192.168.10.15:50010, 192.168.10.85:50010])
java.io.IOException: Failed to add a datanode. User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT. (Nodes: current=[192.168.10.15:50010, 192.168.10.85:50010], original=[192.168.10.15:50010, 192.168.10.85:50010])
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:778)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:838)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:934)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:461)

July 5 2012 11:48 AM INFO httpfsaudit

Proxy user [hue] DoAs user [hadoop]

July 5 2012 11:53 AM INFO httpfsaudit

[/user/bt_data2]

[... dozens of similar httpfsaudit INFO entries follow between 11:53 AM
and 2:11 PM: "Proxy user [hue] DoAs user [hadoop]" (later [hdfs])
alternating with listings of [/], [/user], [/user/hadoop],
[/user/bt_data2], and finally [/tmp/hue-uploads] ...]

Can anyone help me solve this problem?
nice
Arindam

  • bc Wong at Jul 15, 2012 at 11:13 pm

    On Sat, Jul 14, 2012 at 1:29 AM, Arindam wrote:

    Dear all,
    I am running Cloudera Manager 4 on a 2-node cluster. When I try to
    start HttpFS, the following errors are shown:
    [quoted log snipped]
    Hi Arindam,

    It seems that the HttpFS daemon itself is running, but it hits an
    error when you try to upload files from Hue. The error is that HttpFS
    can't add a replacement DataNode to the write pipeline. I'd suggest
    that you try these (a config sketch follows the list):

    * Set the default replication factor to 2.
    * Or add a 3rd node to the cluster.
    * Can you read the filesystem via HttpFS?
    * Can you write to HDFS via the `hadoop' command?
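
    For reference, a minimal hdfs-site.xml sketch covering both the
    replication change and the policy named in the exception; the NEVER
    value and where you place the snippet (hdfs-site.xml or a Cloudera
    Manager safety valve) are illustrative assumptions, not settings
    confirmed in this thread:

      <!-- Match replication to the number of DataNodes (2 here) -->
      <property>
        <name>dfs.replication</name>
        <value>2</value>
      </property>
      <!-- Illustrative: disables the replace-datanode-on-failure check
           that the exception complains about. Acceptable on a tiny
           cluster, risky on larger ones. -->
      <property>
        <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
        <value>NEVER</value>
      </property>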

    Cheers,
    bc
  • Arindam at Jul 17, 2012 at 7:30 am
    Hello bc,
    Thanks for the reply.
    Yes, my daemon is running; I checked with the chkconfig command. But
    Cloudera Manager shows it as stopped. I can also upload a file using
    the hadoop put command, and it works well. But when I try to upload a
    file greater than 2 to 3 GB using the Hue file browser, it fails.
    nice
    Arindam
    On Saturday, July 14, 2012 1:59:53 PM UTC+5:30, Arindam wrote:

    Dear all,
    I am running Cloudera Manager 4 on a 2-node cluster. When I try to
    start HttpFS, the following errors are shown:
    [quoted post and log snipped]
  • Arindam at Jul 17, 2012 at 10:04 am
    Hello bc,
    My HttpFS is started now; I stopped the background httpfs and then
    started it from Cloudera Manager. But the Hue file manager is unable
    to upload large files greater than 500 GB.
    nice
    Arindam
  • bc Wong at Jul 17, 2012 at 4:57 pm

    On Tue, Jul 17, 2012 at 3:04 AM, Arindam wrote:

    Hello bc,
    My HttpFS is started now; I stopped the background httpfs and then
    started it from Cloudera Manager. But the Hue file manager is unable
    to upload large files greater than 500 GB.
    nice
    Arindam
    [bcc: scm-users]

    Hi Arindam,

    Good to know that the role is running. What's your total HDFS capacity
    on your 2-node cluster? Can you upload a 500GB file via the hadoop
    command line? (A sketch of both checks follows.) I'm also including
    the cdh-user list to see if they know what's going on.
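
    A minimal sketch of both checks, assuming a CDH4-era shell client;
    the test file name, size, and HDFS path are placeholders:

      # Report configured/remaining HDFS capacity and per-DataNode status
      sudo -u hdfs hadoop dfsadmin -report

      # Create a large local test file, then copy it into HDFS the same
      # way an upload would land, and verify it arrived
      dd if=/dev/zero of=/tmp/big.bin bs=1M count=4096
      hadoop fs -put /tmp/big.bin /tmp/big.bin
      hadoop fs -ls /tmp/big.bin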

    Cheers,
    bc
  • bc Wong at Jul 18, 2012 at 6:24 pm

    On Tuesday, July 17, 2012 9:57:08 AM UTC-7, bc Wong wrote:
    On Tue, Jul 17, 2012 at 3:04 AM, Arindam wrote:
    [earlier messages quoted above, snipped]
    Hi Arindam,

    Can you try `curl' to upload the file via HttpFS? See
    <http://hadoop.apache.org/common/docs/r1.0.3/webhdfs.html#CREATE>. Use
    your HttpFS host and port. (HttpFS and WebHDFS use the same protocol.)
    A sketch of the two-step upload follows.
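
    A minimal sketch of the two-step CREATE flow from the linked doc;
    httpfs-host, port 14000, user.name=hadoop, and the file paths are
    placeholder assumptions:

      # Step 1: request an upload location; the response is a redirect
      # whose Location header says where to send the data
      curl -i -X PUT \
        "http://httpfs-host:14000/webhdfs/v1/tmp/big.bin?op=CREATE&user.name=hadoop"

      # Step 2: PUT the file body to the URL returned in the Location
      # header (HttpFS redirects back to itself with data=true and
      # requires this Content-Type)
      curl -i -X PUT -T /tmp/big.bin \
        -H "Content-Type: application/octet-stream" \
        "<Location-URL-from-step-1>"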

    Btw, please cc the mailing list when you reply. It's good to keep a
    reference for others.

    Cheers,
    bc
