issues with hadoop in AIX

Hi,

I am evaluating Hadoop's portability across heterogeneous hardware and
software platforms. For this I am trying to set up a grid (Hadoop 0.17)
spanning Linux (RHEL 5 / FC 9), Solaris (SunOS 5), and AIX (5.3). I was able
to set up a grid with 10 Linux machines and run some basic grid jobs on it. I
was also able to set up and start a standalone grid on an AIX machine. But
when I try to copy data into this (AIX) grid, I get the following error:



$ /opt/hadoop/bin/hadoop dfs -ls
ls: Cannot access .: No such file or directory.

$ /opt/hadoop/bin/hadoop dfs -copyFromLocal option_100_data.csv option_100_data.csv
08/08/13 02:36:50 INFO dfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/option_100_data.csv could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1145)
        at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:300)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:615)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:446)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:896)

        at org.apache.hadoop.ipc.Client.call(Client.java:557)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:212)
        at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:615)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2334)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2219)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1700(DFSClient.java:1702)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1842)

08/08/13 02:36:50 WARN dfs.DFSClient: NotReplicatedYetException sleeping /user/hadoop/option_100_data.csv retries left 4
08/08/13 02:36:51 INFO dfs.DFSClient: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/option_100_data.csv could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1145)
        ...
08/08/13 02:36:57 WARN dfs.DFSClient: Error Recovery for block null bad datanode[0]
$



This is the only error I am seeing in the log files. My first suspicion was
that the problem was caused by the IBM JDK, but to rule that out I ran Hadoop
with the IBM JDK on a Linux (FC 9) machine and it worked perfectly.

It would be a great help if someone could give me some pointers on what I can
try to solve or debug this error.
Thanks,
Arun
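
For readers who hit the same symptom: "could only be replicated to 0 nodes"
generally means the NameNode has no live DataNodes registered, so the write
fails before any platform difference in the data path even matters. Below is
a minimal sketch of that check from a Java client, written against the
0.17-era org.apache.hadoop.dfs package shown in the stack trace above; the
getDataNodeStats() and getName() calls are assumed to exist in that old API
(it is the same information `hadoop dfsadmin -report` prints), so treat this
as illustrative rather than a verified recipe.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.dfs.DatanodeInfo;
    import org.apache.hadoop.dfs.DistributedFileSystem;
    import org.apache.hadoop.fs.FileSystem;

    public class LiveDatanodeCheck {
        public static void main(String[] args) throws Exception {
            // Picks up hadoop-site.xml from the classpath, i.e. the same
            // configuration the failing `hadoop dfs -copyFromLocal` used.
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            if (!(fs instanceof DistributedFileSystem)) {
                System.err.println("fs.default.name is not an HDFS URI: "
                    + conf.get("fs.default.name"));
                return;
            }
            DatanodeInfo[] nodes = ((DistributedFileSystem) fs).getDataNodeStats();
            System.out.println("DataNodes registered with the NameNode: " + nodes.length);
            for (int i = 0; i < nodes.length; i++) {
                System.out.println("  " + nodes[i].getName());
            }
            // Zero nodes here matches the NotReplicatedYetException in the
            // transcript: the DataNode on the AIX box never registered, so its
            // log (logs/hadoop-*-datanode-*.log) is the place to look next.
        }
    }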


  • Ps40 at Dec 24, 2008 at 4:27 pm
    Hi,

    I saw that a fix was created for this issue. Were you able to run Hadoop on
    AIX after this? We are in a similar situation and are wondering whether
    Hadoop will work on AIX and Solaris.

    Thanks


  • Brian Bockelman at Dec 24, 2008 at 7:28 pm
    Hey,

    I can attest that Hadoop works on Solaris 10 just fine.

    Brian
  • Ps40 at Dec 25, 2008 at 5:15 pm
    Thanks for replying. Are you guys using Hadoop on Solaris in a production
    environment?



  • Brian Bockelman at Dec 27, 2008 at 1:30 am

    On Dec 25, 2008, at 11:14 AM, ps40 wrote:
    Thanks for replying. Are you guys using Hadoop on Solaris in a production environment?

    Yes! It's worked well so far (we switched our first Solaris server onto
    Hadoop about 2 weeks ago, so it's not as if we have a plethora of
    experience). We're running it on a Sun "Thumper" with a large ZFS RAID
    pool. We are discussing turning on a second Thumper in a non-RAID
    configuration, and will be observing the difference.

    It's all Sun Java, so we don't really notice the difference. I don't have
    any experience using the IBM JDK, so AIX might be a tougher nut to crack.

    Brian
  • Arun Venugopal at Dec 27, 2008 at 6:18 am
    Hi!

    Yes, I was able to run this on AIX as well, with a minor change to the
    DF.java code. But this was more of a proof of concept than a production
    system.

    Regards,
    Arun
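
    (Arun's actual patch is not shown in this thread. As a rough illustration
    of the kind of platform dependence involved: Hadoop's DF helper shells out
    to the Unix `df` utility and picks fields out of its output by position,
    and AIX's `df -k` lays its columns out differently from Linux or Solaris.
    The class below is a hypothetical sketch of that sort of handling, not the
    change that went into Hadoop.)

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class PortableDf {
            // Returns the available/free kilobytes on the filesystem holding `path`.
            public static long availableKb(String path) throws Exception {
                // Force kilobyte units; plain `df` on AIX reports 512-byte blocks.
                Process p = Runtime.getRuntime().exec(new String[] {"df", "-k", path});
                BufferedReader in =
                    new BufferedReader(new InputStreamReader(p.getInputStream()));
                in.readLine();                 // skip the header line
                String data = in.readLine();
                if (data == null) {
                    throw new Exception("unexpected df output for " + path);
                }
                String[] fields = data.trim().split("\\s+");
                // Linux/Solaris `df -k`: Filesystem 1K-blocks Used Available Use% Mounted on
                // AIX `df -k`:           Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
                boolean aix = System.getProperty("os.name").toLowerCase().indexOf("aix") >= 0;
                return Long.parseLong(fields[aix ? 2 : 3]);
            }
        }
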
  • Allen Wittenauer at Dec 28, 2008 at 5:00 am

    On 12/27/08 12:18 AM, "Arun Venugopal" wrote:
    Yes, I was able to run this on AIX as well with a minor change to the
    DF.java code. But this was more of a proof of concept than on a
    production system.
    There are lots of places where Hadoop (esp. in contrib) interprets the
    output of Unix command line utilities. Changes like this are likely going to
    be required for AIX and other Unix systems that aren't being used by a
    committer. :(
  • Steve Loughran at Jan 5, 2009 at 11:30 am

    Allen Wittenauer wrote:
    Changes like this are likely going to be required for AIX and other Unix
    systems that aren't being used by a committer. :(

    ...or that aren't being used in the test process, equally importantly.

    I think Hudson runs on Solaris, doesn't it?
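
    (Following on from the test-process point: the `df` parsing can be
    exercised against canned output captured from each OS, so a Linux-only
    build machine can still catch AIX or Solaris regressions. The helper,
    sample lines, and expected values below are invented for illustration;
    this is not Hadoop test code.)

        public class DfParsingCheck {
            // Extracts the "available KB" field given the platform's column index.
            static long parseAvailableKb(String dataLine, int availableField) {
                return Long.parseLong(dataLine.trim().split("\\s+")[availableField]);
            }

            public static void main(String[] args) {
                // Canned `df -k` data lines (values made up for the example).
                String linux = "/dev/sda1  103081248  41022004  56815180  42% /";
                String aix   = "/dev/hd4     2097152   1048576   50%   3612   1% /";
                check(parseAvailableKb(linux, 3) == 56815180L, "Linux Available column");
                check(parseAvailableKb(aix, 2) == 1048576L, "AIX Free column");
                System.out.println("df parsing checks passed");
            }

            static void check(boolean ok, String what) {
                if (!ok) throw new AssertionError("df parsing broke for: " + what);
            }
        }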
  • Ps40 at Dec 29, 2008 at 8:57 pm
    Arun,

    There was a fix checked in for this and apparently it applies to 0.18.1. We
    don't see this code change in 0.18.2. We are assuming that the fix will be
    in 0.18.3. Is there a date for the release of 0.18.3?

    Thanks





Discussion Overview
group: common-user
categories: hadoop
posted: Aug 13, '08 at 9:22a
active: Jan 5, '09 at 11:30a
posts: 9
users: 5
website: hadoop.apache.org...
irc: #hadoop
