Grokbase Groups: Hive user, July 2011
Subject: dynamic partition import
Hello,

I can't import files with dynamic partitioning. The query looks like this:

FROM cost c
INSERT OVERWRITE TABLE costp PARTITION (accountId, day)
SELECT c.clientId, c.campaign, c.accountId, c.day
DISTRIBUTE BY c.accountId, c.day

The strange thing is: sometimes it works, and sometimes MapReduce fails with something like:

Caused by: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /hive/tmp/hive_2011-07-12_17-51-24_300_3182373884335902984/_tmp.-ext-10000/accountid=536/day=2010-09-01/_tmp.000000_1 could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1469)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:649)
at sun.reflect.GeneratedMethodAccessor37.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1415)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1411)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1409)

at org.apache.hadoop.ipc.Client.call(Client.java:1104)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
at $Proxy1.addBlock(Unknown Source)
at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy1.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3185)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3055)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1900(DFSClient.java:2305)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2500)

My Hive config looks like this:
<property>
<name>hive.exec.max.created.files</name>
<value>150000</value>
</property>
<property>
<name>hive.exec.max.dynamic.partitions.pernode</name>
<value>50000</value>
</property>
<property>
<name>hive.exec.dynamic.partition</name>
<value>true</value>
</property>
<property>
<name>hive.exec.dynamic.partition.mode</name>
<value>nonstrict</value>
</property>
<property>
<name>hive.exec.max.dynamic.partitions</name>
<value>100000</value>
</property>
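In case it helps with reproducing this, the same knobs can also be set per session instead of in hive-site.xml. A minimal sketch, assuming the `hive` CLI is on the PATH; the table and column names are taken from the query above:

```shell
# Session-level equivalents of the hive-site.xml settings above,
# written to a script file for the Hive CLI. A sketch only: the
# table and column names follow the query in this post.
cat > /tmp/dynpart.hql <<'EOF'
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.exec.max.dynamic.partitions.pernode=50000;

FROM cost c
INSERT OVERWRITE TABLE costp PARTITION (accountId, day)
SELECT c.clientId, c.campaign, c.accountId, c.day
DISTRIBUTE BY c.accountId, c.day;
EOF
# hive -f /tmp/dynpart.hql   # assumes `hive` is on PATH
```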

The network is fine... I'm running an HBase cluster (3 datanodes at the moment) with no problems so far.
Any clues?
Thanks
Malte


  • Labtrax at Jul 13, 2011 at 7:46 am
    Hi,

    I always get

    java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0) {"key":{},"value":{"_col0":"1129","_col1":"Campaign","_col2":"34811433","_col3":"group","_col4":"1271859453","_col5":"Soundso","_col6":"93709590","_col7":"BROAD","_col8":"SEARCH","_col9":"000","_col10":"1","_col11":"0","_col12":"2.0","_col13":"313","_col14":"2009-12-31"},"alias":0}
    at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:268)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:468)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:416)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)
    Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0) {"key":{},"value":{"_col0":"1129","_col1":"Campaign","_col2":"34811433","_col3":"group","_col4":"1271859453","_col5":"Soundso","_col6":"93709590","_col7":"BROAD","_col8":"SEARCH","_col9":"000","_col10":"1","_col11":"0","_col12":"2.0","_col13":"313","_col14":"2009-12-31"},"alias":0}
    at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:256)
    ... 7 more
    Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /hive/tmp/hive_2011-07-13_09-21-49_559_5867862827566422839/_tmp.-ext-10000/accountid=313/day=2009-12-31/_tmp.000001_0 could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1469)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:649)
    at sun.reflect.GeneratedMethodAccessor37.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1415)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1411)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1409)

    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:576)
    at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
    at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:744)
    at org.apache.hadoop.hive.ql.exec.ExtractOperator.processOp(ExtractOperator.java:45)
    at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
    at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:247)
    ... 7 more
    Caused by: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /hive/tmp/hive_2011-07-13_09-21-49_559_5867862827566422839/_tmp.-ext-10000/accountid=313/day=2009-12-31/_tmp.000001_0 could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1469)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:649)
    at sun.reflect.GeneratedMethodAccessor37.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1415)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1411)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1409)

    at org.apache.hadoop.ipc.Client.call(Client.java:1104)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy1.addBlock(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy1.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3185)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3055)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1900(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2500)

    or

    java.lang.RuntimeException: Hive Runtime Error while closing operators: java.io.IOException: All datanodes 192.168.111.13:50010 are bad. Aborting...
    at org.apache.hadoop.hive.ql.exec.ExecReducer.close(ExecReducer.java:311)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:478)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:416)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)
    Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: All datanodes 192.168.111.13:50010 are bad. Aborting...
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.closeWriters(FileSinkOperator.java:171)
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:642)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:557)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
    at org.apache.hadoop.hive.ql.exec.ExecReducer.close(ExecReducer.java:303)
    ... 7 more
    Caused by: java.io.IOException: All datanodes 192.168.111.13:50010 are bad. Aborting...
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2774)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)


    Some MapReduce jobs finish successfully, some do not. They all run with 3 up to about 9 of these errors on different datanodes.
    I set hive.exec.max.created.files to 1000000, but most queries still end up with FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
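For what it's worth, "could only be replicated to 0 nodes" and "All datanodes ... are bad" usually mean no datanode could accept the block: full disks, dead nodes, or exhausted file descriptors (dynamic partitioning holds one open file per partition per reducer). A rough triage sketch, assuming a Hadoop 0.20/CDH3-era CLI:

```shell
# Rough triage for "could only be replicated to 0 nodes" during heavy
# dynamic-partition writes. Assumes a Hadoop 0.20/CDH3-era CLI; the
# 32768 threshold mirrors the limits suggested in the HBase book.

# 1. Are any datanodes live, and do they have free space?
hadoop dfsadmin -report | grep -E 'Datanodes available|DFS Remaining' || true

# 2. Check the open-file soft limit for the current user; dynamic
#    partitioning keeps one open file per partition per reducer.
soft=$(ulimit -Sn)
echo "nofile soft limit: $soft"
if [ "$soft" != "unlimited" ] && [ "$soft" -lt 32768 ]; then
  echo "WARN: $soft may be too low for many open partition files"
fi
```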




  • Labtrax at Jul 13, 2011 at 10:37 am
    It seems that the more dynamic partitions are imported, the fewer I am able to import, or rather the smaller the files have to be.
    Any clues?

    -------- Original Message --------
    Date: Wed, 13 Jul 2011 09:45:27 +0200
    From: "labtrax" <hive1@gmx.de>
    To: user@hive.apache.org
    Subject: Re: dynamic partition import

    [quoted stack traces snipped -- identical to the previous post]
  • Hadoopman at Jul 13, 2011 at 2:28 pm
    I'm beginning to suspect this myself. We have an import job with
    many smaller files. We've been merging them into a single log file and
    partitioning by day; however, I've seen this and other errors (usually
    memory-related) reported by Hive, and the load fails.

    Our latest error came from not having enough partitions per node set in
    Hive (hive.exec.max.dynamic.partitions.pernode, currently 1000). Increasing
    the setting still gives the same error; however, I've noticed that by
    loading fewer logs at a time I avoid the dynamic-partition errors (and
    thus the job failing).

    I have to keep reminding myself NOT to think of hive/hadoop like a
    database (though that is my background ::grinz::)

    If you find the solution to this I'd be very interested. It's hit me
    from time to time as well :-)

    Thanks!

    On 07/13/2011 04:36 AM, labtrax wrote:
    It seems that the more dynamic partitions are imported, the fewer I am able to import, or rather the smaller the files have to be.
    Any clues?

    [snip]
    Some MapReduce jobs finish successfully, some do not. They all run
    with 3 up to about 9 of these errors on different datanodes.
    I set hive.exec.max.created.files to 1000000, but most queries still
    end up with FAILED: Execution Error, return code 2 from
    org.apache.hadoop.hive.ql.exec.MapRedTask




  • Labtrax at Jul 14, 2011 at 1:24 pm
    So we got it, I hope!
    We had taken care of the ulimit max-open-files setting (e.g. section 1.3.1.6.1, "ulimit on Ubuntu": http://hbase.apache.org/book/notsoquick.html). But after switching from "native" Hadoop to the Cloudera distribution CDH3u0, we neglected to do this for the users "hdfs", "hbase" AND "mapred". A single entry for the user "hdfs" won't fix it.
    We added the following to /etc/security/limits.conf (Ubuntu 10.10 Server); the root and zookeeper entries might be unnecessary:

    hdfs - nofile 32768
    hdfs soft/hard nproc 32000
    hbase - nofile 32768
    hbase soft/hard nproc 32000
    root - nofile 32768
    root soft/hard nproc 32000
    mapred - nofile 32768
    mapred soft/hard nproc 32000
    zookeeper - nofile 32768
    zookeeper soft/hard nproc 32000
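After editing limits.conf, it may be worth verifying that the new limits actually apply to each daemon user. A sketch; it assumes it is run as root and that the CDH3-style users exist (it prints "n/a" where a user is missing or su is not permitted):

```shell
# Print the effective open-file limit for each Hadoop daemon user.
# Hypothetical check script; prints "n/a" if a user is missing or
# su is not permitted from the current account.
for u in hdfs mapred hbase; do
  n=$(su - "$u" -c 'ulimit -n' </dev/null 2>/dev/null) || n="n/a"
  echo "$u nofile limit: ${n:-n/a}"
done
```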

Discussion Overview
group: user@hive.apache.org
categories: hive, hadoop
posted: Jul 12, 2011 at 4:06 PM
active: Jul 14, 2011 at 1:24 PM
posts: 5
users: 2 (Labtrax: 4 posts, Hadoopman: 1 post)