how to set the number of mappers with 0 reducers?
Hi,



I have a Hadoop job running over 50k files, each of which is about
500 MB.

I need to extract a tiny amount of information from each file, and no
reducer is needed.

However, the output from the mappers results in many small files (each
~50 KB, while the block size is 64 MB, so a lot of space is wasted).

How can I set the number of mappers (say 100)?

If there is no way to set the number of mappers, is the only way to
solve it to "cat" some files together?



Many Thanks,

Wei


  • Harsh J at Sep 20, 2011 at 8:35 am
    Hello Wei!

    On Tue, Sep 20, 2011 at 1:25 PM, Peng, Wei wrote:
    (snip)
    > However, the output from the mappers results in many small files (each
    > ~50 KB, while the block size is 64 MB, so a lot of space is wasted).
    >
    > How can I set the number of mappers (say 100)?
    What you're looking for is to 'pack' several files per mapper, if I
    get it right.

    In that case, you need to check out the CombineFileInputFormat. It can
    pack several files per mapper (with some degree of locality).
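
    Below is a minimal sketch of that approach, assuming a later Hadoop
    release where the mapreduce-API CombineTextInputFormat is available (on
    0.20-era releases you instead subclass CombineFileInputFormat and supply
    your own RecordReader). The class names and the "INTERESTING" filter are
    invented for illustration.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class PackedExtractJob {

      // Placeholder map logic: keep only the lines we care about.
      public static class ExtractMapper
          extends Mapper<LongWritable, Text, Text, NullWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
          if (value.toString().contains("INTERESTING")) {
            context.write(value, NullWritable.get());
          }
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "packed-extract");
        job.setJarByClass(PackedExtractJob.class);

        // Pack many input files into each split, capped at ~1 GB per split,
        // so the job runs far fewer mappers (and writes far fewer files).
        job.setInputFormatClass(CombineTextInputFormat.class);
        FileInputFormat.setMaxInputSplitSize(job, 1024L * 1024 * 1024);

        job.setMapperClass(ExtractMapper.class);
        job.setNumReduceTasks(0); // map-only, as in this thread
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }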

    Alternatively, pass a list of files (as a text file) as your input,
    and have your Mapper logic read them one by one. This way, if you
    divide 50k filenames over 100 files, you will get 100 mappers as you
    want - but at the cost of losing almost all locality.
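
    A rough sketch of that second approach (only a sketch; the mapper name
    and filter are invented): feed the job a small text file of HDFS paths,
    use NLineInputFormat to hand a fixed number of paths to each map task,
    and let the mapper open and scan each listed file itself.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class FileListMapper
        extends Mapper<LongWritable, Text, Text, NullWritable> {

      @Override
      protected void map(LongWritable offset, Text pathLine, Context context)
          throws IOException, InterruptedException {
        // Each input record is one HDFS path; open it and scan it ourselves.
        Path path = new Path(pathLine.toString().trim());
        FileSystem fs = path.getFileSystem(context.getConfiguration());
        BufferedReader reader =
            new BufferedReader(new InputStreamReader(fs.open(path)));
        try {
          String line;
          while ((line = reader.readLine()) != null) {
            if (line.contains("INTERESTING")) { // placeholder filter
              context.write(new Text(line), NullWritable.get());
            }
          }
        } finally {
          reader.close();
        }
      }
    }

    // Driver side (sketch): 50,000 paths / 500 paths per split = 100 mappers.
    // job.setInputFormatClass(NLineInputFormat.class);
    // NLineInputFormat.setNumLinesPerSplit(job, 500);
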
    > If there is no way to set the number of mappers, is the only way to
    > solve it to "cat" some files together?
    Concatenating is an alternative, if affordable - yes. You can lower
    the file count (down from 50k) this way.

    --
    Harsh J
  • Soumya Banerjee at Sep 20, 2011 at 9:06 am
    Hi,

    If you want all your map output in a single file, you can use an
    IdentityReducer and set the number of reducers to 1.
    This ensures that all of the mapper output goes to that one reducer,
    which writes it into a single file.
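
    For reference, a rough sketch of that setup with the old mapred API this
    thread is using (class names other than the Hadoop ones are invented;
    with the newer mapreduce API, the base Reducer class is already the
    identity):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.lib.IdentityMapper;
    import org.apache.hadoop.mapred.lib.IdentityReducer;

    public class SingleOutputFileJob {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(SingleOutputFileJob.class);
        conf.setJobName("single-output-file");

        conf.setMapperClass(IdentityMapper.class);   // stand-in for the real map logic
        conf.setReducerClass(IdentityReducer.class); // pass map output straight through
        conf.setNumReduceTasks(1);                   // one reducer => one output file

        // Types below match IdentityMapper over the default TextInputFormat;
        // adjust them to whatever the real mapper emits.
        conf.setOutputKeyClass(LongWritable.class);
        conf.setOutputValueClass(Text.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
      }
    }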

    Soumya
    On Tue, Sep 20, 2011 at 2:04 PM, Harsh J wrote:
    (snip)
  • Peng, Wei at Sep 20, 2011 at 3:23 pm
    Thank you all for the quick reply!!

    I think I was wrong. It has nothing to do with the number of mappers,
    because each input file is 500 MB, which is not small relative to the
    64 MB block size.

    The problem is that the output from each mapper is too small. Is there
    a way to combine the output of several mappers together? Setting the
    number of reducers to 1 might produce one very large file. Can I set
    the number of reducers to 100, but skip the sorting, shuffling, etc.?

    Wei

    -----Original Message-----
    From: Soumya Banerjee
    Sent: Tuesday, September 20, 2011 2:06 AM
    To: common-user@hadoop.apache.org
    Subject: Re: how to set the number of mappers with 0 reducers?
    (snip)
  • GOEKE, MATTHEW (AG/1000) at Sep 20, 2011 at 3:34 pm
    Amusingly this is almost the same question that was asked the other day :)

    <quote from Owen O'Malley>
    There isn't currently a way of getting a collated, but unsorted list of key/value pairs. For most applications, the in memory sort is fairly cheap relative to the shuffle and other parts of the processing.
    </quote>

    If you know that you will be filtering out a significant amount of information, to the point where the shuffle will be trivial, then the impact of a reduce phase using an identity reducer should be minimal. The alternative is to aggregate as much data as you are comfortable with into each split and have one file per map.

    How much data/percentage of input are you assuming will be output from each of these maps?

    Matt

    -----Original Message-----
    From: Peng, Wei
    Sent: Tuesday, September 20, 2011 10:22 AM
    To: common-user@hadoop.apache.org
    Subject: RE: how to set the number of mappers with 0 reducers?
    (snip)
  • Peng, Wei at Sep 20, 2011 at 3:45 pm
    The input is 9010 files (each 500MB), and I would estimate the output to
    be around 50GB.
    My Hadoop job failed with an out-of-memory error (with 66 reducers). I
    suspect that the keys in the mapper output are all unique, so the
    sorting would be memory-intensive.
    Although I could use a different key to reduce the number of unique
    keys, I am curious whether there is a way to disable sorting/shuffling.

    Thanks,
    Wei

    -----Original Message-----
    From: GOEKE, MATTHEW (AG/1000)
    Sent: Tuesday, September 20, 2011 8:34 AM
    To: common-user@hadoop.apache.org
    Subject: RE: how to set the number of mappers with 0 reducers?
    (snip)
  • GOEKE, MATTHEW (AG/1000) at Sep 20, 2011 at 4:08 pm
    There is currently no way to disable sort/shuffle. You can do many things to alleviate any issues you have with it, though, one of which you mentioned below. Is there a reason why you are allowing each of your keys to be unique? If it is truly because you do not care about them, then just assign keys from an even distribution to allow for more aggregation.
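
    A rough sketch of that bucketing idea (names invented; this only applies
    if you genuinely do not care which records group together): the mapper
    emits one of a small, fixed set of keys instead of a unique key per
    record, so the shuffle has a bounded, evenly distributed key space.

    import java.io.IOException;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class BucketedKeyMapper
        extends Mapper<LongWritable, Text, IntWritable, Text> {

      private static final int NUM_BUCKETS = 100; // roughly one bucket per reducer
      private final IntWritable bucket = new IntWritable();

      @Override
      protected void map(LongWritable offset, Text record, Context context)
          throws IOException, InterruptedException {
        // Spread records evenly over NUM_BUCKETS keys; an identity reducer
        // then writes each bucket back out as one modestly sized file.
        bucket.set((record.hashCode() & Integer.MAX_VALUE) % NUM_BUCKETS);
        context.write(bucket, record);
      }
    }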

    On a side note, what is the actual stack trace you are getting when the reducers fail and what is the reducer doing? I think for your use case using a reduce phase is the best way to go, as long as the job time meets your SLA, so we need to figure out why the job is failing.

    Matt

    -----Original Message-----
    From: Peng, Wei
    Sent: Tuesday, September 20, 2011 10:44 AM
    To: common-user@hadoop.apache.org
    Subject: RE: how to set the number of mappers with 0 reducers?
    (snip)
  • Peng, Wei at Sep 20, 2011 at 4:39 pm
    Thanks, Matthew! Actually, the key is the timestamp; I want the output
    sorted from earliest to latest.

    The namenode log shows the following error:
    Exception in thread "LeaseChecker" java.lang.OutOfMemoryError: Java heap space
        at java.io.BufferedOutputStream.<init>(BufferedOutputStream.java:42)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:318)
        at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:859)
        at org.apache.hadoop.ipc.Client.call(Client.java:719)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
        at $Proxy4.renewLease(Unknown Source)
        at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at $Proxy4.renewLease(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.renew(DFSClient.java:1016)
        at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.run(DFSClient.java:1028)
        at java.lang.Thread.run(Thread.java:619)
    Exception in thread "92361528@qtp0-1" java.lang.OutOfMemoryError: Java heap space
    Exception in thread "ResponseProcessor for block blk_978156393275339316_12912516" java.lang.OutOfMemoryError: Java heap space
        at java.util.HashMap.addEntry(HashMap.java:753)
        at java.util.HashMap.put(HashMap.java:385)
        at sun.nio.ch.EPollSelectorImpl.implRegister(EPollSelectorImpl.java:143)
        at sun.nio.ch.SelectorImpl.register(SelectorImpl.java:115)
        at java.nio.channels.spi.AbstractSelectableChannel.register(AbstractSelectableChannel.java:180)
        at java.nio.channels.SelectableChannel.register(SelectableChannel.java:254)
        at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:331)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
        at java.io.DataInputStream.readFully(DataInputStream.java:178)
        at java.io.DataInputStream.readLong(DataInputStream.java:399)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2367)


    Wei
    -----Original Message-----
    From: GOEKE, MATTHEW (AG/1000)
    Sent: Tuesday, September 20, 2011 9:08 AM
    To: common-user@hadoop.apache.org
    Subject: RE: how to set the number of mappers with 0 reducers?
    (snip)
