FileSystem.close() using Threads !!!
Hi,

I have two threads that copy a file from HDFS and then delete the directory after copying the file.

In both threads I use "FileSystem hdfs = FileSystem.get(conf);". Once I finish copying and deleting, I close the filesystem (hdfs.close() in the finally block).
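
For context, a minimal sketch of the pattern described above (the class name, paths, and error handling are assumptions, not the actual code):

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Hypothetical reconstruction of what each thread does.
    public class CopyTask implements Runnable {
        private final Configuration conf;
        private final Path src;   // file on HDFS
        private final Path dst;   // local destination

        public CopyTask(Configuration conf, Path src, Path dst) {
            this.conf = conf;
            this.src = src;
            this.dst = dst;
        }

        @Override
        public void run() {
            FileSystem hdfs = null;
            try {
                hdfs = FileSystem.get(conf);           // both threads receive the same cached instance
                hdfs.copyToLocalFile(src, dst);        // copy the file out of HDFS
                hdfs.delete(src.getParent(), true);    // delete the source directory after copying
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                if (hdfs != null) {
                    try {
                        hdfs.close();                  // closes the instance the other thread is still using
                    } catch (IOException ignored) {
                    }
                }
            }
        }
    }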

If one of the threads calls FileSystem.close() while the other thread is still copying, the other thread stops copying and throws an error:


java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:226)
at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:67)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.close(DFSClient.java:1678)
at java.io.FilterInputStream.close(FilterInputStream.java:155)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:58)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:209)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:142)
at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1216)
at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1197)


Should I NOT call FileSystem.close() in the finally block? How do I solve this issue?


Cheers,
Karthik


  • Uma Maheswara Rao G 72686 at Jun 15, 2011 at 3:36 pm
    Hi Karthik,

    FileSystem caches its instances. Based on the scheme and URL (authority) information, it forms a key and returns the filesystem from that cache.

    You can close the fs only after you have completed all of your operations with the filesystem object. Even if you call FileSystem.get(conf) twice, it will return the same object, so you can't close it like that.

    (or) have each thread initialize its own DistributedFileSystem separately and use that.

    (or) use FileSystem.newInstance(conf), which bypasses the cache; see the sketch below.
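
    A minimal sketch of the difference (class name, variable names, and paths are placeholders, not from the original mails):

        import java.io.IOException;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class FsCacheDemo {
            public static void main(String[] args) throws IOException {
                Configuration conf = new Configuration();

                // FileSystem.get(conf) returns the shared, cached instance:
                FileSystem a = FileSystem.get(conf);
                FileSystem b = FileSystem.get(conf);
                System.out.println(a == b);      // true (with caching enabled): closing "a" also closes "b"

                // FileSystem.newInstance(conf) bypasses the cache, so each thread
                // can own and close its instance independently:
                FileSystem own = FileSystem.newInstance(conf);
                try {
                    Path src = new Path("/data/input/part-00000");   // hypothetical paths
                    Path dst = new Path("/tmp/local-copy");
                    own.copyToLocalFile(src, dst);
                    own.delete(src.getParent(), true);
                } finally {
                    own.close();                 // affects only this instance
                }
            }
        }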


    Regards,
    Uma Mahesh



Discussion Overview
group: hdfs-dev
categories: hadoop
posted: Jun 15, '11 at 3:08p
active: Jun 15, '11 at 3:36p
posts: 2
users: 2
website: hadoop.apache.org...
irc: #hadoop
