Too many open files issue
Hi,

I had requested help on an issue we have been facing with the "Too many
open files" exception corrupting the search indexes and crashing the
search on the web site.
You had suggested that we look at the articles on the O'Reilly Network
that deal with this exact problem.
One of the suggestions was to increase the limit on the number of file
descriptors on the file system. We tried this by first lowering the limit
from 256 to 200 in order to reproduce the exception. The exception did
reproduce, but even after we raised the limit to 500 it kept occurring;
only after several rounds of rebuilding the index did we finally get it
working at the default file descriptor limit of 256. This makes us wonder
whether your first suggestion, optimizing the indexes, is a prerequisite
to trying this option.

Another piece of relevant information is that we have the default merge
factor of 10.

Could you give us pointers on what we are doing wrong, or should we be
trying something completely different?

Thanks and regards
Neelam Bhatnagar


  • Will Allen at Nov 22, 2004 at 7:54 pm
    If you are on Linux, the number of file handles allowed per session is much lower than the limit for the whole machine; "ulimit -n" will show you the current value. There are instructions on the web for changing this setting; it involves editing /etc/security/limits.conf and setting the values for "nofile".

    (bulkadm is my user)

    bulkadm soft nofile 8192
    bulkadm hard nofile 65536

    Also, if you use the compound file format you will have far fewer files.
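
    A minimal sketch of enabling the compound format, assuming the
    Lucene 1.4-era IndexWriter API (the index path and analyzer choice
    here are just placeholders):

        import org.apache.lucene.analysis.standard.StandardAnalyzer;
        import org.apache.lucene.index.IndexWriter;

        public class CompoundFormatExample {
            public static void main(String[] args) throws Exception {
                // Open (or create) an index at a placeholder path.
                IndexWriter writer =
                    new IndexWriter("/tmp/index", new StandardAnalyzer(), true);

                // Store each segment as a single compound (.cfs) file instead
                // of one file per indexed field, which keeps the open-file
                // count low.
                writer.setUseCompoundFile(true);

                // ... add documents here ...

                writer.optimize();  // merge segments down to one
                writer.close();
            }
        }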

  • Dmitry at Nov 22, 2004 at 10:16 pm
    I'm sorry, I wasn't involved in the original conversation but maybe I
    can jump in with some info that will help.

    The number of files depends on the merge factor, the number of segments,
    and the number of indexed fields in your index. It also depends on
    whether you are using "compound files" or not (this is a flag on the
    IndexWriter). With the compound-files flag on, each segment has a fixed
    number of files, regardless of how many fields you use. Without the
    flag, each indexed field gets its own file per segment.

    Let's say you have 10 segments (per your merge factor) that are being
    merged into a new segment (via an optimize call, or simply because you
    have reached the merge factor). That means 11 segments are open at the
    same time. If you have 20 indexed fields and are not using compound
    files, that's 20 * 11 = 220 files. A few other files are open as well,
    plus whatever other files and sockets your JVM process is holding open
    at the time. This would include incoming connections, for example, if
    this is running inside a web server. If you are running in an
    application server, it could also include connections and files opened
    by other applications in the same app server.

    So the numbers run up quite a bit (see the rough estimate sketched at
    the end of this message).

    By the way, it is common to have the file descriptor limit set at 9000
    or so on Unix machines running production web applications. Note also
    that on Solaris you will need to modify a value in /etc/system to get
    up to this level; I'm not sure about Linux or other flavors.

    Another suggestion: you may want to look into a tool called "lsof". It
    is a utility that shows the file handles open by a particular process.
    It could be that some other part of your process (or of the application
    server, VM, etc.) is not closing files. This tool will help you see
    which files are open so you can verify that all of them really need to
    be open.

    Best of luck.
    Dmitry.
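
    As a back-of-the-envelope illustration of the arithmetic above (the
    field count and the overhead allowance below are example figures, not
    measurements):

        public class OpenFileEstimate {
            public static void main(String[] args) {
                int mergeFactor = 10;    // Lucene's default merge factor
                int indexedFields = 20;  // example value; substitute your schema's count
                int miscOverhead = 30;   // rough allowance for other JVM files, sockets, etc.

                // While merging, mergeFactor segments plus the new one are open at once.
                int segmentsOpen = mergeFactor + 1;

                // Without compound files, each open segment contributes roughly one
                // file per indexed field (bookkeeping files are lumped into the
                // overhead allowance).
                int estimate = segmentsOpen * indexedFields + miscOverhead;

                System.out.println("Rough open-file estimate without compound files: "
                                   + estimate);
            }
        }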


  • Chris Lamprecht at Nov 22, 2004 at 10:21 pm
    A useful resource for increasing the number of file handles on various
    operating systems is the Volano Report:

    http://www.volano.com/report/
  • John Wang at Nov 24, 2004 at 8:27 pm
    I have also seen this problem.

    In the Lucene code, I don't see where the reader specified when
    creating a field is closed. That holds on to the file.

    I am looking at DocumentWriter.invertDocument()

    Thanks

    -John


  • Doug Cutting at Nov 26, 2004 at 8:47 am

    John Wang wrote:
    In the Lucene code, I don't see where the reader specified when
    creating a field is closed. That holds on to the file.

    I am looking at DocumentWriter.invertDocument()

    It is closed in a finally clause on line 170, when the TokenStream is
    closed.

    Doug
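
    In rough terms, the close-in-finally pattern being described looks like
    the sketch below; this is a paraphrase of the idea, not the actual
    DocumentWriter source, and the class and method names are illustrative:

        import java.io.Reader;
        import java.io.StringReader;
        import org.apache.lucene.analysis.Analyzer;
        import org.apache.lucene.analysis.Token;
        import org.apache.lucene.analysis.TokenStream;
        import org.apache.lucene.analysis.standard.StandardAnalyzer;

        public class InvertSketch {
            // Tokenize one field value, closing the TokenStream (and with it
            // the underlying Reader) in a finally block.
            static int countTokens(Analyzer analyzer, String field, Reader reader)
                    throws Exception {
                TokenStream stream = analyzer.tokenStream(field, reader);
                int count = 0;
                try {
                    for (Token t = stream.next(); t != null; t = stream.next()) {
                        count++;  // a real inverter would record the token's term in the postings
                    }
                } finally {
                    stream.close();  // also closes the underlying Reader, releasing its file handle
                }
                return count;
            }

            public static void main(String[] args) throws Exception {
                Reader reader = new StringReader("too many open files");
                System.out.println(countTokens(new StandardAnalyzer(), "contents", reader));
            }
        }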



Discussion Overview
group: java-user @ lucene.apache.org
category: lucene
posted: Nov 22, 2004 at 3:06p
active: Nov 26, 2004 at 8:47a
posts: 6
users: 6