FAQ
Hi,

I have a program that creates a Lucene index, and another program that searches
that index.

The Search program creates an IndexSearcher object once, in its constructor,
and I have a doSearch method to search the index. The doSearch method
uses that IndexSearcher object to get the Hits.

My Indexer program is continuously adding documents to the index.

My problem is that documents added to the index after the IndexSearcher object
is created in my Search program never show up in my search results.
Is it possible to get all the matching documents in the results without
restarting the Searcher program?

Thanks,
Sunil
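
A minimal sketch of the setup described above, assuming the 2006-era Lucene API (Hits, IndexSearcher(String)). The index path, field name, and analyzer are illustrative; only the one-time IndexSearcher in the constructor and the doSearch method come from the post:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.queryParser.QueryParser;
    import org.apache.lucene.search.Hits;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.Query;

    public class Search {
        // Created once; it only ever sees the index as it was at this moment.
        private final IndexSearcher indexSearcher;

        public Search(String indexPath) throws Exception {
            this.indexSearcher = new IndexSearcher(indexPath);
        }

        public Hits doSearch(String queryText) throws Exception {
            Query query = new QueryParser("contents", new StandardAnalyzer()).parse(queryText);
            // Documents the Indexer adds after the constructor ran are never returned here.
            return indexSearcher.search(query);
        }
    }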


  • Karel Tejnora at Oct 26, 2006 at 11:19 am
    Nope. An IndexReader obtains a snapshot of the index. Also, never closing
    and reopening the IndexReader keeps old index files from being deleted
    (Windows throws an exception; Linux will simply not free the space).
    Is it possible to get all the matching documents in the results without
    restarting the Searcher program?
  • Sunil Kumar PK at Oct 26, 2006 at 12:07 pm
    could you please explain?
    On 10/26/06, Karel Tejnora wrote:
    Nope. An IndexReader obtains a snapshot of the index. Also, never closing
    and reopening the IndexReader keeps old index files from being deleted
    (Windows throws an exception; Linux will simply not free the space).
    Is it possible to get all the matching documents in the results without
    restarting the Searcher program?
  • Michael McCandless at Oct 26, 2006 at 1:22 pm

    Sunil Kumar PK wrote:
    could you please explain?
    On 10/26/06, Karel Tejnora wrote:
    Nope. An IndexReader obtains a snapshot of the index. Also, never closing
    and reopening the IndexReader keeps old index files from being deleted
    (Windows throws an exception; Linux will simply not free the space).
    Is it possible to get all the matching documents in the results without
    restarting the Searcher program?
    A searcher, once created, only searches the index as of the "point in
    time" at which it was created. I.e., it is an unchanging snapshot. So any
    adds, deletes, or updates made to the index by a writer will not be visible
    until you close the searcher and open a new one.

    This "point in time" searching relies on certain properties of the
    underlying filesystem in order to work properly. Windows local and
    remote (SMB) filesystems work because files that are open can't be
    deleted (and Lucene just retries); local UNIX filesystems work because
    the open file handle can still access a deleted file ("delete on last
    close").

    However: NFS does not have "delete on last close", so you can't rely on
    "point in time" searching when using NFS across machines (it's possible
    a single machine may work). If a writer on a different machine has
    committed to the index that a searcher is using over NFS then the
    searcher will eventually hit an IOException with "stale NFS handle".
    See here for details on current known issues with NFS:

    http://issues.apache.org/jira/browse/LUCENE-673

    Mike
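
A minimal sketch of the close-and-reopen step described above, again assuming the 2006-era API; reopening on every call, the synchronization, and the field/analyzer choices are illustrative, not prescribed by the thread:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.queryParser.QueryParser;
    import org.apache.lucene.search.Hits;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.Query;

    public class Search {
        private final String indexPath;
        private IndexSearcher indexSearcher;

        public Search(String indexPath) throws Exception {
            this.indexPath = indexPath;
            this.indexSearcher = new IndexSearcher(indexPath);
        }

        public synchronized Hits doSearch(String queryText) throws Exception {
            // Drop the old point-in-time snapshot and open a fresh one so that
            // documents committed by the Indexer since the last open become visible.
            indexSearcher.close();
            indexSearcher = new IndexSearcher(indexPath);

            Query query = new QueryParser("contents", new StandardAnalyzer()).parse(queryText);
            return indexSearcher.search(query);
        }
    }

Reopening on every search is the simplest but most expensive approach; a common compromise is to reopen only when the index has actually changed (e.g. by comparing the value of IndexReader.getCurrentVersion against the version seen at the last open) or on a timer.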

  • Stanislav Jordanov at Oct 26, 2006 at 3:27 pm
    I have the following problem with (explicitly invoked) index optimization:
    it seems to always merge all existing index segments into a single huge
    segment, which is undesirable in my case.
    Is there a way to force index optimization to honor the
    IndexWriter.MAX_MERGE_DOCS setting?

    Stanislav

  • Sunil Kumar PK at Oct 27, 2006 at 6:33 am
    Thanks Mike for the information.

    Actually I am using RemoteParallelMultiSearcher with 10 Search Servers; my
    crawler program frequently adds new documents to all the Search Servers in
    a distributed manner. So in this case, if I add a document to a particular
    index, I need to restart the searcher program on that server, right? Can I
    do this with a remote call, or do I need to add a new method to the
    Searchable interface?

    Thanks,
    Sunil
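
Purely as a hypothetical sketch of the "remote call" idea above: nothing here is part of Lucene, and the RefreshableSearcher interface and refresh() method are invented for illustration. Each search server could export a small extra RMI object whose only job is to close and reopen that server's local IndexSearcher:

    import java.io.IOException;
    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.server.UnicastRemoteObject;

    import org.apache.lucene.search.IndexSearcher;

    // Hypothetical remote interface, separate from Lucene's Searchable.
    interface RefreshableSearcher extends Remote {
        void refresh() throws RemoteException;
    }

    // Runs on each search server alongside the existing remote searchable.
    public class RefreshableSearcherImpl extends UnicastRemoteObject implements RefreshableSearcher {
        private final String indexPath;
        private IndexSearcher searcher;

        public RefreshableSearcherImpl(String indexPath) throws RemoteException, IOException {
            this.indexPath = indexPath;
            this.searcher = new IndexSearcher(indexPath);
        }

        // Called remotely by the crawler after it commits new documents to this server's index.
        public synchronized void refresh() throws RemoteException {
            try {
                searcher.close();                         // release the old snapshot
                searcher = new IndexSearcher(indexPath);  // pick up the newly added documents
            } catch (IOException e) {
                throw new RemoteException("Could not reopen index at " + indexPath, e);
            }
        }
    }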


Discussion Overview
group: java-user
categories: lucene
posted: Oct 26, '06 at 10:57a
active: Oct 27, '06 at 6:33a
posts: 6
users: 4
website: lucene.apache.org
