IOExceptions when optimising the index
Hi!

We are using Lucene 2.4.1 in our app. It has worked great so far, but now a customer has run into a strange problem.
During the day, the search index is updated regularly with the newest changes in the application. At night, when nothing much is happening in the application, the index is optimised.
The updating during the day works fine, but during the optimizing, all kinds of strange exceptions occur:

java.io.IOException: Access is denied
at java.io.WinNTFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(Unknown Source)
at org.apache.lucene.store.SimpleFSLock.obtain(SimpleFSLockFactory.java:144)
at org.apache.lucene.store.Lock.obtain(Lock.java:73)
at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:1070)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:566)

or

org.apache.lucene.store.LockReleaseFailedException: failed to delete searchindex\write.lock
at org.apache.lucene.store.SimpleFSLock.release(SimpleFSLockFactory.java:149)
at org.apache.lucene.index.IndexWriter.closeInternal(IndexWriter.java:1668)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1602)
at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:1578)

or

java.io.IOException: background merge hit exception: _dhi:c1195 _dlt:c33 into _dlu [optimize]
Exception in thread "Lucene Merge Thread #0"
at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2273)
at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2218)
at org.apache.lucene.index.IndexWriter.optimize(IndexWriter.java:2198)
...
Caused by: java.io.FileNotFoundException: searchindex\_dlu.fnm (The system cannot find the file specified)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(Unknown Source)
at org.apache.lucene.store.FSDirectory$FSIndexInput$Descriptor.<init>(FSDirectory.java:552)
at org.apache.lucene.store.FSDirectory$FSIndexInput.<init>(FSDirectory.java:582)
at org.apache.lucene.store.FSDirectory.openInput(FSDirectory.java:488)
at org.apache.lucene.store.FSDirectory.openInput(FSDirectory.java:482)
at org.apache.lucene.index.CompoundFileWriter.copyFile(CompoundFileWriter.java:221)
at org.apache.lucene.index.CompoundFileWriter.close(CompoundFileWriter.java:184)
at org.apache.lucene.index.SegmentMerger.createCompoundFile(SegmentMerger.java:204)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4263)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3884)
at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:205)
at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:260)

The indexes look fine when I open them with Luke, and since the normal updating works, I don't think it has anything to do with write permissions on the disks.
So, what could be the cause of this?
And: How necessary is it really to run an optimisation every night? A lot of changes take place when the program runs, so the search index is changed quite frequently. Maybe it is enough to let the automatic merging take care of things?

Greets,
Anna



---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org


  • Ian Lea at Apr 29, 2010 at 11:33 am
    Hi


    It is not necessary to run optimize. At a guess there is some job,
    such as a backup or virus check, that runs overnight and locks
    files or parts of the file system. If that is the case, and you do
    want to run optimize, perhaps you could schedule around it. Or switch
    to a Unix-based system that doesn't have these locking issues.


    --
    Ian.
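    Ian's suggestion to schedule around the conflicting job can also be complemented with a little defensiveness in code: retry the operation when a transient "Access is denied" or lock failure occurs. Below is a generic sketch in plain Java with no Lucene dependency; the retry count, delay, and all names are illustrative assumptions, not anything from this thread.

```java
import java.io.IOException;
import java.util.concurrent.Callable;

public class RetryDemo {

    // Retry an I/O operation that may fail transiently, e.g. because a
    // nightly backup or virus scanner briefly holds a file lock.
    // maxAttempts and sleepMillis are illustrative values.
    static <T> T retry(Callable<T> op, int maxAttempts, long sleepMillis)
            throws Exception {
        IOException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (IOException e) { // e.g. a lock-acquisition failure
                last = e;
                Thread.sleep(sleepMillis);
            }
        }
        throw last; // all attempts failed
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Simulate a lock that an external process releases after two failures.
        String result = retry(() -> {
            if (++calls[0] < 3) throw new IOException("Access is denied");
            return "optimize succeeded";
        }, 5, 10);
        System.out.println(result + " on attempt " + calls[0]);
        // prints: optimize succeeded on attempt 3
    }
}
```

    With Lucene itself, the exception worth catching for the lock case would typically be LockObtainFailedException, which (as far as I recall of the 2.x API) is a subclass of IOException, so a wrapper like this can treat both uniformly.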

    On Thu, Apr 29, 2010 at 11:50 AM, Anna Hunecke wrote:
    [snip]
  • Anna Hunecke at Apr 30, 2010 at 8:18 am
    Hi Ian,

    thanks for the answer. I suspected something like this, too. Telling the customer to switch to Unix is not an option, so I'll try to solve the problem by scheduling the optimization at some other time.

    Can you explain a bit more why you think optimization is not necessary?
    As far as I understand it, it is necessary to compact the index files from time to time, especially if there are many changes to the index.
    What I don't understand is the difference between an explicit optimization and the automatic merging of segment files.

    Best,
    Anna

    On Thu, 29 Apr 2010, Ian Lea <ian.lea@gmail.com> wrote:
    [snip]
  • Uwe Schindler at Apr 30, 2010 at 9:07 am
    Since Lucene 2.9 switched to per-segment search, every query runs separately on each segment of an index and the results are combined. There is no difference between an optimized and an unoptimized index for this process. Furthermore, if you sort by fields, you should not optimize at all: the FieldCache must be completely rebuilt after optimizing, which can take some time, whereas normally only changed segments are reloaded.

    If you update or add documents to the index, the merge process will automatically compact segments on the next merge. There may still be some deleted documents in between (especially if you optimized the index once, so that it consists of a single *large* segment, which is not touched or merged again until another very large segment of about the same size is added).

    Optimizing indexes only makes sense for static indexes like those delivered on CD-ROMs.
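    The point that automatic merging already keeps the segment count bounded can be illustrated with a toy simulation of a mergeFactor-style tiering policy. The mergeFactor of 10 mirrors Lucene's default, but this is a simplified model, not Lucene's actual merge code: each flush adds a tiny segment, and whenever ten segments of the same size exist they are merged into one, so the number of segments grows only logarithmically with the number of documents.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class MergeToy {

    static final int MERGE_FACTOR = 10; // mirrors Lucene's default mergeFactor

    // Segment sizes in documents. A real index tracks far more state;
    // this models only the size-tiering behaviour of automatic merging.
    static final List<Long> segments = new ArrayList<>();

    static void addSegment(long docs) {
        segments.add(docs);
        boolean merged = true;
        while (merged) { // cascade: ten 1s -> a 10, ten 10s -> a 100, ...
            merged = false;
            Collections.sort(segments);
            for (int i = 0; i + MERGE_FACTOR <= segments.size(); i++) {
                // ten consecutive equal-sized segments -> merge them into one
                if (segments.get(i).equals(segments.get(i + MERGE_FACTOR - 1))) {
                    long sum = 0;
                    for (int k = i; k < i + MERGE_FACTOR; k++) sum += segments.get(k);
                    for (int k = 0; k < MERGE_FACTOR; k++) segments.remove(i);
                    segments.add(sum);
                    merged = true;
                    break;
                }
            }
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) addSegment(1); // 10,000 one-doc flushes
        // All flushes collapse into a single 10,000-doc segment,
        // with no explicit optimize() call.
        System.out.println("segments: " + segments);
        // prints: segments: [10000]
    }
}
```

    In real Lucene 2.x this behaviour is governed by the merge policy (LogMergePolicy and IndexWriter.setMergeFactor, going from memory of that API), which is why day-to-day updates keep the index compact without a nightly optimize.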

    Uwe

    -----
    Uwe Schindler
    H.-H.-Meier-Allee 63, D-28213 Bremen
    http://www.thetaphi.de
    eMail: uwe@thetaphi.de

    -----Original Message-----
    From: Anna Hunecke
    Sent: Friday, April 30, 2010 11:18 AM
    To: java-user@lucene.apache.org
    Subject: Re: IOExceptions when optimising the index
    [snip]

Discussion Overview
group: java-user @ lucene.apache.org
categories: lucene
posted: Apr 29, '10 at 10:51a
active: Apr 30, '10 at 9:07a
posts: 4
users: 3
