hello all,
I am here to ask about how Lucene flushes indexes.
Below is pseudocode I got from the book Lucene in Action.

FSDirectory fsDir = FSDirectory.getDirectory("/tmp/index", true);
RAMDirectory ramDir = new RAMDirectory();
IndexWriter fsWriter = new IndexWriter(fsDir, new SimpleAnalyzer(), true);
IndexWriter ramWriter = new IndexWriter(ramDir, new SimpleAnalyzer(), true);
while (there are documents to index) {
    ... create Document ...
    ramWriter.addDocument(doc);
    if (condition for flushing memory to disk has been met) { // <-- THIS LINE FOR FLUSHING INDEX
        fsWriter.addIndexes(new Directory[] {ramDir});
        ramWriter.close();
        ramWriter = new IndexWriter(ramDir, new SimpleAnalyzer(), true);
    }
}

The above code has a condition under which the index needs to be flushed
from the RAMDirectory to disk. What I am asking is: what is the correct
source code to use? I mean, what is the source code for flushing indexes?
Maybe some of you have tried it and can help me out here. I am a bit stuck
at this point. Any help would be appreciated.

thanks

--
http://jacobian.web.id

---------------------------------------------------------------------
To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-user-help@lucene.apache.org


  • Uwe Schindler at Sep 26, 2010 at 4:38 am
    You should close the ramWriter before doing the addIndexes call. Otherwise
    you simply copy only *committed* changes, but there are none, as the index
    is initially empty and stays empty for an outside reader (your addIndexes
    call is the outside reader) until the ramWriter is closed or committed. :-)

    -----
    Uwe Schindler
    H.-H.-Meier-Allee 63, D-28213 Bremen
    http://www.thetaphi.de
    eMail: uwe@thetaphi.de

    -----Original Message-----
    From: Yakob
    Sent: Saturday, September 25, 2010 9:17 PM
    To: java-user@lucene.apache.org
    Subject: flushing index

    hello all,
    I am here to ask about lucene in flushing indexes.
    below is a pseudocode I get from the book lucene in action.

    FSDirectory fsDir = FSDirectory.getDirectory("/tmp/index",
    true);
    RAMDirectory ramDir = new RAMDirectory(); IndexWriter fsWriter =
    IndexWriter(fsDir, new SimpleAnalyzer(), true); IndexWriter ramWriter = new
    IndexWriter(ramDir, new SimpleAnalyzer(), true); while (there are documents
    to index) { ... create Document ...
    ramWriter.addDocument(doc);
    if (condition for flushing memory to disk has been met) { ///THIS LINE FOR
    FLUSHING INDEX fsWriter.addIndexes(Directory[] {ramDir});
    ramWriter.close();
    ramWriter = new IndexWriter(ramDir, new SimpleAnalyzer(), true); } }

    the above code has a condition to which the index need to be flush from
    RAMDirectory to the disk. what I am asking is that what is the correct source
    code to be used? I mean what is the source code for flushing indexes? maybe
    some of you had tried it so can help me out here. I am a bit stuck at this point.
    any help would be appreciated.

    thanks

    --
    http://jacobian.web.id

    ---------------------------------------------------------------------
    To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
    For additional commands, e-mail: java-user-help@lucene.apache.org


    ---------------------------------------------------------------------
    To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
    For additional commands, e-mail: java-user-help@lucene.apache.org
  • Yakob at Sep 26, 2010 at 12:08 pm

    On 9/26/10, Uwe Schindler wrote:
    Thanks for the suggestion, but I was also asking what source code I
    should fill in on this line:

    if (condition for flushing memory to disk has been met) { // <-- THIS LINE
        ramWriter.close();
        fsWriter.addIndexes(new Directory[] {ramDir});
        ramWriter = new IndexWriter(ramDir, new SimpleAnalyzer(), true);
        }
    }

    I mean, what source code can I put after the if statement so that it
    will force the index to be flushed?
    But anyway, did I put the ramWriter.close() correctly in the above
    source code? You advised me to close ramWriter before the addIndexes()
    call, right?

    Thanks for your help though. :-)
    --
    http://jacobian.web.id

  • Uwe Schindler at Sep 26, 2010 at 5:16 pm

    but anyway, did I put the ramwriter.close() correctly in the above source code?
    you advised me to close ramwriter before the addIndexes() right?
    did I put the ramwriter.close() correctly in the above source code?
    you advised me to close ramwriter before the addIndexes() right?
    Yes. You must close it before, else the addIndexes call will do nothing,
    as the index looks empty to the addIndexes() call (because no committed
    segments are available in the ramDir).

    I don't understand what you mean by flushing. If you are working on Lucene
    2.9 or 3.0, the ramWriter is flushed to the RAMDirectory on close. The
    addIndexes call will add the index to the on-disk writer. To flush that
    fsWriter (flush is the wrong term; you probably mean commit), simply call
    fsWriter.commit() so the newly added segments are written to disk and
    IndexReaders opened in parallel "see" the new segments.

    Btw: if you are working on Lucene 3.0, the addIndexes call does not need
    the new Directory[] {}, as the method takes Java 5 varargs now.
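    Putting these points together, the loop body might look like this (a
    sketch against the Lucene 3.0-era API; shouldFlush() is a placeholder for
    whatever trigger condition you choose, e.g. a document count):

        if (shouldFlush()) {
            ramWriter.close();               // commits the RAM segments so addIndexes can see them
            fsWriter.addIndexes(ramDir);     // Lucene 3.0 varargs; new Directory[] {ramDir} on 2.9
            fsWriter.commit();               // make the new on-disk segments visible to readers
            ramDir = new RAMDirectory();     // start over with an empty RAM index
            ramWriter = new IndexWriter(ramDir, new SimpleAnalyzer(), true,
                    IndexWriter.MaxFieldLength.LIMITED);
        }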

    Uwe


  • Yakob at Sep 28, 2010 at 6:18 am

    On 9/27/10, Uwe Schindler wrote:

    I mean I need to flush the index periodically. That means the index will
    be regularly updated as documents are added. What do you reckon is the
    solution for this? I need some sample source code to be able to flush an
    index.

    OK, take this source code below, for example.

    import java.io.File;
    import java.io.FileReader;
    import java.io.IOException;
    import org.apache.lucene.analysis.SimpleAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.FSDirectory;

    public class SimpleFileIndexer {

        public static void main(String[] args) throws Exception {
            File indexDir = new File("C:/Users/Raden/Documents/lucene/LuceneHibernate/adi");
            File dataDir = new File("C:/Users/Raden/Documents/lucene/LuceneHibernate/adi");
            String suffix = "txt";

            SimpleFileIndexer indexer = new SimpleFileIndexer();
            int numIndex = indexer.index(indexDir, dataDir, suffix);
            System.out.println("Total files indexed " + numIndex);
        }

        private int index(File indexDir, File dataDir, String suffix) throws Exception {
            IndexWriter indexWriter = new IndexWriter(
                    FSDirectory.open(indexDir),
                    new SimpleAnalyzer(),
                    true,
                    IndexWriter.MaxFieldLength.LIMITED);
            indexWriter.setUseCompoundFile(false);

            indexDirectory(indexWriter, dataDir, suffix);

            int numIndexed = indexWriter.maxDoc();
            indexWriter.optimize();
            indexWriter.close();
            return numIndexed;
        }

        private void indexDirectory(IndexWriter indexWriter, File dataDir, String suffix)
                throws IOException {
            File[] files = dataDir.listFiles();
            for (int i = 0; i < files.length; i++) {
                File f = files[i];
                if (f.isDirectory()) {
                    indexDirectory(indexWriter, f, suffix);
                } else {
                    indexFileWithIndexWriter(indexWriter, f, suffix);
                }
            }
        }

        private void indexFileWithIndexWriter(IndexWriter indexWriter, File f, String suffix)
                throws IOException {
            if (f.isHidden() || f.isDirectory() || !f.canRead() || !f.exists()) {
                return;
            }
            if (suffix != null && !f.getName().endsWith(suffix)) {
                return;
            }
            System.out.println("Indexing file " + f.getCanonicalPath());

            Document doc = new Document();
            doc.add(new Field("contents", new FileReader(f)));
            doc.add(new Field("filename", f.getCanonicalPath(), Field.Store.YES,
                    Field.Index.ANALYZED));
            indexWriter.addDocument(doc);
        }
    }


    The above source code can index documents when given a directory of text
    files. Now what I am asking is: how can I make the code run continuously?
    What class should I use, so that every time new documents are added to
    that directory, Lucene will index them automatically? Can you help me out
    on this one? I really need to know the best solution.

    thanks
    --
    http://jacobian.web.id

  • Erick Erickson at Sep 28, 2010 at 12:51 pm
    Flushing an index to disk is just an IndexWriter.commit(), there's nothing
    really special about that...

    About running your code continuously, you have several options:
    1> schedule a recurring job to do this. On *nix systems, this is a cron job,
    on Windows systems there's a job scheduler.
    2> Just start it up in an infinite loop. That is, your main is just a
    while(true) {}. You'll probably want to throttle it a bit: run, sleep
    for some interval, and start again.
    3> You can get really fancy and try to put some filesystem hooks in that
    notify you when anything changes in a directory, but I really wouldn't go
    there.

    Note that you'll have to keep some kind of timestamp (probably in a separate
    file or configuration somewhere) that you can compare against to figure out
    whether you've already indexed the current version of the file.
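    Options 1 and 2, together with that timestamp bookkeeping, can be
    sketched in plain Java. The class and method names below are illustrative
    only (not from any library), and the actual Lucene reindexing call is
    left out:

    ```java
    import java.io.File;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Sketch of a polling loop's change detection using only the standard
    // library: scan() returns files that are new, or whose lastModified has
    // changed since the previous scan. A real indexer would hand each result
    // to its IndexWriter and then commit.
    public class DirectoryPoller {
        private final Map<String, Long> seen = new HashMap<String, Long>();

        public List<File> scan(File dir) {
            List<File> changed = new ArrayList<File>();
            File[] files = dir.listFiles();
            if (files == null) {
                return changed;                  // missing or unreadable directory
            }
            for (File f : files) {
                if (f.isDirectory()) {
                    changed.addAll(scan(f));     // recurse, like indexDirectory above
                } else {
                    Long prev = seen.get(f.getAbsolutePath());
                    long now = f.lastModified();
                    if (prev == null || prev.longValue() != now) {
                        seen.put(f.getAbsolutePath(), now);
                        changed.add(f);          // new or modified: reindex candidate
                    }
                }
            }
            return changed;
        }
    }
    ```

    In the indexer above, each scan() result would be fed to
    indexFileWithIndexWriter, followed by a commit and a sleep before the
    next pass.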

    The other thing you'll have to worry about is deletions. That is, how do you
    *remove* a file from your index if it has been deleted on disk? You may have
    to ask your index for all the file paths.

    You want to think about storing the file path NOT analyzed (perhaps with
    keywordtokenizer). That way you'll be able to know which files to remove
    if they are no longer in your directory. As well as which files to update
    when they've changed.
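    For instance, storing the path as a single un-analyzed token makes
    exact-match deletes possible later. A sketch against the Lucene 3.0-era
    API (deletedPath is a placeholder for a path you detected as removed):

        // Index the path as one un-tokenized term so it can be matched exactly.
        doc.add(new Field("path", f.getCanonicalPath(),
                Field.Store.YES, Field.Index.NOT_ANALYZED));

        // Later, when the file disappears from disk:
        indexWriter.deleteDocuments(new Term("path", deletedPath));
        indexWriter.commit();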

    HTH
    Erick
  • Yakob at Sep 28, 2010 at 1:39 pm

    On 9/28/10, Erick Erickson wrote:

    I think I'll go with the third option. I found a library that can monitor
    a given directory, called JNotify. I am planning to use it in my code
    above. Can you tell me how to do that? Or maybe you can point me to any
    tutorials that explain how to use JNotify with Lucene source code. I
    searched for JNotify on Google, but there are still only a few JNotify
    tutorials, I guess.

    Thanks though.

    --
    http://jacobian.web.id

  • Erick Erickson at Sep 28, 2010 at 8:29 pm
    Nope, never used jNotify, so I don't have any code handy...

    Good luck!
    Erick
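    For reference, newer JDKs (7 and up) ship a standard directory-watching
    API, java.nio.file.WatchService, which avoids JNotify's native-code
    dependency. A minimal sketch (the class and method names are illustrative;
    a real indexer would reindex the returned files rather than return them):

    ```java
    import java.nio.file.Path;
    import java.nio.file.StandardWatchEventKinds;
    import java.nio.file.WatchEvent;
    import java.nio.file.WatchKey;
    import java.nio.file.WatchService;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.TimeUnit;

    // Waits up to timeoutSec for file-system events in dir and returns the
    // affected file names (relative to dir). Uses only the standard library.
    public class DirWatcher {
        public static List<String> pollChanges(Path dir, long timeoutSec) throws Exception {
            WatchService ws = dir.getFileSystem().newWatchService();
            try {
                dir.register(ws,
                        StandardWatchEventKinds.ENTRY_CREATE,
                        StandardWatchEventKinds.ENTRY_MODIFY,
                        StandardWatchEventKinds.ENTRY_DELETE);
                List<String> changed = new ArrayList<String>();
                WatchKey key = ws.poll(timeoutSec, TimeUnit.SECONDS); // block until event or timeout
                if (key != null) {
                    for (WatchEvent<?> ev : key.pollEvents()) {
                        changed.add(String.valueOf(ev.context()));    // file name relative to dir
                    }
                    key.reset();
                }
                return changed;
            } finally {
                ws.close();
            }
        }
    }
    ```

    Each returned name would then be resolved against the watched directory
    and passed to the indexer, much like the polling approach but without the
    fixed sleep interval.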
  • Yakob at Oct 11, 2010 at 5:35 pm

    On 9/29/10, Erick Erickson wrote:
    So I did try JNotify, but there seems to be a bug in it that makes it
    hard to integrate into my Lucene source code, so I had to try a looping
    option instead.

    http://stackoverflow.com/questions/3840844/error-exception-access-violation-in-jnotify

    So anyway, I have another question now. I am trying to write Lucene code
    that does the indexing and stores the index in memory first, using a
    RAMDirectory, and then flushes this in-memory index to disk using an
    FSDirectory. I have made some modifications to this code, but to no
    avail. Maybe some of you can help me out a bit.
    Here is the source code again.

    import java.io.File;
    import java.io.FileReader;
    import java.io.IOException;
    import org.apache.lucene.analysis.SimpleAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.FSDirectory;

    public class SimpleFileIndexer {

        public static void main(String[] args) throws Exception {
            int i = 0;
            while (i < 10) {
                File indexDir = new File("C:/Users/Raden/Documents/lucene/LuceneHibernate/adi");
                File dataDir = new File("C:/Users/Raden/Documents/lucene/LuceneHibernate/adi");
                String suffix = "txt";

                SimpleFileIndexer indexer = new SimpleFileIndexer();
                int numIndex = indexer.index(indexDir, dataDir, suffix);
                System.out.println("Total files indexed " + numIndex);

                i++;
                Thread.sleep(10000);
            }
        }

        private int index(File indexDir, File dataDir, String suffix) throws Exception {
            IndexWriter indexWriter = new IndexWriter(
                    FSDirectory.open(indexDir),
                    new SimpleAnalyzer(),
                    true,
                    IndexWriter.MaxFieldLength.LIMITED);
            indexWriter.setUseCompoundFile(false);

            indexDirectory(indexWriter, dataDir, suffix);

            int numIndexed = indexWriter.maxDoc();
            indexWriter.optimize();
            indexWriter.close();
            return numIndexed;
        }

        private void indexDirectory(IndexWriter indexWriter, File dataDir, String suffix)
                throws IOException {
            File[] files = dataDir.listFiles();
            for (int i = 0; i < files.length; i++) {
                File f = files[i];
                if (f.isDirectory()) {
                    indexDirectory(indexWriter, f, suffix);
                } else {
                    indexFileWithIndexWriter(indexWriter, f, suffix);
                }
            }
        }

        private void indexFileWithIndexWriter(IndexWriter indexWriter, File f, String suffix)
                throws IOException {
            if (f.isHidden() || f.isDirectory() || !f.canRead() || !f.exists()) {
                return;
            }
            if (suffix != null && !f.getName().endsWith(suffix)) {
                return;
            }
            System.out.println("Indexing file " + f.getCanonicalPath());

            Document doc = new Document();
            doc.add(new Field("contents", new FileReader(f)));
            doc.add(new Field("filename", f.getCanonicalPath(), Field.Store.YES,
                    Field.Index.ANALYZED));
            indexWriter.addDocument(doc);
        }
    }

    So what's the best way for me to integrate a RAMDirectory into that
    source code before putting the index in the FSDirectory? Any help would
    be appreciated.
    thanks


    --
    http://jacobian.web.id

  • Erick Erickson at Oct 11, 2010 at 8:45 pm
    It's a good idea to start a new thread when asking a different question,
    see:
    http://people.apache.org/~hossman/#threadhijack

    <http://people.apache.org/~hossman/#threadhijack>I have to ask why you want
    to integrate the RAM directory. If you're using it
    to speed up indexing, you're probably making way more work for yourself
    than you need to. If you're trying to do something with Near Real Time, one
    suggestion is to just not bother. Add docs to the RAM directory AND your
    FSDirectory simultaneously. The data you index to FSDir won't be visible
    until you reopen the FSDir reader, so your flush could then be just
    reopen everything...
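    That suggestion might be sketched roughly like this (Lucene 3.0-era API
    assumed; writer and reader setup omitted):

        // Add each document to BOTH writers; the RAM index is searchable
        // immediately, and the on-disk index catches up on commit.
        ramWriter.addDocument(doc);
        fsWriter.addDocument(doc);

        // Periodically, instead of copying RAM -> disk:
        fsWriter.commit();                        // durably write the new segments
        IndexReader newReader = reader.reopen();  // pick up the committed segments
        if (newReader != reader) {
            reader.close();
            reader = newReader;
        }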

    Best
    Erick

Discussion Overview
group: java-user
categories: lucene
posted: Sep 26, '10 at 4:17a
active: Oct 11, '10 at 8:45p
posts: 10
users: 3
website: lucene.apache.org

site design / logo © 2022 Grokbase