FAQ

I’m using Lucene 2.4.1 and I’m trying to use a custom
HitCollector to collect only the first N hits (not the best hits) for
performance.
You mean that you do not need score calculation and therefore do not want results sorted by relevance; all you need is a Boolean retrieval model, right? All results would then have a constant score (0 or 1).

If so, I think omitting norms and term frequencies (tf) on your searchable fields can improve your search performance without writing a custom HitCollector; see the sketch after the list below.



- org.apache.lucene.document.Field.setOmitNorms(true)
- using Field.Index.ANALYZED_NO_NORMS in the Field constructor
- org.apache.lucene.document.Field.setOmitTf(true)
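
For example (Lucene 2.4 API; the field name and value are just placeholders):

    Document doc = new Document();
    Field f = new Field("title", "some text",
                        Field.Store.YES, Field.Index.ANALYZED_NO_NORMS); // no norms for this field
    f.setOmitTf(true); // also drop term frequencies (and positions) for this field
    doc.add(f);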


But I am not sure those changes alone are enough to turn the vector space model into a Boolean retrieval model; I think another trick is needed to avoid calculating the score/similarity at all.
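
One such trick might be to wrap the query in a ConstantScoreQuery (via a
QueryWrapperFilter), so that every matching document gets the same constant
score instead of a full similarity computation. This is only a suggestion, not
something verified in this thread; the query and field below are placeholders:

    Query raw = new PrefixQuery(new Term("title", "abc"));
    Query constantScoring = new ConstantScoreQuery(new QueryWrapperFilter(raw));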




  • Rafis at Aug 22, 2009 at 12:12 pm

    Len Takeuchi-2 wrote:

    I’m using Lucene 2.4.1 and I’m trying to use a custom HitCollector to
    collect only the first N hits (not the best hits) for performance. I saw
    another e-mail in this group where they mentioned writing a HitCollector
    which throws an exception after N hits to do this. So I tried this
    approach and it seems as if my HitCollector isn’t called until the hits
    have been determined, i.e. the time until my HitCollector is called is
    dependent on the number of hits and my performance is no better than when
    I was not using a custom HitCollector.
    In my case the count of found hits was required, so no exception was
    thrown.

    /**
     * Hit collector to collect only the first docs, ignoring score.
     * @author Rafis
     */
    public class LimitedHitCollector extends HitCollector {
        public final int capacity;
        public final int[] docs;
        private int count;

        public LimitedHitCollector(int capacity) {
            this.capacity = capacity;
            docs = new int[capacity];
        }

        @Override
        public void collect(int doc, float score) {
            if (count < capacity)
                docs[count] = doc;
            // else
            //     throw a custom exception here to break the search, if you do not need the real count
            count++;
        }

        /**
         * @return Number of collected docs
         */
        public int getSize() {
            return Math.min(count, capacity);
        }

        /**
         * @return Number of found docs
         */
        public int getCount() {
            return count;
        }
    }


    Add an exception in collect(int doc, float score) to break the search:

    @Override
    public void collect(int doc, float score) {
        if (count < capacity)
            docs[count++] = doc;
        else
            throw new SomeRuntimeException();
    }

    And catch it around the search call.
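
    A usage sketch for Lucene 2.4's Searcher#search(Query, HitCollector)
    overload; SomeRuntimeException stands for whatever unchecked exception you
    choose to throw:

    LimitedHitCollector collector = new LimitedHitCollector(100);
    try {
        searcher.search(query, collector);
    } catch (SomeRuntimeException e) {
        // capacity reached, the search was cut short
    }
    int collected = collector.getSize(); // number of doc ids stored in collector.docs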

    Regards,
    Rafis


  • Simon Willnauer at Aug 22, 2009 at 12:34 pm
    Hi Len,
    What kind of query do you execute when you collect the hits? The
    HitCollector should be called for each document at the time it is scored.
    Is it possible that you are running a query that is expensive in terms of
    term expansion, like a WildcardQuery?

    simon


    On Sat, Aug 22, 2009 at 7:09 AM, Len Takeuchi wrote:
    Hello,

    I’m using Lucene 2.4.1 and I’m trying to use a custom HitCollector to collect
    only the first N hits (not the best hits) for performance.  I saw another
    e-mail in this group where they mentioned writing a HitCollector which throws
    an exception after N hits to do this.  So I tried this approach and it seems
    as if my HitCollector isn’t called until the hits have been determined, i.e.
    the time until my HitCollector is called is dependent on the number of hits
    and my performance is no better than when I was not using a custom
    HitCollector.  Does anyone have insight into my problem?  The person who
    tried the approach mentioned performance improved significantly for him.

    Regards,
    Len


  • Len Takeuchi at Aug 22, 2009 at 3:52 pm
    Hello,

    I have attached the original thread from where I got my information at the
    very bottom in case it is of any help. In regards to whether I want just a
    boolean retrieval model, in the usage we are currently discussing, the
    answer is yes (I don't care about the score). However, we also do other
    queries in other use cases where we do care about the score.

    Regarding the type of query, we are using a prefix query. We have a
    performance problem when the prefix entered by the user is short, because
    it yields a large number of hits. I was hoping that by taking the approach
    I mentioned, the search engine would call the HitCollector incrementally
    and give me a chance to end the search early, but that does not seem to be
    happening. Do you think the problem is with term expansion?

    Regards,
    Len

    ORIGINAL THREAD BELOW
    =====================

    ----- original message -----
    From : yodapoubelle [at] yahoo
    Re : Stop search process when a given number of hits is reached

    Thanks a lot for your responses...

    I have tried the HitCollector and I throw an exception when the limit of
    hits is reached. It works fine, and the search time is really reduced when
    there are a lot of docs matching the query.

    I did that :

    public class CountCollector extends HitCollector {
        public int cpt;
        private int _maxHit;

        public CountCollector(int maxHit) {
            cpt = 0;
            _maxHit = maxHit;
        }

        public void collect(int arg0, float arg1) {
            cpt++;
            if (cpt > _maxHit) {
                throw new LimitIsReachedException();
            }
        }
    }

    With a simple try/catch, I catch the exception and display "cpt" (the
    counter).
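
    A sketch of that try/catch (LimitIsReachedException is the poster's own
    unchecked exception, CountCollector is the class above):

    CountCollector collector = new CountCollector(500);
    try {
        searcher.search(query, collector);
        System.out.println(collector.cpt + " docs matched"); // fewer than 500 matches
    } catch (LimitIsReachedException e) {
        System.out.println("more than 500");                 // limit reached, search stopped early
    }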

    Best regards

    ----- Original message -----
    From: Andrzej Bialecki <ab[at]getopt.org>
    To: java-user[at]lucene.apache.org
    Sent: Thursday, 7 August 2008, 14:29:31
    Subject: Re: Stop search process when a given number of hits is reached

    Doron Cohen wrote:
    Nothing built in that I'm aware of will do this, but it can be done by
    searching with your own HitCollector.
    There is a related feature - stop search after a specified time - using
    TimeLimitedCollector.
    It is not released yet, see issue LUCENE-997.
    In short, the collector's collect() method is invoked in the search process
    for each matching document.
    Once 500 docs were collected, your collector can cause the search to stop by
    throwing an exception.
    Upon catching the exception you know that 500 docs were collected.
    Two additional comments:

    * the topN results from such incomplete search may be way off, if there
    were some high scoring documents somewhere beyond the limit.

    * if you know that there are more important and less important documents
    in your corpus, and their relative weight is independent of the query
    (e.g. PageRank-type score), then you can restructure your index so that
    postings belonging to highly-scoring documents come first on the posting
    lists - this way you have a better chance to collect highly relevant
    documents first, even though the search is incomplete. You can find an
    implementation of this concept in Nutch
    (org.apache.nutch.indexer.IndexSorter).

    --
    Best regards,
    Andrzej Bialecki <><
    http://www.sigram.com Contact: info at sigram dot com


    ----- original message -----
    From : yodapoubelle [at] yahoo
    Stop search process when a given number of hits is reached

    Hello

    Is there a way to stop the search process when a given number of hits is
    reached?

    I have a counter feature which displays how many docs match a query.
    This counter is capped; I mean that if there are more than 500 docs, it
    will just display "more than 500".
    I don't care about the exact number of docs matched by the query, the
    order of the hits, or whatever.
    What I want is to stop the search process once it reaches at least 500
    hits, in order to improve performance (I want an average search time of
    about 50-100 ms).

    I experimented with the following methods:

    Query 1:
      search(Query query, Filter filter, Sort sort):      hits=157691, searchingTime=2514 ms
      search(Query query, Filter filter, int n), n = 50:  TopDocs totalHits=157691, searchingTime=2360 ms

    Query 2:
      search(Query query, Filter filter, Sort sort):      hits=1208, searchingTime=750 ms
      search(Query query, Filter filter, int n), n = 50:  TopDocs totalHits=1208, searchingTime=718 ms

    Query 3:
      search(Query query, Filter filter, Sort sort):      hits=16174, searchingTime=1297 ms
      search(Query query, Filter filter, int n), n = 50:  TopDocs totalHits=16174, searchingTime=1219 ms

    According to these results, replacing the first method with the other has
    no real effect on either the search time or the total number of hits
    returned.

    Also, the Lucene version used now is 1.9.1 (but I am working on upgrading
    to 2.3.2).


    Thanks a lot
    (Sorry for my bad English ... you will easily guess, I’m French ;)

  • Simon Willnauer at Aug 22, 2009 at 4:19 pm
    Hi Len,

    On Sat, Aug 22, 2009 at 5:51 PM, Len Takeuchi wrote:
    Hello,

    I have attached the original thread from where I got my information at the
    very bottom in case it is of any help. In regards to whether I want just a
    boolean retrieval model, in the usage we are currently discussing, the
    answer is yes (I don't care about the score).  However, we also do other
    queries in other use cases where we do care about the score.

    Regarding the type of query, we are using a prefix query. We have a
    performance problem when the prefix entered by the user is short, because
    it yields a large number of hits. I was hoping that by taking the approach
    I mentioned, the search engine would call the HitCollector incrementally
    and give me a chance to end the search early, but that does not seem to be
    happening. Do you think the problem is with term expansion?
    That is what my first guess was, and I'm pretty sure that most of the time
    is spent before the documents get scored. A short prefix can easily expand
    to thousands of terms. Do you encounter TooManyClausesExceptions, and do
    you in turn set BooleanQuery#setMaxClauseCount() to a value higher than
    1024? I also wonder whether BooleanQuery#setAllowDocsOutOfOrder(true)
    would give you any performance gain, since you don't care about the order
    in which the docs come in. Any idea how many terms your prefix query
    expands to?
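
    For reference, both of those are static switches on BooleanQuery in 2.4
    (shown as a sketch, not as a confirmed fix for this case):

    BooleanQuery.setMaxClauseCount(4096);       // raise the default 1024-clause limit
    BooleanQuery.setAllowDocsOutOfOrder(true);  // allow out-of-order collection when doc order doesn't matter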

    simon
  • Simon Willnauer at Aug 22, 2009 at 4:32 pm
    Hey Len,
    One more thing... while I have no idea about your use case, if you don't
    care about the score you could expand the terms yourself, just like
    PrefixQuery does; a rough sketch follows.
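
    Something along these lines; the class name, field handling, and cap are
    made up for illustration, and it just mimics what PrefixQuery's rewrite
    does while stopping after maxTerms terms:

    import java.io.IOException;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.index.TermEnum;
    import org.apache.lucene.search.BooleanClause;
    import org.apache.lucene.search.BooleanQuery;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.TermQuery;

    public final class CappedPrefixExpander {

        /** Builds a BooleanQuery of at most maxTerms TermQuery clauses matching the prefix. */
        public static Query expand(IndexReader reader, String field, String prefix, int maxTerms)
                throws IOException {
            BooleanQuery bq = new BooleanQuery(true); // disable coord; scores are ignored anyway
            TermEnum te = reader.terms(new Term(field, prefix)); // positioned at first term >= prefix
            try {
                int n = 0;
                do {
                    Term t = te.term();
                    if (t == null || !t.field().equals(field) || !t.text().startsWith(prefix)) {
                        break; // left the field or the prefix range
                    }
                    bq.add(new TermQuery(t), BooleanClause.Occur.SHOULD);
                    n++;
                } while (n < maxTerms && te.next());
            } finally {
                te.close();
            }
            return bq;
        }
    }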

    simon

  • Len Takeuchi at Aug 24, 2009 at 4:41 pm
    Hi Simon,
    That is what my first guess was, and I'm pretty sure that most of the time
    is spent before the documents get scored. A short prefix can easily expand
    to thousands of terms. Do you encounter TooManyClausesExceptions, and do
    you in turn set BooleanQuery#setMaxClauseCount() to a value higher than
    1024? I also wonder whether BooleanQuery#setAllowDocsOutOfOrder(true)
    would give you any performance gain, since you don't care about the order
    in which the docs come in. Any idea how many terms your prefix query
    expands to?

    I looked into it and the prefix query we are finding to be slow expands
    to about 150 terms (and hence we're not getting
    TooManyClausesExceptions).

    One more thing... while I have no idea about your use case, if you don't
    care about the score you could expand the terms yourself, just like
    PrefixQuery does.
    I was trying out a few things, and if term expansion is limited to about
    20 terms or so the performance becomes okay for us. I will have to find
    some way to limit the expansion for some queries. I'll look into expanding
    the terms myself as you suggest, e.g. along the lines of the sketch below.
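
    For example (purely hypothetical, reusing the capped-expansion sketch from
    earlier in the thread):

    Query q = CappedPrefixExpander.expand(reader, "title", userPrefix, 20); // cap at ~20 terms
    searcher.search(q, new LimitedHitCollector(50));                        // collect only the first docs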

    Thanks for your help,
    Len


