maxDoc and arrays
Hi all,

Is there any guarantee that the maxDoc returned by a reader will be close to
the total number of indexed documents?

The motivation of this question is that I want to associate some info to
each document in the index, and in order to access this additional data in
O(1) I would like to do this through an array indexing. But the array size
shouldn't be a lot greater than the total number of documents. I see that
something similar is done in the example of section 6.1 of Lucene in Action,
but for sorting purposes, which is not my case.

Related to this: how can I update my array of extra data when documents are
added/removed to/from the index? Is there any feedback mechanism by means of
callbacks or event handlers?

Thank you in advance.
Regards,
Carlos
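The array-indexing idea can be sketched as follows — a minimal model, not from the thread itself; a plain int stands in for IndexReader.maxDoc() so the example is self-contained, and the `ExtraData` name and float payload are illustrative assumptions:

```java
import java.util.Arrays;

// O(1) per-document lookup via an array indexed by Lucene docId.
public class ExtraData {
    private final float[] values;   // one slot per possible docId

    public ExtraData(int maxDoc) {
        // Allocate one element per document number, as the
        // IndexReader.maxDoc() Javadoc suggests. Slots belonging to
        // deleted docs are simply wasted.
        values = new float[maxDoc];
        Arrays.fill(values, Float.NaN); // NaN marks "no data"
    }

    public void put(int docId, float value) { values[docId] = value; }

    public float get(int docId) { return values[docId]; } // O(1)
}
```

The trade-off the thread discusses is exactly this allocation: the array is sized by maxDoc (which counts deleted docs), not by numDocs.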


  • Erick Erickson at May 24, 2007 at 5:25 pm
    See below...
    On 5/24/07, Carlos Pita wrote:

    Hi all,

    Is there any guarantee that the maxDoc returned by a reader will be close
    to the total number of indexed documents?


    No. It will always be at least as large as the total number of documents,
    but it will also count deleted documents.

    Why wouldn't numDocs serve?

    Best
    Erick


  • Carlos Pita at May 24, 2007 at 6:05 pm
    Why wouldn't numDocs serve?

    Because the document id (which is the array index) would be in the range
    0 ... maxDoc and not 0 ... numDocs, wouldn't it?

    Cheers,
    Carlos

  • Carlos Pita at May 24, 2007 at 6:31 pm


    No. It will always be at least as large as the total documents. But that
    will also count deleted documents.


    Do you mean that deleted document ids won't be reused, so the index's
    maxDoc will grow more and more over time? Isn't there any way to compress
    the range? It seems strange to me, considering that an example in the book
    suggests using the document id as an array index for an array of maxDoc
    elements.

    Cheers,
    Carlos

  • Erick Erickson at May 24, 2007 at 8:06 pm
    Document IDs will be reused after, say, an optimization.
    One consequence of this is that optimization will change the IDs
    of *existing* documents.

    You're right that numDocs may well be smaller than maxDoc.
    That's what I get for reading quickly...

    Best
    Erick
  • Carlos Pita at May 24, 2007 at 8:14 pm
    That's no problem, I can regenerate my entire extra data structure upon
    periodic index optimization. That way the array size will be about the size
    of the index. What I find more difficult is to know the id of the last
    added/removed document. I need it to update the in-mem structure upon more
    fine-grained index changes. Any ideas?

    TIA.
    Cheers,
    Carlos
  • Erick Erickson at May 24, 2007 at 8:22 pm
    From the Javadoc for IndexReader.maxDoc():

    "Returns one greater than the largest possible document number. This may be
    used to, e.g., determine how big to allocate an array which will have an
    element for every document number in an index."

    Isn't that what you're wondering about?

    Isn't that what you're wondering about?

    Erick
  • Carlos Pita at May 24, 2007 at 8:28 pm
    Yes Erick, that's fine. But the fact is that I'm not sure whether the next
    added document will have an id equal to maxDoc. If this is guaranteed, then
    I will update the maxDoc slot of my extra data structure upon document
    addition and get rid of the hits.id(0) slot upon document deletion. Then,
    when the index is optimized, I will recreate the entire structure from
    scratch. Do you think I can rely on this?

    Cheers,
    Carlos
  • Yonik Seeley at May 24, 2007 at 8:36 pm

    On 5/24/07, Carlos Pita wrote:

    But the fact is that I'm not sure whether the next added document will
    have an id equal to maxDoc.

    Yes. The highest docId will always be the last document added, and
    docIds are never re-arranged with respect to each other.

    So if you do an addDocument(), it will have an id of maxDoc()-1.
    *But* beware of renumbering caused by squeezing out deleted docs on
    segment merges, as I mentioned in the last message.

    -Yonik

    ---------------------------------------------------------------------
    To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
    For additional commands, e-mail: java-user-help@lucene.apache.org
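The renumbering Yonik warns about can be modeled directly: when a merge squeezes out deletions, surviving documents keep their relative order, so a document's new id is its old id minus the number of deleted ids below it. A small sketch of that arithmetic (a model of the behavior, not Lucene code; the `Renumber` class is a hypothetical name):

```java
import java.util.Arrays;

public class Renumber {
    /**
     * New id of a surviving document after a merge expunges deletions.
     *
     * @param oldId      id before the merge (assumed not deleted itself)
     * @param deletedIds sorted array of deleted doc ids
     */
    public static int afterMerge(int oldId, int[] deletedIds) {
        int pos = Arrays.binarySearch(deletedIds, oldId);
        // oldId is not in the array, so binarySearch returns
        // -(insertionPoint) - 1; the insertion point equals the
        // count of deleted ids smaller than oldId.
        int deletedBelow = -pos - 1;
        return oldId - deletedBelow;
    }
}
```

For example, with docs 0..5 and ids 1 and 3 deleted, the survivors 0, 2, 4, 5 become 0, 1, 2, 3 after the merge.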
  • Carlos Pita at May 24, 2007 at 8:51 pm
    I have done some benchmarks. Keeping things in an array makes the entire
    search, including postprocessing from first to last id for a big result set,
    extremely fast. So I would really like to implement this approach. But I'm
    concerned about what Yonik remarked. I could use a large mergeFactor but
    anyway, just to be sure, is there a way to make the index inform my
    application of merging events?

    Cheers,
    Carlos
  • Chris Hostetter at May 24, 2007 at 9:15 pm
    : extremely fast. So I would really like to implement this approach. But I'm
    : concerned about what Yonik remarked. I could use a large mergeFactor but
    : anyway, just to be sure, is there a way to make the index inform my
    : application of merging events?

    this entire thread seems to be a discussion about reimplementing the
    FieldCache ... please review that API, it should solve all of your
    problems.



    -Hoss


  • Carlos Pita at May 24, 2007 at 9:39 pm
    Mh, some of my fields are in fact multi-valued. But anyway, I could store
    them as a single string and split after retrieval.
    Will FieldCache work for the first search with some query, or just for the
    successive ones, for which the fields are already cached?

    Cheers,
    Carlos
  • Chris Hostetter at May 24, 2007 at 9:59 pm
    : Mh, some of my fields are in fact multivaluated. But anyway, I could store
    : them as a single string and split after retrieval.
    : Will FieldCache work for the first search with some query or just for the
    : successive ones, for which the fields are already cached?

    The first time you access the cache, it will populate it for every
    document ... it makes the first hit slow, but you can always force the
    first hit arbitrarily prior to using the IndexReader for "real" queries.



    -Hoss


  • Carlos Pita at May 24, 2007 at 10:59 pm
    Nice, I will write the ids into a byte array with a DataOutputStream and
    then marshal that array into a String with a UTF8 encoding. This way there
    is no need for parsing or splitting, and the encoding is space efficient.
    This marshaled String will be cached with a FieldCache. Thank you for your
    suggestions! I will tell you how well this worked as soon as I've
    implemented it.
    Cheers,
    Carlos
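One caveat on the marshaling plan above: pushing arbitrary bytes through a UTF-8 String round-trip is not guaranteed to be lossless (invalid sequences get replaced). A variant that is lossless in memory packs each short into a single char — a sketch only (the `ShortCodec` name is illustrative), and values landing in the surrogate range would still need extra care if the String were itself stored in the index:

```java
public class ShortCodec {
    // Pack each short into one char: lossless in memory,
    // two bytes per value, no parsing or splitting on decode.
    public static String encode(short[] ids) {
        char[] cs = new char[ids.length];
        for (int i = 0; i < ids.length; i++) {
            cs[i] = (char) (ids[i] & 0xFFFF);
        }
        return new String(cs);
    }

    public static short[] decode(String s) {
        short[] ids = new short[s.length()];
        for (int i = 0; i < ids.length; i++) {
            ids[i] = (short) s.charAt(i);
        }
        return ids;
    }
}
```

Decoding is a straight cast per char, which may explain part of the gap Carlos later measures between the FieldCache-plus-unmarshaling approach and the raw array.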
  • 童小军 at May 25, 2007 at 5:55 am
    I have an application that indexes new data into one index directory, and
    some other applications that read the index for data mining. My mining
    application must re-open the index directory; the index is 5 GB, and the
    mining must happen in real time. How can I do this across many computers
    on one network? If I must, should my mining application automatically
    reopen the index? I want to let all applications share one IndexReader or
    Directory instance. Can I use RMI or ICE?

  • Stephen Gray at May 25, 2007 at 6:08 am
    Hi,

    My understanding is that once you have added documents to your index you
    need to close and reopen your IndexReader and Searcher, otherwise the
    documents added will not be available to these.

    You might want to try LuceneIndexAccessor
    (http://www.blizzy.de/lucene/lucene-indexaccess-0.1.0.zip) which is very
    good - this caches a single copy of IndexWriter, IndexReader and
    Searcher, and hands out references to your application. Once the index
    is changed with IndexWriter, and the reference to IndexWriter is
    released, it automatically closes and re-opens the IndexReader and
    Searcher for you.

    Regards,
    Steve



    --
    Stephen Gray
    Archive IT Officer
    Australian Social Science Data Archive
    18 Balmain Crescent (Building #66)
    The Australian National University
    Canberra ACT 0200

    Phone +61 2 6125 2185
    Fax +61 2 6125 0627
    Web http://assda.anu.edu.au/

  • Yonik Seeley at May 24, 2007 at 8:26 pm

    On 5/24/07, Carlos Pita wrote:

    I need it to update the in-mem structure upon more fine-grained index
    changes. Any ideas?

    Currently, a deleted doc is removed when the segment containing it is
    involved in a segment merge. A merge could be triggered on any
    addDocument(), making it difficult to incrementally update anything.

    If you set mergeFactor to a very high number, you could at least control
    when merging occurred (at the expense of generating many segments).

    -Yonik

  • Carlos Pita at May 29, 2007 at 2:15 am
    Hi again,
    On 5/24/07, Yonik Seeley wrote:

    Currently, a deleted doc is removed when the segment containing it is
    involved in a segment merge. A merge could be triggered on any
    addDocument(), making it difficult to incrementally update anything.

    Sorry, but is the document id renumbering caused by a merge visible from an
    already-opened reader? Or can I assume that document ids stay the same
    while my searcher is open, regardless of how many updates or optimizations
    happened after its opening? This way I could regenerate my extra in-memory
    data structures indexed by document id only when the searcher is reopened
    (say, every 1000 updates) instead of after each update.

    Thank you in advance.
    Cheers,
    Carlos
  • Erick Erickson at May 29, 2007 at 1:19 pm
    As far as I know, no changes are visible to an already-opened reader
    so for the life of that reader document IDs are unchanged.

    Erick
  • Otis Gospodnetic at May 24, 2007 at 6:43 pm
    Carlos:
    Answer to your last question: No, but if you look in JIRA, Karl Wettin has
    written something that does have a notification mechanism like the one you
    are describing.

    Otis

    . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
    Simpy -- http://www.simpy.com/ - Tag - Search - Share

    ----- Original Message ----
    From: Carlos Pita <carlosjosepita@gmail.com>
    To: java-user@lucene.apache.org
    Sent: Thursday, May 24, 2007 12:41:11 PM
    Subject: maxDoc and arrays

  • Antony Bowesman at May 25, 2007 at 6:34 am

    Carlos Pita wrote:

    Is there any guarantee that the maxDoc returned by a reader will be close
    to the total number of indexed documents?

    It struck me in this thread that there may be a misunderstanding of the
    relationship between numDocs/maxDoc and an IndexReader.

    When an IndexReader is opened its maxDoc and numDocs will never change
    regardless of the additions or deletions to the index. At least I've not been
    able to make them change in my test cases.

    So, when adding a new document after a reader has been opened, this new document
    is not yet visible via the original reader, so if you are caching that array,
    you would not update that array as it relates to the reader on the index at the
    time the reader was opened.

    When you open a new reader, its numDocs and maxDoc will reflect that
    addition. The same applies to deletions. After opening the new reader, you
    would need to regenerate your array cache.

    As Hoss has said, this is pretty much what FieldCache does and it holds the
    caches keyed by the IndexReader.

    Antony



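The "caches keyed by the IndexReader" behavior Antony describes can be sketched with a WeakHashMap: one entry per reader, built lazily on first access and discarded when the reader is garbage-collected. In this toy model (names are illustrative) the reader is a plain Object so the sketch runs without Lucene on the classpath:

```java
import java.util.Map;
import java.util.WeakHashMap;
import java.util.function.IntFunction;

public class PerReaderCache {
    // Weak keys: when a reader becomes unreachable, its cached
    // per-docId array can be collected too.
    private final Map<Object, float[]> cache = new WeakHashMap<>();

    /** Returns the per-docId array for this reader, building it once. */
    public synchronized float[] get(Object reader, int maxDoc,
                                    IntFunction<Float> loader) {
        return cache.computeIfAbsent(reader, r -> {
            float[] a = new float[maxDoc];          // one slot per docId
            for (int i = 0; i < maxDoc; i++) {
                a[i] = loader.apply(i);             // slow first hit only
            }
            return a;
        });
    }
}
```

This mirrors the point Hoss made earlier: the first access pays the population cost, and every later access against the same reader is an array lookup.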
  • Carlos Pita at May 25, 2007 at 6:59 am
    I see. Anyway, I would update the array when adding a document, so my
    reader would be closed then, and just a writer would be accessing the
    index. Supposing that no merging is triggered (for this I'm choosing a big
    mergeFactor and forcing optimization once a number of documents has been
    added), the numbering will be kept.

    OTOH, I did some tests with FieldCache too. As I have to keep a number of
    ids (an array of shorts) for each document, I tried marshaling them into a
    String so they can be retrieved from the cache later. The performance is
    far better than directly retrieving documents, but still notably behind
    the array approach. Maybe this is due to the unmarshaling step; I'm still
    unsure.

    I will try a hybrid approach where the FieldCache stores just an int which
    is itself the index into the array with the effective ids. This way I
    won't have to deal with index renumbering, and at the same time will keep
    most of the data in memory in its definitive format.

    Thank you for your answer.
    Cheers,
    Carlos
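The hybrid indirection Carlos describes can be sketched as a side table: the index would store only a stable int handle per document (the value a FieldCache would reload per reader), and that handle indexes an in-memory table of short[] rows that docId renumbering never touches. A minimal model, with the `SideTable` name and API as illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.List;

public class SideTable {
    private final List<short[]> rows = new ArrayList<>();

    /**
     * Register a row of ids; the returned handle is the stable int
     * you would store as a field value in the index. DocId changes
     * after merges never invalidate it.
     */
    public int add(short[] row) {
        rows.add(row);
        return rows.size() - 1;
    }

    public short[] get(int handle) { return rows.get(handle); } // O(1)
}
```

At query time the lookup would be two O(1) hops: docId → int handle (via the cache), then handle → short[] row (via the table).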

Discussion Overview
group: java-user
categories: lucene
posted: May 24, '07 at 4:41p
active: May 29, '07 at 1:19p
posts: 22
users: 8
website: lucene.apache.org
