On Fri, Jan 09, 2009 at 08:11:31PM +0100, Karl Wettin wrote:

> SSD is pretty close to RAM when it comes to seeking. Wouldn't that
> mean that a bitset stored on an SSD would be more or less as fast as a
> bitset in RAM?
Provided that your index fits in the system I/O cache and stays there, you
get the speed of RAM regardless of the underlying permanent storage type.
There's no reason to wait for SSDs before implementing such a feature.

One thing we've contemplated in Lucy/KS is a FilterWriter, which would write
out cached bitsets at index time. Adding that on would look something like
this:

public class MyArchitecture extends Architecture {
    public ArrayList<SegDataWriter> segDataWriters(InvIndex invindex,
                                                   Segment segment) {
        ArrayList<SegDataWriter> writers
            = super.segDataWriters(invindex, segment);
        writers.add(new FilterWriter(invindex, segment));
        return writers;
    }
}

public class MySchema extends Schema {
    public Architecture architecture() { return new MyArchitecture(); }

    public MySchema() {
        TextField textFieldSpec = new TextField(new PolyAnalyzer("en"));
        specField("title", textFieldSpec);
        specField("content", textFieldSpec);
    }
}

IndexWriter writer = new IndexWriter(new MySchema().open("/path/to/index"));
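The FilterWriter itself isn't spelled out above. A minimal sketch of the
index-time half might serialize one bitset per filter per segment, so that
searchers can read it back rather than recompute it. This is an
illustration only, not the Lucy/KS API: the class name FilterDump, the
".filt" file suffix, and the use of java.util.BitSet are all assumptions.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.BitSet;

// Hypothetical sketch: dump a per-segment bitset to a file at index time,
// then read it back at search time without recomputing the filter.
public class FilterDump {
    // Persist the bitset under a filter-specific file name in the segment dir.
    static void writeFilter(Path segDir, String name, BitSet bits)
            throws IOException {
        Files.write(segDir.resolve(name + ".filt"), bits.toByteArray());
    }

    // Reconstitute the bitset from its on-disk byte representation.
    static BitSet readFilter(Path segDir, String name) throws IOException {
        return BitSet.valueOf(Files.readAllBytes(segDir.resolve(name + ".filt")));
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("seg_0");
        BitSet bits = new BitSet();
        bits.set(3);
        bits.set(17);
        writeFilter(dir, "category_books", bits);
        BitSet back = readFilter(dir, "category_books");
        System.out.println(back.equals(bits)); // round-trips intact
    }
}
```

Because the file is just the raw bit pattern, a searcher could also mmap it
and wrap the bytes directly, which matters once filters get large.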

This isn't quite the same thing, because I believe you're talking about
adaptively caching filters on the fly at search time. However, I expect this
to work quite well when a finite set of filters is known in advance, e.g. for
faceting categories.
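For the search-time variant being described, the usual shape is to memoize
each computed bitset under its filter key so the cost is paid once. A
minimal sketch, assuming plain java.util collections; the class name
FilterCache and the string keys are illustrative, not any Lucene or
Lucy/KS API:

```java
import java.util.BitSet;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical sketch: compute each filter's bitset on first use at search
// time, then serve the cached copy on every subsequent search.
public class FilterCache {
    private final Map<String, BitSet> cache = new HashMap<>();

    // Returns the cached bitset for this key, computing it only if absent.
    public BitSet get(String key, Supplier<BitSet> compute) {
        return cache.computeIfAbsent(key, k -> compute.get());
    }

    public static void main(String[] args) {
        FilterCache cache = new FilterCache();
        int[] calls = {0};
        Supplier<BitSet> compute = () -> {
            calls[0]++;                 // count how often we really compute
            BitSet bits = new BitSet();
            bits.set(2);
            bits.set(5);
            return bits;
        };
        BitSet first = cache.get("category:books", compute);
        BitSet second = cache.get("category:books", compute);
        System.out.println(calls[0]);        // computed once
        System.out.println(first == second); // same cached instance reused
    }
}
```

The precomputed (index-time) approach wins when the filter set is fixed and
known in advance, as with faceting categories; the cache-on-the-fly approach
handles arbitrary filters but pays the first-hit cost per key.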

Marvin Humphrey

To unsubscribe, e-mail: java-dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-dev-help@lucene.apache.org
