Hello,

When I display the Explanation of Lucene scoring, the queryWeight
component of the score sometimes doesn't show, and it seems it isn't even
factored into the score; other times it does appear. That seems wrong.

Has anyone run into this and fixed it?

And why, when the queryWeight is calculated, does the field contain two NGrams?

I am using the following Analyzer:

<analyzer>
  <tokenizer class="solr.LetterTokenizerFactory"/>
  <!--tokenizer class="solr.WhitespaceTokenizerFactory"/-->
  <filter class="fr.splayce.analysis.detection.TokenFilterNgramFactory"
          nGramSize="8"/>
  <!--filter class="solr.StopFilterFactory" words="stopwords.txt"
      ignoreCase="true"/-->
</analyzer>
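For reference, here is a minimal sketch of what I understand the analysis chain to produce: word 8-grams (shingles) over the letter-tokenized text. This is only my assumption about what the custom TokenFilterNgramFactory with nGramSize="8" does; the helper name `word_ngrams` is mine, not part of the library.

```python
def word_ngrams(tokens, n=8):
    """Emit every run of n consecutive tokens as one term
    (a word n-gram, a.k.a. shingle)."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

text = "No doubt my dear friend no doubt but in the meanwhile"
for term in word_ngrams(text.split()):
    print(term)
# The first emitted term is "No doubt my dear friend no doubt but",
# which matches the first fieldWeight term in the Explanation below.
```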

1.0 = (MATCH) fieldWeight(contenu:No doubt my dear friend no doubt but in 0), product of:
  1.0 = tf(termFreq(contenu:No doubt my dear friend no doubt but)=1)
  1.0 = idf(docFreq=1, maxDocs=10)
  1.0 = fieldNorm(field=contenu, doc=0)
1.0 = (MATCH) fieldWeight(contenu:doubt my dear friend no doubt but in in 0), product of:
  1.0 = tf(termFreq(contenu:doubt my dear friend no doubt but in)=1)
  1.0 = idf(docFreq=1, maxDocs=10)
  1.0 = fieldNorm(field=contenu, doc=0)
1.0 = (MATCH) fieldWeight(contenu:my dear friend no doubt but in the in 0), product of:
  1.0 = tf(termFreq(contenu:my dear friend no doubt but in the)=1)
  1.0 = idf(docFreq=1, maxDocs=10)
  1.0 = fieldNorm(field=contenu, doc=0)
1.0 = (MATCH) fieldWeight(contenu:dear friend no doubt but in the meanwhile in 0), product of:
  1.0 = tf(termFreq(contenu:dear friend no doubt but in the meanwhile)=1)
  1.0 = idf(docFreq=1, maxDocs=10)
  1.0 = fieldNorm(field=contenu, doc=0)
1.0 = (MATCH) fieldWeight(contenu:friend no doubt but in the meanwhile suppose in 0), product of:
  1.0 = tf(termFreq(contenu:friend no doubt but in the meanwhile suppose)=1)
  1.0 = idf(docFreq=1, maxDocs=10)
  1.0 = fieldNorm(field=contenu, doc=0)
1.0 = (MATCH) fieldWeight(contenu:no doubt but in the meanwhile suppose we in 0), product of:
  1.0 = tf(termFreq(contenu:no doubt but in the meanwhile suppose we)=1)
  1.0 = idf(docFreq=1, maxDocs=10)
  1.0 = fieldNorm(field=contenu, doc=0)
1.0 = (MATCH) fieldWeight(contenu:doubt but in the meanwhile suppose we talk in 0), product of:
  1.0 = tf(termFreq(contenu:doubt but in the meanwhile suppose we talk)=1)
  1.0 = idf(docFreq=1, maxDocs=10)
  1.0 = fieldNorm(field=contenu, doc=0)
1.0 = (MATCH) fieldWeight(contenu:but in the meanwhile suppose we talk of in 0), product of:
  1.0 = tf(termFreq(contenu:but in the meanwhile suppose we talk of)=1)
  1.0 = idf(docFreq=1, maxDocs=10)
  1.0 = fieldNorm(field=contenu, doc=0)
1.0 = (MATCH) fieldWeight(contenu:in the meanwhile suppose we talk of this in 0), product of:
  1.0 = tf(termFreq(contenu:in the meanwhile suppose we talk of this)=1)
  1.0 = idf(docFreq=1, maxDocs=10)
  1.0 = fieldNorm(field=contenu, doc=0)
4.0 = weight(contenu:"the meanwhile suppose we talk of this annuity meanwhile suppose we talk of this annuity Shall" in 0), product of:
  2.0 = queryWeight(contenu:"the meanwhile suppose we talk of this annuity meanwhile suppose we talk of this annuity Shall"), product of:
    2.0 = idf(contenu: the meanwhile suppose we talk of this annuity=1 meanwhile suppose we talk of this annuity Shall=1)
    1.0 = queryNorm
  2.0 = fieldWeight(contenu:"the meanwhile suppose we talk of this annuity meanwhile suppose we talk of this annuity Shall" in 0), product of:
    1.0 = tf(phraseFreq=1.0)
    2.0 = idf(contenu: the meanwhile suppose we talk of this annuity=1 meanwhile suppose we talk of this annuity Shall=1)
    1.0 = fieldNorm(field=contenu, doc=0)
1.0 = (MATCH) fieldWeight(contenu:suppose we talk of this annuity Shall we in 0), product of:
  1.0 = tf(termFreq(contenu:suppose we talk of this annuity Shall we)=1)
  1.0 = idf(docFreq=1, maxDocs=10)
  1.0 = fieldNorm(field=contenu, doc=0)
1.0 = (MATCH) fieldWeight(contenu:we talk of this annuity Shall we say in 0), product of:
  1.0 = tf(termFreq(contenu:we talk of this annuity Shall we say)=1)
  1.0 = idf(docFreq=1, maxDocs=10)
  1.0 = fieldNorm(field=contenu, doc=0)

Thank you.

Amel.