I don't think that is the case. You will have a single deletion
neighborhood. The number of unique terms in the field is going to be
the size of the union of the deletion neighborhoods of the source terms.
For example, given document A which has field 'X' with value best, and
document B with value jest (and k == 1):
A will generate best, est, bst, bet, bes; B will generate jest, est, jst, jes;
so field FieldXFuzzy contains best, bst, bet, bes, jest, jst, jes, and est
(which is shared by both terms).
I don't think the storage requirement is any greater doing it this way.
For all words in a dictionary and a given number of edit operations, it
generates all variant spellings recursively and saves them as tuples:
each v′ ∈ Ud(v, k) maps to (v, x), where v is a dictionary word and x is
the list of deletion positions that produce v′ from v.
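A sketch of that construction (enumerating deletion position sets with
itertools.combinations rather than recursing, which yields the same
variants; build_index is a hypothetical name):

```python
from itertools import combinations
from collections import defaultdict

def build_index(dictionary, k):
    # Map each deletion variant v' to the tuples (v, x): the source word v
    # and the positions x whose deletion turns v into v'.
    index = defaultdict(list)
    for v in dictionary:
        for d in range(k + 1):
            for x in combinations(range(len(v)), d):
                variant = "".join(c for i, c in enumerate(v) if i not in x)
                index[variant].append((v, list(x)))
    return index
```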
Theorem 5. The index uses O(nm^(k+1)) space, as it stores all the
variants of the n dictionary words of length m with up to k mismatches.
For a query p and edit distance k, first generate the neighborhood
Ud(p, k). Then compare the words in the neighborhood with the index, and
find matching candidates. Compare the deletion positions of each
candidate with the deletion positions in Ud(p, k), using Theorem 4.
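The lookup side can be sketched as below. One caveat: I replace the
paper's Theorem-4 test on deletion-position lists with a plain
edit-distance verification of each candidate, which gives the same final
answer but skips the cheaper position-based filter; all helper names are
mine:

```python
from itertools import combinations
from collections import defaultdict

def variants_with_positions(word, k):
    # Deletion neighborhood of `word`, paired with the deleted positions.
    for d in range(k + 1):
        for x in combinations(range(len(word)), d):
            yield "".join(c for i, c in enumerate(word) if i not in x), x

def build_index(dictionary, k):
    index = defaultdict(list)
    for v in dictionary:
        for variant, x in variants_with_positions(v, k):
            index[variant].append((v, list(x)))
    return index

def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def lookup(index, p, k):
    # Probe the index with every variant of p, then verify candidates.
    candidates = set()
    for variant, _x in variants_with_positions(p, k):
        for v, _positions in index.get(variant, []):
            candidates.add(v)
    return {v for v in candidates if edit_distance(p, v) <= k}
```

For instance, querying "rest" with k = 1 against the best/jest dictionary
matches both words through the shared variant "est".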