Tuning RediSearch FT.SEARCH (prefix matching) performance for MINPREFIX 1, 2 or 3: any suggestions?

Hello RediSearch 2 gurus!

I have a read-only index (the index is pre-populated before read-only traffic starts):

 FT.CREATE myindex ON HASH STOPWORDS 0 SCHEMA MyField TEXT PHONETIC dm:en

Each document stores up to 10 tokens/words in the MyField field. Each token is a number or an English word.

The index (myindex) has around 14.2M keys/documents and currently runs on a single RediSearch instance. Total memory usage is 9.51 GB.

The index is queried with the following command:

(@MyField:(token1*..... )=> { $weight: 1.0; $slop: 5; $inorder: false; $phonetic: true})

There could be multiple tokens in the query:

(@MyField:(token1* token2* token3* .... )=> { $weight: 1.0; $slop: 5; $inorder: false; $phonetic: true})

Basically I have to do a full-text search (with prefix matching) to provide something like autocomplete/suggest. Unfortunately I cannot use the Suggestions API, as the queried token can appear anywhere in the document (beginning, middle, end, etc.).
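For reference, here is a minimal sketch of how the query string gets built client-side before being passed to FT.SEARCH. The helper and its min_prefix_len guard are my own illustration (not anything RediSearch requires): tokens shorter than the threshold are matched exactly instead of being expanded as prefixes, which is one mitigation I'm considering.

```python
def build_query(tokens, field="MyField", min_prefix_len=3):
    """Build an FT.SEARCH query string for prefix matching.

    Tokens shorter than min_prefix_len are matched exactly rather than
    expanded as prefixes -- the threshold is an illustrative assumption,
    meant to avoid huge expansions on 1-2 character tokens.
    """
    parts = [t + "*" if len(t) >= min_prefix_len else t for t in tokens]
    body = " ".join(parts)
    return (f"(@{field}:({body})) => "
            "{ $weight: 1.0; $slop: 5; $inorder: false; $phonetic: true }")
```

With min_prefix_len=3, build_query(["abb", "12"]) produces a query where "abb" is a prefix match but "12" is an exact match.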

I am getting very slow responses for queries where a token has very few (1, 2, or 3) characters.
For example (info from slowlog):
-> (@MyField:(1*)) - 1136.98 ms
-> (@MyField:(12*)) - 553.58 ms
-> (@MyField:(123*)) - 46.35 ms
-> (@MyField:(a*)) - 396.02 ms
-> (@MyField:(ab*)) - 177.05 ms
-> (@MyField:(abb*)) - 60.58 ms

I understand this is due to the nature of prefix search and the number of documents.

However, I would like to understand the obvious things that could improve this search. For example:

  • would limiting MAXEXPANSIONS to, say, 100 or even lower help?
  • partitioning data across multiple Redis instances; if so, how many, given the document count and memory usage above?
  • caching slow responses in Redis (for some period of time)?
  • any other…?
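On the caching idea, this is roughly what I have in mind (an illustrative sketch only; in real code the serialized reply would be stored in Redis itself with SET ... EX ttl, here a plain dict with expiry timestamps stands in for it, and the "ftcache:" key prefix is my own invention):

```python
import hashlib
import time

class QueryCache:
    """TTL cache for slow FT.SEARCH replies, keyed by a hash of the query.

    A plain in-process dict stands in for Redis here; the same idea maps
    to GET / SET-with-expiry against a separate cache keyspace.
    """

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}  # cache key -> (expiry timestamp, cached reply)

    @staticmethod
    def key_for(query):
        # Deterministic cache key derived from the full query string.
        return "ftcache:" + hashlib.sha1(query.encode("utf-8")).hexdigest()

    def get(self, query):
        """Return the cached reply, or None on a miss or expired entry."""
        key = self.key_for(query)
        entry = self._store.get(key)
        if entry is None:
            return None
        expiry, reply = entry
        if time.monotonic() > expiry:
            del self._store[key]
            return None
        return reply

    def put(self, query, reply):
        """Cache a reply for this query for the configured TTL."""
        self._store[self.key_for(query)] = (time.monotonic() + self.ttl, reply)
```

The idea would be to check the cache first for the known-slow short-prefix queries and only fall through to FT.SEARCH on a miss.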

Thanks a lot!