> I'd try to use the Bulk API, because 2000 single index operations per
> second is quite a lot, but a bulk of 2000 items in a second isn't that
> much.
Actually, I am already using the Bulk API. However, it doesn't help much, because the indexing thread cannot wait too long to accumulate a batch: we don't know when the next DB update will come, and waiting too long hurts the real-time behavior. I am even thinking about using an algorithm like the elevator scheduler the Linux kernel uses for I/O.
> - increase the flush settings for your translog

Could you explain this a little more?
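I assume this refers to the index-level translog settings, so that Lucene flushes happen less often under heavy indexing. A sketch of the kind of settings update I have in mind (the exact setting names vary across Elasticsearch versions; values here are illustrative, not recommendations):

```json
PUT /myindex/_settings
{
  "index.translog.flush_threshold_ops": 50000,
  "index.translog.flush_threshold_size": "512mb"
}
```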
I am also thinking about using a NoSQL database as a cache. My main requirement is that all read operations go to the cache, in order to support massive concurrent reads. In my software, MySQL is updated frequently by user actions, but most DB accesses come from the UI, which simply reads some data to display.
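The read-mostly pattern described above is essentially cache-aside: reads hit the cache first and fall back to the database loader, while writes go to the database and invalidate the cached entry. A minimal sketch in Python, using a plain dict where Redis or memcached would sit in practice; `load_from_db` and `write_to_db` are hypothetical callbacks standing in for the MySQL access:

```python
import time

class ReadThroughCache:
    """Cache-aside sketch: serve reads from the cache, fall back to
    the DB on a miss, and invalidate on write so the next read
    picks up the fresh value."""

    def __init__(self, load_from_db, write_to_db, ttl=30.0):
        self._load = load_from_db
        self._write = write_to_db
        self._ttl = ttl
        self._store = {}  # key -> (value, expiry); a real deployment would use Redis

    def get(self, key):
        hit = self._store.get(key)
        if hit is not None and hit[1] > time.monotonic():
            return hit[0]                       # served from cache
        value = self._load(key)                 # cache miss: go to the DB
        self._store[key] = (value, time.monotonic() + self._ttl)
        return value

    def put(self, key, value):
        self._write(key, value)                 # write goes to the DB...
        self._store.pop(key, None)              # ...and invalidates the cached copy
```

Invalidate-on-write (rather than update-on-write) keeps the cache trivially consistent with frequent user-driven updates, at the cost of one extra DB read after each write.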
I don't have very strong requirements on search; all I need is term queries and
You received this message because you are subscribed to the Google Groups "elasticsearch" group.