Historically, venue search at foursquare was done in Mongo; we later migrated it to Solr, and now we are using Elasticsearch. While Solr has been great (and we continue to use it for users and to-dos), we want to make sure that we can keep up with the growth of our venues database as our wonderful users keep adding more places.
Elasticsearch provides a really nice way to shard search data and manage the problems that come with it. By sharding we split the documents into groups that each fit easily on a single machine, and then gather results back from the collection of machines. Elasticsearch and Solr have quite a few similarities that come from both being built on top of Lucene. The Elasticsearch team has been super helpful during the migration process, providing ideas on how to tweak our queries to get better performance out of Elasticsearch. We got a non-trivial performance improvement by switching the field type for our geohash to one that lets us skip the analyzer. One of the biggest performance improvements we found was parameterizing our ranking scripts to avoid the per-search compilation overhead. We also saw an improvement when using a custom scoring plugin (written in Scala).
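To illustrate what parameterizing a ranking script means (this is a hypothetical example, not our actual ranking logic, and it uses the current script_score syntax rather than the query syntax we used at the time): by passing values through "params" instead of interpolating them into the script source, the source stays constant, so Elasticsearch can compile the script once and reuse it across searches.

```python
# Sketch of a parameterized scoring query. Field name and weight are
# hypothetical. Because the script "source" never changes between
# searches, it is compiled once and cached; only "params" vary.

def scored_query(popularity_weight):
    return {
        "query": {
            "script_score": {
                "query": {"match": {"name": "coffee"}},
                "script": {
                    # Constant source string -> compiled once, reused.
                    "source": "_score * params.popularity_weight",
                    "params": {"popularity_weight": popularity_weight},
                },
            }
        }
    }

body = scored_query(2.5)
```

The anti-pattern this avoids is building a new source string per search (e.g. `"_score * 2.5"`), which forces a fresh compilation on every request.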
By far the largest improvement came from switching the type of search we were asking Elasticsearch to do. When you search for “blue bottle coffee shop” you probably care more about the terms “blue bottle” matching than the terms “coffee shop” matching, since “blue bottle” doesn't show up very often in comparison to “coffee shop”. By switching the search type from DFS_QUERY_THEN_FETCH to QUERY_THEN_FETCH we use the term frequencies in each shard rather than first gathering a global set of term frequencies. This works really well for us since our venues are fairly uniformly distributed between the shards, but it might not work so well if that isn't the case for your data.
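A toy computation (not Elasticsearch code; shard sizes and document frequencies are made up) shows why this is safe when documents are spread uniformly: the per-shard inverse document frequency of a term stays close to its global value, so skipping the extra term-statistics round-trip barely changes ranking.

```python
import math

# Classic Lucene-style IDF: 1 + ln(num_docs / (doc_freq + 1)).
def idf(doc_freq, num_docs):
    return 1.0 + math.log(num_docs / (doc_freq + 1))

# Two shards holding a uniform split of 2,000 documents (1,000 each).
# "blue" is rare: 5 matching docs per shard (10 globally).
# "coffee" is common: 200 per shard (400 globally).
shard_idf_blue = idf(5, 1000)       # per-shard estimate
global_idf_blue = idf(10, 2000)     # what DFS_QUERY_THEN_FETCH would use

shard_idf_coffee = idf(200, 1000)
global_idf_coffee = idf(400, 2000)

# With a uniform split, each shard's estimate tracks the global value,
# so QUERY_THEN_FETCH ranks almost identically to DFS_QUERY_THEN_FETCH
# while skipping the extra round-trip. A skewed split would break this.
```

If one shard happened to hold most of the “blue bottle” venues, its local IDF would diverge from the global value and per-shard scoring would misrank results, which is why this trade-off depends on your data distribution.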
The migration from Solr to Elasticsearch has really emphasized the usefulness of having easy-to-update throttles. The throttles let us switch code paths in our application for different groups of users without having to do a deploy. As with deploying any large piece of infrastructure, there were a few hiccups during the roll-out* of Elasticsearch, and throttles helped us minimize the impact on end users.
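Our throttle system isn't something we've open sourced, but the core idea can be sketched as follows (all names here are hypothetical): deterministically bucket each user, compare against a percentage that lives in a runtime-updatable store, and pick the code path accordingly.

```python
import hashlib

# Hypothetical throttle table. In a real system this would live in a
# store that can be changed at runtime (config service, database, ...),
# so flipping traffic between backends never requires a deploy.
THROTTLES = {"venue_search_es_pct": 25}

def use_elasticsearch(user_id: str) -> bool:
    """Deterministically place a user in a bucket 0-99 and compare
    against the current throttle percentage."""
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 100
    return bucket < THROTTLES["venue_search_es_pct"]

def search_venues(user_id: str, query: str):
    if use_elasticsearch(user_id):
        return ("elasticsearch", query)  # new code path
    return ("solr", query)               # old, battle-tested code path
```

Hashing the user id (rather than picking randomly per request) keeps each user on a consistent code path, and dialing the percentage back to 0 instantly reverts everyone to the old backend if a hiccup appears.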
As we've previously written about, the majority of our search queries are written using Slashem, an in-house DSL for querying search backends. Rather than rewrite all of our queries for Elasticsearch, we instead updated Slashem to generate queries for Elasticsearch as well as Solr. The Elasticsearch support in Slashem is not as comprehensive as the Solr support, although it's pretty good for basic searching at this point. A lot of the query generation is still fairly naive, but we are working on improving this. As always, patches are welcome :).
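Slashem itself is a Scala DSL, and this is not its actual API; but as a language-neutral illustration of the approach, the trick is that one query description gets rendered into each backend's native syntax, so call sites don't change when the backend does.

```python
# Hypothetical sketch of multi-backend query generation (NOT Slashem's
# real API). The same (field, value) match is rendered two ways.

def to_solr(field, value):
    # Solr accepts Lucene query syntax in the "q" parameter: field:value
    return {"q": f"{field}:{value}"}

def to_elasticsearch(field, value):
    # Elasticsearch takes a JSON query DSL body.
    return {"query": {"match": {field: value}}}

solr_query = to_solr("name", "coffee")
es_query = to_elasticsearch("name", "coffee")
```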
* Although #sfsearch has normally listened to levels for our deploys.