Leverage machine learning to boost individual results, categories, brands or other groups based on user interactions.
Dynamic boosting learns from your users' behaviour and automatically promotes records or categories that are most commonly interacted with. It enables dynamic re-ranking based on changing preferences and automatically selects the most relevant categories for a search term, eliminating hundreds of rules and hours of maintenance.
Setting up a Dynamic Boost only takes a few minutes. Click on "Relevance" in the main navigation and select "Dynamic boosting" from the sub navigation on the left.
Dynamic boosts have two modes: Record and Category. Record mode boosts individual records, while Category mode boosts aggregate fields. An aggregate field is any grouping of your records, such as brand, tags, collection, or category.
When running a dynamic boost in record mode, Search.io will return the most relevant records based on previous user interactions. This allows the search algorithm to adapt to changing trends without the need to maintain a large set of manual rules.
Instead of recording interactions for individual records, category mode captures the specified aggregate field and determines which value is the best match for a particular query.
For example, given a search for "tv" in an online store, the standard result set might return TVs as well as accessories. However, if most people purchase an actual TV as a result of the query, the machine learning model will learn that products from the "Television" category are more relevant than products from the "Accessories" category. Future searches then automatically boost products from the more relevant category.
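As a conceptual sketch only (not Search.io's actual implementation), category mode can be thought of as counting positive interactions per category value for a query and boosting the categories with the most interactions:

```python
from collections import Counter

def category_boosts(interactions, limit=2):
    """Count interactions per category for a query and return the top
    categories with a normalised boost score (illustrative only)."""
    counts = Counter(interactions)
    total = sum(counts.values())
    return [(cat, n / total) for cat, n in counts.most_common(limit)]

# Interactions recorded for the query "tv": most purchases are Televisions.
events = ["Television"] * 8 + ["Accessories"] * 2
print(category_boosts(events))  # "Television" ranks above "Accessories"
```

Here the "Television" category accumulates the most purchase events, so future searches for "tv" would boost products from that category first.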
If a result is associated with multiple categories, each category is boosted equally.
Boosted items are recorded under this name. The boost name cannot contain spaces or any special characters other than dashes ("-"). Boosted items and categories are returned under this name in the search response, along with a score for each boosted item or category.
List of interaction events to use in the performance calculations. If blank, all events are used. Search.io tracks "click" events by default, but it is possible to track custom events via the API or client libraries.
For example, to use only "click" and "conversion" events for the dynamic boost model, use the following:
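The example itself is missing from this copy of the page. Assuming the field accepts a comma-separated list of event names, it would look something like:

```
click, conversion
```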
Apply the performance boost at an individual record level or aggregated at a category level. This is useful for categories, brands, tags and generally any grouping you may see used in facets or aggregations.
Select a category, brand, or tag field: generally any grouping you may see used in facets or aggregations.
The number of look-back days to use in the performance calculations. A longer window is more accurate; a shorter window is more adaptive but will impact fewer queries. The maximum is 60 days.
Limits the number of boosts to return. For non-aggregate boosts this is the number of records to boost; for aggregate boosts it is the number of categories. The maximum is 40.
The minimum number of positive events required to be considered for a performance boost. For purchase events a very low threshold makes sense; for clicks, a larger number may reduce noise.
Minimum number of negative events to be considered for a performance boost.
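A minimal sketch of how the look-back window and event thresholds might work together to filter candidates. The function and field names here are assumptions for illustration, not part of Search.io's API:

```python
from datetime import datetime, timedelta

def eligible_for_boost(events, now, lookback_days=30,
                       min_positive=1, min_negative=0):
    """Return True if a record has enough recent positive and negative
    events (within the look-back window) to be considered for a boost."""
    cutoff = now - timedelta(days=lookback_days)
    recent = [e for e in events if e["time"] >= cutoff]
    positives = sum(1 for e in recent if e["positive"])
    negatives = sum(1 for e in recent if not e["positive"])
    return positives >= min_positive and negatives >= min_negative

now = datetime(2022, 6, 1)
events = [
    {"time": now - timedelta(days=3), "positive": True},
    {"time": now - timedelta(days=90), "positive": True},  # outside window
]
print(eligible_for_boost(events, now))                  # True
print(eligible_for_boost(events, now, min_positive=2))  # False
```

The 90-day-old event falls outside the default 30-day window, so only one positive event counts; raising the positive threshold to 2 therefore excludes the record.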
The condition needs to be true for the boost to be applied. Conditions are written as Filter expressions. If not specified, the boost will be applied to all search queries.
For example, to apply a dynamic boost only to products in the "Electronics" category, write the following condition:
category = "Electronics"
Dynamic boosting, as described, always tries to generate the best boosts based on what is currently known. In other words, it optimises for exploiting the learnings we already have rather than exploring to generate new ones. Search.io supports a more balanced explore/exploit approach through the reinforcement-learning-rerank query post step. It is configured in a similar way to dynamic-boost itself, but instead of generating boosts it looks at the confidence it has in each result and reranks results to improve the rank of results we have no historical data for, in order to learn whether they are good results or not.
As we gain more confidence in returned results, the reranking has less effect and over time the best performing results will make their way to the top.
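The explore/exploit behaviour described above can be sketched as an uncertainty bonus added to each result's observed performance, UCB-style. This is a conceptual illustration of the idea, not the actual implementation of the query post step:

```python
import math

def rerank(results, total_impressions):
    """Rerank results so that rarely shown (low-confidence) results get
    an exploration bonus, while well-known results rely on observed
    performance. The bonus shrinks as impressions accumulate."""
    def score(r):
        if r["impressions"] == 0:
            return float("inf")  # no historical data: explore first
        bonus = math.sqrt(2 * math.log(total_impressions) / r["impressions"])
        return r["clicks"] / r["impressions"] + bonus
    return sorted(results, key=score, reverse=True)

results = [
    {"id": "a", "impressions": 500, "clicks": 100},  # well known, 20% CTR
    {"id": "b", "impressions": 0, "clicks": 0},      # no historical data
    {"id": "c", "impressions": 400, "clicks": 40},   # well known, 10% CTR
]
print([r["id"] for r in rerank(results, 900)])  # ['b', 'a', 'c']
```

Result "b" is promoted first so the system can learn whether it performs well; once "b" has accumulated impressions, its exploration bonus decays and its observed performance takes over, matching the behaviour the paragraph above describes.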