Moving Beyond Transformers: Microsoft Enhances Bing Search Results With MEB

Microsoft has recently introduced ‘Make Every feature Binary’ (MEB) to improve its search engine, Bing. MEB is a large-scale sparse model that goes beyond pure semantics and reflects a more nuanced relationship between search queries and documents. To make search more accurate and dynamic, MEB harnesses large volumes of data and accepts an input feature space with over 200 billion binary features.

DNN & Transformers for Bing

The Bing search stack depends on natural language models to improve the core search algorithm’s understanding of user search intent and related web pages. Deep learning computer vision techniques are used to enhance the discoverability of billions of images even when text descriptions or summary metadata does not accompany the queries. Machine learning-based models are used to retrieve captions within the larger text body that answer specific questions.

The introduction of Transformers was a game changer in natural language understanding. Unlike DNN architectures that processed words individually and sequentially, Transformers can understand the context and the relationship between each word and all the other words around it in a sentence. Since April 2019, Bing has incorporated large Transformer models to deliver high-quality improvements.

How does MEB improve search performance?

Transformer-based deep learning models have been favoured due to their advanced understanding of semantic relationships. While these models have shown great promise, they still fail to capture a nuanced understanding of individual facts. Enter MEB.

The MEB model has 135 billion parameters, which help it map single facts to features for a more nuanced understanding. It was trained on more than 500 billion query/document pairs from three years of Bing searches. This gives MEB the capability to memorise the facts represented by the binary features while continuously and reliably learning from a vast amount of data.

Microsoft’s team used heuristics for each Bing search impression to determine whether the users were satisfied with the results. The ‘satisfactory’ documents were labelled as positive samples; other documents in the same impression were labelled as negative samples. For each query-document pair, features were extracted from the query text and the document’s URL, title, and body text. These binary features are then fed to a sparse neural network model trained to minimise the cross-entropy loss between the model’s predicted click probability and the actual click label.
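The labelling-and-loss setup described above can be sketched in a few lines of Python. This is a hypothetical illustration only: `label_impression` and the satisfaction heuristics themselves are not published by Microsoft.

```python
import math

def label_impression(documents, satisfied_doc_ids):
    """Hypothetical helper: documents the heuristics judged satisfactory
    become positive samples (label 1); all other documents shown in the
    same impression become negative samples (label 0)."""
    return [(doc, 1 if doc in satisfied_doc_ids else 0) for doc in documents]

def cross_entropy(p_click, label):
    """Binary cross-entropy between the model's predicted click
    probability and the actual click label."""
    eps = 1e-12  # guard against log(0)
    return -(label * math.log(p_click + eps)
             + (1 - label) * math.log(1 - p_click + eps))

samples = label_impression(["doc_a", "doc_b", "doc_c"], {"doc_a"})
# samples → [("doc_a", 1), ("doc_b", 0), ("doc_c", 0)]
loss = cross_entropy(0.9, 1)  # low loss: confident prediction on a positive
```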

Feature design and large-scale training are key to the MEB model. Traditional numeric features only capture the matching count between query and document. MEB features, by contrast, are very specific and are defined on the N-gram-level relationship between the query and the document. All features are designed as binary features so that manually crafted numeric features can be covered easily. These features are extracted directly from raw text, allowing MEB to perform end-to-end optimisation in a single pass. The current production model uses three major feature types:

  • Query and document N-gram pair features
  • One-hot encoding of bucketised numeric features
  • One-hot encoding of categorical features
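A minimal sketch of how these three binary feature types might be extracted and hashed into one shared sparse id space follows. The function names, the bigram choice, and the MD5-based hashing scheme are all illustrative assumptions, not Microsoft's published pipeline.

```python
import hashlib

def ngrams(text, n=2):
    """Split text into overlapping word N-grams (bigrams by default)."""
    toks = text.lower().split()
    return [" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)]

def feature_id(group, value, space=2 ** 30):
    """Hash a (feature-group, value) pair into a shared sparse id space.
    hashlib is used instead of built-in hash() for run-to-run stability."""
    digest = hashlib.md5(f"{group}|{value}".encode()).hexdigest()
    return int(digest, 16) % space

def extract_features(query, title, bucketised, categorical):
    feats = set()
    # 1. query/document N-gram pair features
    for qg in ngrams(query):
        for tg in ngrams(title):
            feats.add(feature_id("qd_ngram_pair", (qg, tg)))
    # 2. one-hot encoding of bucketised numeric features, e.g. a length bucket
    for name, bucket in bucketised.items():
        feats.add(feature_id("bucketised", (name, bucket)))
    # 3. one-hot encoding of categorical features, e.g. a market code
    for name, cat in categorical.items():
        feats.add(feature_id("categorical", (name, cat)))
    return feats

feats = extract_features("best pizza near me", "best pizza places",
                         {"doc_length_bucket": 3}, {"market": "en-us"})
```

Because every feature is reduced to "id is active or not", hand-crafted numeric signals and raw-text matches end up in the same binary input space, which is the property the article attributes to MEB's feature design.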


MEB is currently running in production for all Bing searches in all regions and languages, making it the largest universal model at Microsoft. Unlike Transformer-based deep learning models such as GPT-3, the MEB model can even learn hidden intents between query and document. It can also identify negative relationships between words or phrases, revealing what users might not want to see for a query.

With the introduction of MEB in Bing, Microsoft reports the following gains:

  • A 2 percent increase in the clickthrough rate (CTR) on the top search results
  • A 1 percent reduction in manual query reformulation
  • An over 1.5 percent reduction in clicks on pagination (the need to click the next-page button)

The MEB model consists of a binary feature input layer, a feature embedding layer, a pooling layer, and two dense layers. Generated from 49 feature groups, the input layer contains 9 billion features. Each binary feature is encoded into a 15-dimension embedding vector. After per-group sum-pooling and concatenation, the vector is passed through the dense layers to produce a click probability estimate.
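The forward pass described above can be sketched with NumPy, shrinking the production dimensions (49 feature groups, 9 billion features) to toy sizes. Apart from the 15-dimension embeddings and the embed → sum-pool → concatenate → two-dense-layers ordering stated in the article, every size, weight, and activation here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM = 15          # 15-dimension embedding per binary feature (from the article)
NUM_GROUPS = 4        # 49 feature groups in production; shrunk for the sketch
FEATURE_SPACE = 1000  # 9 billion binary features in production

embeddings = rng.normal(scale=0.01, size=(FEATURE_SPACE, EMB_DIM))
W1 = rng.normal(scale=0.1, size=(NUM_GROUPS * EMB_DIM, 32))  # assumed hidden size
b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 1))
b2 = np.zeros(1)

def predict_click(groups):
    """groups: list of NUM_GROUPS lists, each holding the active binary
    feature ids for that feature group."""
    # per-group sum-pooling of the active features' embeddings
    pooled = [embeddings[ids].sum(axis=0) if ids else np.zeros(EMB_DIM)
              for ids in groups]
    x = np.concatenate(pooled)           # concatenation -> (NUM_GROUPS * EMB_DIM,)
    h = np.maximum(x @ W1 + b1, 0.0)     # first dense layer (ReLU assumed)
    logit = h @ W2 + b2                  # second dense layer
    return float(1.0 / (1.0 + np.exp(-logit[0])))  # sigmoid -> click probability

p = predict_click([[1, 2], [3], [], [7, 8, 9]])
```

Because the input is sparse (only the active feature ids are embedded and pooled), the per-example cost depends on the handful of active features rather than the billions in the full feature space.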

“If you are using DNNs to power your business, we recommend experimenting with large sparse neural networks to complement those models. This is especially true if you have a large historical stream of user interactions and can easily construct simple binary features,” the team said in a blog.

Copyright Analytics India Magazine Pvt Ltd