OpenAI has introduced embeddings, a new endpoint in the OpenAI API, to assist in semantic search, clustering, topic modeling, and classification.
OpenAI’s embeddings outperform top models in three standard benchmarks, including a 20% relative improvement in code search. Embeddings are useful for working with natural language and code because they can be consumed and compared by downstream machine learning models and algorithms.
The new endpoint uses neural network models to map text and code to a vector representation—“embedding” them in a high-dimensional space. Each dimension captures some aspect of the input. Embeddings that are numerically similar are also semantically similar: for example, the embedding vector of “canine companions say” will be more similar to the embedding vector of “woof” than to that of “meow.”
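Numeric similarity between embedding vectors is typically measured with cosine similarity. The sketch below uses invented three-dimensional vectors (real embeddings have hundreds of dimensions) to illustrate the “woof” vs. “meow” comparison; the values are made up for demonstration only.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 = more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for embeddings of the three phrases (values invented).
canine = [0.9, 0.1, 0.2]   # "canine companions say"
woof   = [0.8, 0.2, 0.1]   # "woof"
meow   = [0.1, 0.9, 0.3]   # "meow"

# The dog-related phrases land closer together in the vector space.
assert cosine_similarity(canine, woof) > cosine_similarity(canine, meow)
```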
The company has released three families of embedding models for different functionalities including text similarity, text search, and code search. The models take either text or code as input and return an embedding vector.
Text similarity models
The text similarity models provide embeddings that capture the semantic similarity of pieces of text. These models are useful for many tasks including clustering, data visualization, and classification.
Text search models
The text search models provide embeddings that enable large-scale search tasks, such as finding a relevant document in a collection given a text query. The model embeds the documents and the query separately, and cosine similarity is then used to compare the query against each document. Such embedding-based search generalizes better than the word-overlap techniques used in classical keyword search, as it captures the semantic meaning of the text and is less sensitive to exact phrases or words.
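The document-and-query workflow above can be sketched as follows. The vectors here are invented stand-ins for the output of the separate document and query embedding models; in practice each document and the query would be embedded by the API, and only the ranking step would look like this.

```python
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# Invented embeddings standing in for the output of the document model.
doc_embeddings = {
    "How to adopt a dog":     [0.9, 0.1, 0.1],
    "Quarterly sales report": [0.1, 0.8, 0.2],
    "Cat grooming tips":      [0.2, 0.1, 0.9],
}

# Invented embedding standing in for the query model's output
# for the query "getting a puppy".
query_embedding = [0.85, 0.15, 0.05]

# Rank every document by cosine similarity to the query embedding.
ranked = sorted(doc_embeddings,
                key=lambda doc: cosine(query_embedding, doc_embeddings[doc]),
                reverse=True)
print(ranked[0])
```

Because the ranking compares meanings rather than shared words, a query like “getting a puppy” can surface the dog-adoption document even though they share no keywords.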
Code search models
Code search models provide code and text embeddings for code search tasks. Given a collection of code blocks, the task is to find the relevant code block for a natural language query.
Find the embeddings documentation here.