In an interesting observation, a user on Reddit's LocalLLaMA subreddit posted that Perplexity appears to summarise the content of the top 5-10 results from Google Search.
“Search for the exact same thing on google and perplexity and compare the sources, they match 1:1,” the user said.
This suggests that Perplexity runs a Google Search for every user query, extracts the content from the top results, summarises it using an LLM, and returns the response to its users.
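The pipeline described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not Perplexity's actual implementation: `web_search` and `summarise` are stand-ins for a real search-engine API and an LLM call, stubbed here so the example runs on its own.

```python
# Minimal sketch of a search-then-summarise pipeline (hypothetical,
# not Perplexity's actual code). A real system would call a search
# API and an LLM; both are stubbed here for illustration.

def web_search(query, top_k=5):
    # Stand-in for a search-engine call returning (url, snippet) pairs.
    index = [
        ("https://example.com/a", "Perplexity is an answer engine."),
        ("https://example.com/b", "It cites sources for each answer."),
        ("https://example.com/c", "Results are summarised by an LLM."),
    ]
    return index[:top_k]

def summarise(snippets):
    # Stand-in for an LLM call: here we simply join the snippets.
    return " ".join(text for _, text in snippets)

def answer(query, top_k=5):
    # Search, summarise, and return the answer with its sources.
    results = web_search(query, top_k)
    return {
        "answer": summarise(results),
        "sources": [url for url, _ in results],
    }

print(answer("what is perplexity ai?"))
```

Swapping the stubs for a real search index and an LLM is essentially what critics in the thread describe as an "LLM wrapper" around web search.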
CEO of You.com, Richard Socher, also put up his post on X, “We’ve observed this for a while. Pplx does literally just take Google snippets often verbatim. Same with most images they show.”
This is quite similar to what Google does with Gemini. The only difference is that Perplexity hosts different models, such as Claude 3, GPT-4 Turbo, and Mistral Large, for users to choose from. The real question is whether Google gave Perplexity permission to do so.
Another user pointed out that this is an oversimplification of what Perplexity actually offers. "The Co-pilot tool helps to refine or expand the search. A simple search = google result because it's the base case," the user explained. But this only explains why Perplexity is a faster choice.
Several users have been discussing this for a long time on Reddit and X, saying that it is just a front end for web search that summarises results and provides references. "why not just use OpenAI rather than its wrapper," said one user, adding that there is no innovation in the product.
"Isn't this always the case?" asked another user, pointing out that Bing Chat and You.com also just use LLMs to summarise web search results. "The ultimate RAG application. For the search index, they all licence Bing under the hood," he explained.