Meta is working on an ML model to fact-check the 6.5 million articles on Wikipedia. The firm has open-sourced the model and released a demo of the verification tool for users to try.
Rather than examining text strings to check that they contain the same words, the model uses natural language understanding (NLU) techniques to compare the meaning of blocks of text, letting it 'understand' content instead of merely matching it.
As of early 2020, Wikipedia’s English version was receiving 255 million page views per day, making it the eighth-most-visited website. Meta’s employees agree that AI could help make Wikipedia more accurate.
Fabio Petroni, tech lead manager on Meta’s Fundamental AI Research (FAIR) team, said, “What we have done is to build an index of all these web pages by chunking them into passages and providing an accurate representation for each passage. That is not representing word-by-word in the passage, but the meaning of it. That means two chunks of texts with similar meanings will be represented in a very close position in the resulting n-dimensional space where all these passages are stored.”
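The idea Petroni describes can be sketched in a few lines: every passage is mapped to a vector so that passages with similar meanings land close together, and finding supporting evidence becomes a nearest-neighbor lookup in that vector space. This is only an illustrative toy, not Meta's system: the `embed` function below is a hashed bag-of-words stand-in for a real learned encoder, and all names (`embed`, `nearest`, `DIM`) are ours.

```python
# Toy sketch of a passage index with nearest-neighbor lookup.
# embed() is a hashed bag-of-words placeholder for a learned encoder;
# a real system would use a trained neural model to capture meaning.
import hashlib
import math

DIM = 64  # dimensionality of the toy vector space

def embed(passage: str) -> list[float]:
    """Map a passage to a unit-length vector by hashing its tokens."""
    vec = [0.0] * DIM
    for token in passage.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % DIM] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity of two unit vectors is just their dot product."""
    return sum(x * y for x, y in zip(a, b))

def nearest(query: str, index: dict[str, list[float]]) -> str:
    """Return the indexed passage closest to the query in vector space."""
    q = embed(query)
    return max(index, key=lambda p: cosine(q, index[p]))

passages = [
    "Joe Hipp was the first Native American to compete for the WBA heavyweight title.",
    "Wikipedia is a free online encyclopedia edited by volunteers.",
]
index = {p: embed(p) for p in passages}
print(nearest("first Native American heavyweight title challenger", index))
```

A production index would store billions of such vectors and use an approximate nearest-neighbor structure rather than the linear scan shown here, but the retrieval principle is the same.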
Researchers at Meta AI have started developing the building blocks of the next generation of citation tools by training neural networks to pinpoint relevant source material in an internet-size pool of data.
The AI is being trained on four million Wikipedia citations, and its creators hope it will soon be able to recommend trusted sources, drawn from a large and constantly updated index of data. Meta’s artificial intelligence research team plans to keep working on the tool to enhance the online encyclopedia.
Read more about the research project here.