A group of Google researchers has published a very interesting paper, Knowledge-Based Trust: Estimating the Trustworthiness of Web Sources, that proposes a new algorithm to rank web pages. They propose moving from exogenous to endogenous signals, that is, replacing popularity criteria with information-correctness criteria. But how can we know whether the facts contained in a web page are trustworthy? The authors describe a methodology that begins with the automatic extraction of facts from each source. The trustworthiness of these facts is then evaluated “by using joint inference in a novel multi-layer probabilistic model”. They call this trustworthiness score Knowledge-Based Trust (KBT).
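The idea of jointly estimating source trust and fact correctness can be illustrated with a toy truth-discovery loop. This is a drastic simplification of the paper's multi-layer probabilistic model, not the actual KBT algorithm; the sources, facts, and scoring rule below are invented for illustration:

```python
# Toy truth-discovery sketch (NOT the paper's model): source trust and
# fact-value beliefs are updated jointly until they stabilize.

def estimate_trust(claims, iterations=20):
    """claims maps each source to a dict of {fact: asserted value}."""
    trust = {s: 0.8 for s in claims}  # arbitrary initial prior trust
    belief = {}
    for _ in range(iterations):
        # 1) Score each candidate value of each fact by the total trust
        #    of the sources asserting it, then normalize per fact.
        votes = {}
        for source, facts in claims.items():
            for fact, value in facts.items():
                votes.setdefault(fact, {}).setdefault(value, 0.0)
                votes[fact][value] += trust[source]
        belief = {
            fact: {v: w / sum(vals.values()) for v, w in vals.items()}
            for fact, vals in votes.items()
        }
        # 2) A source's trust is the mean belief in the values it asserts.
        trust = {
            source: sum(belief[f][v] for f, v in facts.items()) / len(facts)
            for source, facts in claims.items()
        }
    return trust, belief

# Hypothetical example: site_b asserts one value contradicted by the others.
claims = {
    "site_a": {"capital(France)": "Paris", "capital(Spain)": "Madrid"},
    "site_b": {"capital(France)": "Paris", "capital(Spain)": "Barcelona"},
    "site_c": {"capital(Spain)": "Madrid"},
}
trust, belief = estimate_trust(claims)
```

After a few iterations, the source asserting the contradicted value ends up with lower trust, and the majority value for each fact accumulates the higher belief. The real model handles extraction noise separately from source correctness, which this sketch ignores.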
Does it work? It seems so: the paper reports that a manual evaluation confirms the effectiveness of the method when applied to a database of 2.8B facts extracted from 119M web pages.
Will KBT be the web ranking signal of the future?
University of Barcelona