This is a bit creepy.
Though it is true that there is plenty of misinformation on the web, and that the quest for higher-quality search results is a good one, the prospect that the most used search engine in the world may soon be in the business of defining what is “true” and what is not is of some concern.
That Google, now a large government contractor, would also be defining “truth” on the Net is of particular concern.
Google produces thousands of valuable products, many of which I use every day. But the ability to decipher “truth” on the Net really is a matter of online experience more than anything else. If one uses the Internet regularly for news and the like, one quickly learns how to discern items of value from items of questionable value. This, I think, is the most honest way to discover what is “true” and what is not. It may be messier on the front end, but it seems a better route to the “truth” than an algorithm written by very fallible human beings.
Encoding the “pretense of knowledge” (as F.A. Hayek put it) into the Net would be a terrible thing.
Consider for a moment whether you really want the people behind the video below discerning what is true and what is not.*
(From The New Scientist)
A Google research team is adapting that model to measure the trustworthiness of a page, rather than its reputation across the web. Instead of counting incoming links, the system – which is not yet live – counts the number of incorrect facts within a page. “A source that has few false facts is considered to be trustworthy,” says the team (arxiv.org/abs/1502.03519v1). The score they compute for each page is its Knowledge-Based Trust score.
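To make the idea concrete, here is a toy sketch (not Google's actual system, whose details are in the arXiv paper) of a Knowledge-Based Trust-style score: estimate a page's trustworthiness as the fraction of its extracted facts that agree with a reference knowledge base. The `KNOWLEDGE_BASE` contents and the triple format are illustrative assumptions.

```python
# Toy illustration of a Knowledge-Based Trust-style score.
# Hypothetical reference knowledge base: (subject, predicate) -> object.
KNOWLEDGE_BASE = {
    ("Obama", "nationality"): "USA",
    ("Eiffel Tower", "location"): "Paris",
}

def kbt_score(page_facts):
    """Return the fraction of a page's checkable facts that are correct.

    page_facts: list of (subject, predicate, object) triples extracted
    from the page. Facts absent from the knowledge base are skipped,
    since they cannot be verified either way.
    """
    checked = correct = 0
    for subj, pred, obj in page_facts:
        truth = KNOWLEDGE_BASE.get((subj, pred))
        if truth is None:
            continue  # unverifiable fact; does not affect the score
        checked += 1
        if truth == obj:
            correct += 1
    return correct / checked if checked else None

page = [
    ("Obama", "nationality", "USA"),      # matches the knowledge base
    ("Eiffel Tower", "location", "Rome"), # contradicts the knowledge base
]
print(kbt_score(page))  # 0.5: one of two checkable facts is correct
```

Note the obvious catch this sketch makes visible: the score is only as good as the knowledge base behind it, which is precisely the worry raised above.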
We implore Google to proceed with caution.
*Boston Dynamics is owned by Google.