This idea that somehow search engines _can_ arbitrate "truth" is just so… not how any of this works or could even conceivably work.

The reason that search engines "backstop" with Wikipedia is that Wikipedia is a giant, curated, and mostly audience-appropriate collection of knowledge.

Knowing what is "true" is so incredibly nontrivial.

@hrefna I believe the goal for search engines is easier: focusing attention on credible information. That's a lower bar, and it's been done for decades using techniques like TrustRank:
http://ilpubs.stanford.edu:8090/770/
Google has used this since the beginning -- they didn't rely on raw PageRank for long because it got spammed immediately -- and it's also in Google's mission statement: making information accessible (despite adversaries flooding the zone) and useful (focusing attention on the best stuff).
Combating Web Spam with TrustRank - Stanford InfoLab Publication Server
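The core idea of TrustRank is simple: start from a small, hand-vetted seed set of reputable pages and propagate trust along outlinks, so trust decays with link distance from the seeds. Crucially, a spam page linking *to* trusted pages gains nothing, since trust only flows outward from the seeds. Here's a minimal sketch of that propagation step (the toy graph, parameter values, and variable names are illustrative assumptions; the paper's seed-selection machinery, such as inverse PageRank, is omitted):

```python
def trust_rank(links, seeds, alpha=0.85, iters=50):
    """Propagate trust from a hand-picked seed set along outlinks.

    links: dict mapping page -> list of pages it links to.
    seeds: set of manually vetted, trustworthy pages.
    alpha: damping factor; controls how fast trust decays with distance.
    """
    pages = sorted(set(links) | {q for out in links.values() for q in out})
    # All initial trust mass sits on the seed set; unlike PageRank's
    # uniform teleport vector, non-seeds get zero baseline trust.
    d = {p: (1.0 / len(seeds) if p in seeds else 0.0) for p in pages}
    t = dict(d)
    for _ in range(iters):
        # Each page keeps a (1 - alpha) share of its seed trust...
        nxt = {p: (1 - alpha) * d[p] for p in pages}
        # ...and splits an alpha share of its current trust over its outlinks.
        for p in pages:
            out = links.get(p, [])
            if out:
                share = alpha * t[p] / len(out)
                for q in out:
                    nxt[q] += share
        t = nxt
    return t

# Hypothetical toy web: a trusted hub and a good page link to each other;
# a spam farm links out to the good page (to look legitimate) but receives
# no links from the trusted region.
graph = {
    "trusted-hub": ["good-page"],
    "good-page": ["trusted-hub"],
    "spam-farm": ["good-page", "spam-farm-2"],
    "spam-farm-2": ["spam-farm"],
}
scores = trust_rank(graph, seeds={"trusted-hub"})
```

Note the asymmetry that makes this more spam-resistant than plain PageRank: the spam pages end up with zero trust no matter how aggressively they link to reputable sites, because no trusted page links back to them.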

@glinden @hrefna The really unfortunate thing is that the spammers never got the memo about how things changed, so they're still spamming every forum and web form that looks like a comment box in the hope that the resulting links won't be rel="nofollow" -- and it's made the Internet a much worse place overall.

I mean it’s the spammers’ fault, but like, it’d be nice if their efforts were even meaningful towards their goal and not just shitting on lawns unnecessarily.