RE: https://mastodon.social/@gauthampai/113626318842111574
It's been exactly one year since I mentioned this and boy, have things progressed so much! I ended up building my own super app for feeds, bookmarks, notes and more. More details soon...
This is the reason the "Semantic Web" had a concept of trustworthiness.
From the W3C site:
Not everything found from the Web is true and the Semantic Web does not change that in any way. Truth - or more pragmatically, trustworthiness - is evaluated by each application that processes the information on the Web. The applications decide what they trust by using the context of the statements; e.g. who said what and when and what credentials they had to say it.
Note that this is not the classic LLM hallucination problem, but rather a problem with how the AI handles misleading data.
https://simonwillison.net/2024/Dec/29/encanto-2/
You can check it yourself:
https://www.google.com/search?q=encanto+2
An interesting issue with #LLM training:
Jason Schreier found that Google's AI incorrectly presented a non-existent sequel to the movie Encanto as real. The AI misinterpreted a fan-generated concept from an idea wiki as an actual movie, complete with a non-existent release date and misleading links. This highlights a key limitation of LLM training: the model lacks the ability to distinguish real information from made-up information.