Seeing all these bad and negative things going on on the internet, let's change the pace: what is something good that you're excited for?

https://lemmy.one/post/1153223

The fediverse is something I’m excited for. It’s still in its infancy, so it’ll be interesting to see how it plays out as it gets more polished and user friendly.

Despite the doom and gloom around AI, I think it’s been really cool. One area where I’ve seen it help is with my relatives: instead of me having to solve their basic tech issues, AI has stepped in. Once it develops into a full-on companion that will do the things they ask it to, I’ll be bothered less and less.

It’s frustrating to see people’s imaginations run wild with AI. They’re not building “sentient” machines. There will never be machines that are sentient in anything other than appearance, and we’re notoriously easy to fool in that way.

My favorite way to describe AI that I’ve heard is “applied statistics.” It’s basically just processing huge amounts of data, very fast, simultaneously, and then presenting conclusions that are usually very likely.
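To make the “applied statistics” point concrete, here’s a deliberately tiny sketch (my own illustration, not how any real model is built): a bigram model that simply counts which word follows which in some text, then “predicts” the statistically most likely continuation. Real LLMs operate on billions of parameters rather than raw counts, but the underlying idea of emitting the likely next token based on patterns in data is the same.

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on vastly more data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Tally how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the word most frequently seen after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- the most common continuation of "the" in the corpus
```

No understanding, no intent, just frequency counts presented as a confident-looking answer, which is the point the comment above is making.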

Yes, it will be used to make weapons that are horrifically efficient, but likewise it will be used to make defenses that are equally efficient.

I think the good will ultimately outweigh the bad. Hopefully by a long shot.

LLMs appear to be spontaneously developing theory of mind, and nobody knows exactly why or how, meaning that ChatGPT and the like may now be able to consider what the user is thinking, opening some avenues for actual manipulation. In the study cited below, GPT-4 solved 75% of ToM tasks, matching the performance of six-year-old children.

Source: arxiv.org/abs/2302.02083

Theory of Mind Might Have Spontaneously Emerged in Large Language Models

We explore the intriguing possibility that theory of mind (ToM), or the uniquely human ability to impute unobservable mental states to others, might have spontaneously emerged in large language models (LLMs). We designed 40 false-belief tasks, considered a gold standard in testing ToM in humans, and administered them to several LLMs. Each task included a false-belief scenario, three closely matched true-belief controls, and the reversed versions of all four. Smaller and older models solved no tasks; GPT-3-davinci-003 (from November 2022) and ChatGPT-3.5-turbo (from March 2023) solved 20% of the tasks; ChatGPT-4 (from June 2023) solved 75% of the tasks, matching the performance of six-year-old children observed in past studies. These findings suggest the intriguing possibility that ToM, previously considered exclusive to humans, may have spontaneously emerged as a byproduct of LLMs' improving language skills.
