Beth Carey

@tomorrowsAI
49 Followers
9 Following
204 Posts
co-founder of machine intelligence company Pat Inc - creators of machine readable meaning for the world's languages
@garymarcus I asked whether others, like Steven Marlow, have found 'that other platform' lonelier for AI writers since Gary left.
Languages don't need many words

... but machine learning needs more and more

John Ball inside AI
"Implicit in today's platforms is the requirement for ongoing training, on top of the initial training... the cost of training is high. One of the tech giants we met mentioned that they can spend millions of dollars to create a corpus to use in machine learning!" https://buff.ly/3kEiJEj
The New NLU Industry - John Ball - Medium

Machine learning to support human language interactions has been unsuccessful when measured by its complex setup and lack of accuracy in use. While hopes were high when every major IT corporation…

Medium

RT @[email protected]

*Even if* the knowledge/facts are in the model, doesn't mean it can be retrieved/reasoned over. Super clear example from @[email protected]

🐦🔗: https://twitter.com/MMikeMMa/status/1612564264266633216
Realistically, they are "doing amazing tasks that do not require an understanding of the real world and are consequently utterly incapable of determining whether the statistical patterns they discover are meaningful or coincidental" https://buff.ly/3G95faL
An AI that can "write" is feeding delusions about how smart artificial intelligence really is

GPT-3, which can converse and write compelling text, is more like a pseudo-intelligence than a real AI

Salon
Anthropomorphizing leads us to THINK LLMs are on the continuum to AGI, but are they? "Large Language Models (LLMs) like GPT-3 do not use calculators, attempt any kind of logical reasoning, or try to distinguish between fact and falsehood" https://buff.ly/3G95faL
Dileep George, DeepMind researcher, worried that scaling alone would not be enough to bring us to general intelligence, drawing an analogy with dirigibles like the Hindenburg, which at one point seemed to be outpacing airplane development https://buff.ly/3Z3iodV
An epic AI Debate—and why everyone should be at least a little bit worried about AI going into 2023

A time capsule of AI thought leaders in 2022 gives us a lot to think about, going forward

Marcus on AI
As explained by @[email protected] - even though increasingly large LLM text sequencers look like a path to the holy grail, as per @[email protected]'s paper on the four #AI fallacies, it's not (necessarily) a continuum. The missing piece, understanding, isn't a bolt-on.
"I, long for the day when search engines can reliably spit back text plus genuine, appropriate references....But until those bits all fit together in a way we can trust, I prefer to borrow a memorable phrase from Ariana Grande: Thank U, Next" https://buff.ly/3YUrv0N
Is ChatGPT Really a “Code Red” for Google Search?

Maybe not

The Road to AI We Can Trust
“Humans actively maintain imperfect but reliable world models. LLMs don’t, and that has consequences,” Marcus said. “They can’t be updated incrementally by giving them new facts. They typically need to be retrained to incorporate new knowledge” https://buff.ly/3FYidbo
What we learned about AI and deep learning in 2022

AGI Debate #3, held on Friday, featured talks by scientists discussing lessons from cognitive science and neuroscience.

VentureBeat