It wasn’t for us, though. I speak EU Spanish and I didn’t get a lot of it, nor understand all the symbolism, but the celebration of the language and culture seems self-evident. I didn’t get all the symbolism in the Chinese Olympic opening performance, or countless other cultural performances, but I don’t see it as necessary to understand 100% to be able to appreciate the expression. I don’t particularly care for Bad Bunny’s music, but the show felt to me like a powerful expression of cultural validity on a huge stage in a country where people are facing state violence for looking like they belong to that culture. From what I’ve seen shared online from members of that latino/a community, they feel like it was incredibly powerful for them to see genuine representation at that level. To me that is more important than whether or not I personally understood all the language or symbolism.
“I used to do drugs. I still do drugs but I used to too” - Mitch Hedberg
I think it helps to understand that when some people say “chemicals” in the context of highly processed foods, they mean “industrial additives”.
But nobody means selective breeding when they say GMO. That term emerged specifically to describe the products of genetic engineering. There are plenty of legitimate concerns.
Genetically modified organism - Wikipedia

But it didn’t though. Old apps work just fine. There are plenty of reasons to complain about Apple - but the way they changed architecture twice and did so with impressive backwards compatibility both times is not one of them.

I’ve previously argued that current gen “AI” built on transformers is just fancy predictive text, but as I’ve watched the models continue to grow in complexity it does seem like something emergent that could be described as a type of intelligence is happening.

These current transformer models don’t possess any concept of truth and, as far as I understand it, that is fundamental to their nature. That makes their application far more limited than the hype train suggests, but that shouldn’t undermine quite how incredible they are at what they can do. A big enough statistical graph holds an unimaginably complex conceptual space.

They feel like a dream state intelligence - a freewheeling conceptual synthesis, where locally the concepts are consistent, while globally rules and logic are as flexible as they need to be to make everything make sense.

Some of the latest image and video transformers, in particular, are just mind blowing in a way that I think either deserves to be credited with a level of intelligence, or should make us question more deeply what we mean by intelligence.

I find dreams to be a fascinating place. It often excites people to think that animals also dream, and I find it as exciting that code running on silicon might be starting to share some of that nature of free-association conceptual generation.

Are we near AGI? Maybe. I don’t think that a transformer model is about to spring into awareness, but maybe we’re only a few breakthroughs away from a technology which will pull all these pieces of specific-domain AI together into a working general intelligence.

Squeezing a metal cylinder out my chute sounds a lot less pleasant than just pooping poop.
Ah yes, one of my favourite quotes by Orreleeise: “Overcomine challenges and oeeence ine teisge and rivively renence verover re rescience”
So unrealistic

I read a super interesting series of posts a few months back where someone was exploring the dimensional concept space in LLMs. The jumping-off point was the discovery of weird glitch tokens which would break GPTs, making them enter a tailspin of nonsense, but the author presented a really interesting deep dive into how concepts are clustered dimensionally. I don’t know if any of that means we’re anywhere close to being able to find those conceptual weighting clusters and tune them, but it’s well worth a read for the curious. There’s also a YouTube series which really dives into the nitty gritty of LLMs, much of which goes over my head, but helped me understand at least the outlines of how the magic happens.

(Excuse any confused terminology here, my knowledge level is interested amateur!)

Posts on glitch tokens and exploring how an LLM encodes concepts in multidimensional space. lesswrong.com/…/solidgoldmagikarp-iii-glitch-toke…

YouTube series is by 3Blue1Brown - m.youtube.com/@3blue1brown
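The “concepts clustered dimensionally” idea can be sketched with toy vectors. This is a made-up illustration (the embeddings below are invented, not from any real model; real LLMs use thousands of dimensions), but the mechanism is the same: related concepts sit close together in the space, measured by the angle between their vectors.

```python
import numpy as np

# Toy 4-dimensional "embeddings" -- entirely made up for illustration.
# Real model embeddings have thousands of dimensions, but the
# clustering idea works identically.
embeddings = {
    "cat":   np.array([0.9, 0.8, 0.1, 0.0]),
    "dog":   np.array([0.8, 0.9, 0.2, 0.1]),
    "tiger": np.array([0.9, 0.6, 0.2, 0.0]),
    "car":   np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine_similarity(a, b):
    """Angle-based similarity: close to 1.0 means the vectors
    point the same way, i.e. the concepts cluster together."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Animal concepts cluster: "cat" is far closer to "dog" than to "car".
print(cosine_similarity(embeddings["cat"], embeddings["dog"]))
print(cosine_similarity(embeddings["cat"], embeddings["car"]))
```

A glitch token, roughly, is a token whose vector ended up somewhere pathological in this space (e.g. barely trained on), so nothing clusters sensibly around it and the model freewheels into nonsense.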
