5 Followers
51 Following
43 Posts

Howdy,

I make computers go 'beep'

Distro: Fedora - KDE spin

@amydentata I hate the fact that whenever I tell someone I work in ML, they either think I'm building the singularity or that it's some sort of crypto fad. It's just pattern matching, guys; anybody who actually works in the field knows its limitations.

The real threat isn't AGI; it's prediction models trained on unprocessed, biased, and stolen data. Companies like Google and Microsoft know this, which is why they try to steer the debate towards science fiction, which is harder to regulate.

This new piece from @nitashatiku is so, so, so good. It explains wtf is going on with Silicon Valley and AI risk. Between the lines, it connects a lot of dots I hadn't worked out for myself yet. Insanely well-written and well-researched.
https://www.washingtonpost.com/technology/2023/07/05/ai-apocalypse-college-students/
How elite schools like Stanford became fixated on the AI apocalypse

Student-led groups focused on AI Safety have popped up at Stanford University and other schools, backed by billionaires fixated on the AI apocalypse

The Washington Post
@badrs @TechDesk The article agrees with you. It goes into a lot of detail about why some people think that's the case, though, which I thought was pretty well done. It's well researched too, which is always nice to see.

@lilithsaintcrow We've used the same definition of AI since as far back as the 1960s. All it means is that a computer can make decisions based on changing input data.

When people talk about AI they usually mean either Machine Learning or Deep Learning, which use either statistical models or neural networks to learn to perform a given task. These are both subsets of AI. Unless we decide to change the definition of AI that we've been using in academia for years, GPT is an AI.
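The broad definition above ("a computer can make decisions based on changing input data") is wider than most people expect. As a toy illustration (my own example, not from the thread): even a trivial rule-based system fits it, which is exactly why classic symbolic/expert systems counted as AI long before neural networks.

```python
# Illustration only: under the broad, 1960s-style academic definition of AI
# ("a computer makes decisions based on changing input data"), even a
# trivial rule-based system qualifies -- no learning involved at all.
def thermostat(temperature_c):
    """Decide an action from a changing input (the current temperature)."""
    if temperature_c < 18:
        return "heat on"
    if temperature_c > 24:
        return "heat off"
    return "hold"

print(thermostat(15))  # -> heat on
print(thermostat(30))  # -> heat off
print(thermostat(21))  # -> hold
```

ML and DL are then the subsets of AI where the decision rule is learned from data rather than written by hand.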

@eniko Just going to use this opportunity to recommend Roadwarden, for anybody who wants to read a very well written game

growth #art #pixelart #ドット絵

if you like my work, you can support me by buying me a coffee for only $4 https://ko-fi.com/maruki

Support maruki on Ko-fi! ❤️

maruki is a pixel artist who makes pixel art of food, nature, and ghosts. Every bit of support counts and helps her on her art journey.

Ko-fi
@rmorey @Infosecben @briankrebs Multimodal models are interesting because they can take effectively any form of information and map it into a shared vector/embedding space, which can then be used to convert those representation vectors into a different medium than the original, e.g. text->audio. However, while these models can build a map of 'meaning', they are still effectively just mapping functions: they map inputs to the most likely outputs without doing much more than that.
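The text->audio hop described above can be sketched with a toy shared embedding space. This is a minimal sketch, not a real model: the encoders are hand-built lookup tables and every vector and filename here is invented for illustration.

```python
# Toy sketch of a shared embedding space: items from two different
# modalities are mapped into the same 2-D vector space, and "translation"
# between modalities is just nearest-neighbour lookup in that space --
# a mapping function, nothing more. All values are made up.
import math

# Hypothetical text encoder: word -> shared embedding space
text_embeddings = {
    "dog":   (0.9, 0.1),
    "siren": (0.1, 0.9),
}

# Hypothetical audio encoder: clip -> the *same* embedding space
audio_embeddings = {
    "bark.wav":  (0.88, 0.12),
    "siren.wav": (0.12, 0.91),
}

def nearest(query, candidates):
    """Return the candidate whose embedding is closest to the query vector."""
    return min(candidates, key=lambda k: math.dist(query, candidates[k]))

def text_to_audio(word):
    """'Translate' text to audio by hopping through the shared space."""
    return nearest(text_embeddings[word], audio_embeddings)

print(text_to_audio("dog"))    # -> bark.wav
print(text_to_audio("siren"))  # -> siren.wav
```

The point of the sketch is that nothing in the pipeline "understands" dogs or sirens; it only measures which stored vector is closest to which, which is the mapping-function argument in miniature.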

I was there for the whole Google Talk era; I even ran my own XMPP server to chat (at first) with my friends on Google Talk.
And I do think history repeats itself :-(

How to Kill a Decentralised Network (such as the Fediverse)
https://ploum.net/2023-06-23-how-to-kill-decentralised-networks.html

How to Kill a Decentralised Network (such as the Fediverse), by Ploum (Lionel Dricot).

@briankrebs As Rodney Brooks said, we keep confusing performance with competence https://spectrum.ieee.org/gpt-4-calm-down
Just Calm Down About GPT-4 Already

And stop confusing performance with competence, says Rodney Brooks

IEEE Spectrum
@rmorey @Infosecben @briankrebs I think the point isn't so much that we can't build a system that can answer those questions, but that we still can't build systems that understand the domain in which they operate. Besides, we don't actually know how well GPT-4 can answer those questions anyway. The only demonstration of its multimodal capabilities was in a paper OpenAI published months ago, at the peak of its hype cycle; that paper was not peer reviewed and had more than a few issues.