What is a topic you know a lot about that the media often gets terribly wrong?
Oh yeah except politics of course. I should have added that.
I totally agree on the AI front. People watch too many movies. If AI goes wrong in any way, it's going to be because we used it to make a decision that turned out to be a bad one. It's not going to directly and intentionally kill us all.
We don’t yet know how to give an AI system anything like a “goal” or “intention”, not in the general sense that we can say a human has them. We can give an algorithm a hill to climb, a variable to maximize; but traditional algorithms can’t invent new ways to accomplish that.
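The "hill to climb, a variable to maximize" framing above is literal. Here's a minimal hill-climbing sketch (purely illustrative, not any particular system's code) showing what that kind of optimizer does and doesn't do: it follows whatever objective it's handed, and nothing more.

```python
import random

def hill_climb(score, x, steps=1000, step_size=0.1):
    """Greedy hill-climbing: randomly nudge x and keep any change
    that raises score(x). The algorithm can only chase the objective
    it was given; it cannot invent a new objective or new kinds of moves."""
    best = score(x)
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        s = score(candidate)
        if s > best:
            x, best = candidate, s
    return x

# Maximize a toy objective whose peak is at x = 3.
peak = hill_climb(lambda x: -(x - 3) ** 2, x=0.0)
```

The point of the toy: the "goal" lives entirely in the `score` function a human wrote, which is exactly why a badly specified objective is the failure mode to worry about.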
What the well-known current “AI” systems (like GPT and Stable Diffusion) can do is basically extrapolate text or images from examples. However, they’re increasingly good at doing that, and there are several projects, like AutoGPT, working on building AI systems that do interact with the world in goal-driven ways.
As those systems become more powerful, and as people “turn them loose” to interact with the real world in a less-supervised manner, there are some pretty significant risks. One of them is that they can discover new “loopholes” to accomplish whatever goal they’re given – including things that a human wouldn’t think of because they’re ridiculously harmful.
We don’t yet know how to give an AI system “moral rules” like Asimov’s Three Laws of Robotics, and ensure that it will follow them. Hell, we don’t even know how to get a chatbot to never say offensively racist things: RLHF goes a long way, but it still fails when someone pushes hard enough.
If AI systems become goal-driven, without being bound by rules that prevent them from harming humans, then we should expect that they will accomplish goals in ways that sometimes do harm humans quite a lot. And because they are very fast, and can only become faster with more and better hardware, the risk is that they will do so too quickly for us to stop them.
That’s pretty much what the AI Safety people are worried about. None of it is about robots deciding to “go against their programming” and revolt; it’s about them becoming really good at accomplishing goals without also being limited to do so in ways that aren’t destructive to the world we live in.
Put another way: You know how corporations sometimes do shitty things when they’re trying to optimize for making money? Well, suppose a corporation was entirely automated, with no humans in the decision-making loop … and made business moves so fast that human supervision was impossible; in pursuit of goals that become more and more distorted from anything its human originators ever intended; and without any sort of legal or moral code or restrictions whatsoever.
(And one of the moves it’s allowed to do is “buy me some more GPUs and rewrite my code to be even better at accomplishing my broken buggy goal.”)
That’s what the AI Safety people want to prevent. The technical term for “getting AIs to work on human goals without breaking rules that humans care about” is “AI alignment”.
I don’t disagree with you fundamentally, but I do think AI will start changing things in small ways behind the scenes, and it won’t be immediately obvious.
If you are old enough, you’ll remember a time when banks had computers in the back but the tellers still used paper. The loan officer was a person who could use their discretion to approve a loan (signed off on by someone else but you get the idea). Gradually that became “gotta see what the computer says but I can probably make this work” to “it’s all up to the computer”.
Sitting at home in 1982, you aren’t thinking that computers are running the economy but if you’re even remotely aware you know they are altering the credit landscape which is a huge determinant of “the economy”.
I think AI will be like that. We’ll hear about overt things, like the McDonald’s drive-thru being an AI, but we won’t realize that half the shows we watch were written by AI to ensure we can’t help but binge them, and that those product placements are suddenly very persuasive.
We’ll find that clothing designs change to better match factories whose production lines are optimized by AI and robotic clothing production.
Grocery store pricing and product offerings will change to produce maximum profit while also minimizing supply chain waste in ways we hadn’t considered before. Mm, this bean curd and grasshopper chip I saw on that show Netflix recommended is really pretty good and it got delivered for free just as I started the third episode which is only 18 minutes long for some reason.
Chemistry, and science in a broader sense. When you hear “whoa, a new medicine has been found that could cure cancer”, it’s most likely “we have developed a new gadolinium-based compound that has shown efficacy in penetrating cancer cells and could be used to deliver drugs to these areas; however, it has not been tested in humans because it kills rats faster than it cures cancer”.
Almost every science headline was written by someone who never understood science. They just translate some foreign language into words that suit them.
You’re not wrong in general, but in the specific case of “X against Y”, it’s simply bad journalism. Every half-decent journalist should be able to tell that the original research article might be of relevance for the field, but not for the public.
Especially adding anything cancer-related to the headline is just pure evil. They know perfectly well that it will get many people’s hopes up and make them click.
Things that kill cancer include:
Of course, they also kill everything else.
Medical science or research in general, it’s all spun around to get clicks.
When people think there’s a new “superfood” or “recommendation” from doctors every week, they stop trusting doctors. In reality, the processes and recommendations are very robust and take lots of time and research to change. A study will say “we might want to look into X”, and the news will run with “groundbreaking study: X is the sole cause of Y”.
I’m not even an expert. Like you said “Almost every science headline was written by someone who never understood science”
Probably too closely related to politics, but “guns”. “Stand Your Ground” laws. Use-of-force in general.
Too many people mistake corporate policy for law, especially when it comes to responding to armed robbery.
Interestingly, there is one instance where the media usually gets it “right” and the gun community regularly gets it “wrong”. The media often refers to a device as a “silencer”, while many in the gun community insist the same device should actually be called a “suppressor”.
The law regulating these devices (the National Firearms Act) refers to them as “Silencers” or “Firearm Mufflers”. It never calls them “suppressors”. Legally, there is no such thing as a “suppressor”.
Math.
John McCarthy had a saying: He who refuses to do arithmetic is doomed to talk nonsense.
And I can confirm, society talks a lot of nonsense.
P=0
Don’t I get like $1mil for solving that or something? /s
I mean, Vikings were pirates, slavers, and raiders. You can’t really besmirch their reputation.
Now if you want to talk about Norse/Scandinavian peoples in general that’s a different discussion.
This is a huge one in movies and TV shows especially, but part of the problem is that IT security, or counter-security, is not a great spectator event. It’s very dry, does not involve a lot of flashing lights or even really anything on screen except in many cases a command prompt or progress bar, and is in most cases not a quick process.
That said, Mr. Robot, while not perfect, did a really good job of being a more realistic portrayal.
It’s explained later in the story
Spoiler: it’s his dad’s computer store’s name. Or the one Elliot wants to see it as (a classic unreliable-narrator moment).
Did you watch the show?
Mr Robot was his dad’s electric store.
Expectation: “Oh my God. They’re hacking the system! Deploy countermeasures!!!” *furious typing*
Reality: “So, we sent out a phishing test email and had a 61% click rate…”
I’m not actually in IT in my org but I remember one they sent out was “FWD: Your Medicare Benefits Package is Maturing”
Yeah… boomer companies.
We had the opposite problem. Mandatory training by an external company. They sent an email to everyone urging us to click here and do the training, otherwise our company might not be certified!
Even ignoring the pushy text, the entire mail looked sketchy as fuck, generic company name, low res logo of our company badly photoshopped into a banner.
So everyone ignored this obvious spam and our company lost the certification.
Task failed successfully!