Can the AI haters give it a rest already? Yes, I know there are concerns, but as a person with a disability, if I didn’t use every tool that was out there because I had concerns about it, I wouldn’t use anything. All this AI hatred is just cutting off our nose to spite our face.
@technocounselor It's not AI that most people I've heard this from hate. It's the fact people insist AI can and should be used for everything everywhere. There's a time and a place. It's a tool, not a support system and not a replacement for people.
@quanin @technocounselor It's also not AI so much as its implementation. The concerns acknowledged in the original post include: boiling the planet and sapping its dwindling water supply; the cognitive atrophy, already shown by studies, that results from using AI to do your thinking for you; the privacy and unwarranted surveillance risk inherent in using AI to read your confidential letters etc.; and its use to divorce capital from labour and concentrate wealth.
@quanin @technocounselor You may personally view those concerns, in addition to current AI's unreliability, as less important than the empowerment it offers to describe things to visually impaired people, sometimes inaccurately etc., and that is your prerogative. But, given the magnitude of these concerns, I think it is unreasonable to ask people to stop expressing them. A more constructive approach might be to counter-argue how the benefits outweigh them.
@JustinMac84 @technocounselor First, I haven't asked anyone to stop expressing anything. Second, I have no idea what original post you're referring to. The original post I replied to said nothing about that and it's not in the thread. Third, you'll need to look elsewhere if what you're after is a view from nowhere.
@quanin Perhaps things have become mis-threaded or I have replied with an inappropriate syntax. I apologise in either case. The OP I was referring to was the exhortation for everyone to stop hating on AI because of its benefits to disabled people.
@JustinMac84 The post in question explicitly stated that the poster is aware there are concerns. However, you do not need to bring those concerns up every single day. They existed yesterday. They exist today. They will exist tomorrow, even if you say nothing. You are no better than the AI all the time everywhere folks, and both of you need to knock it off.
@quanin It is only because of massive pushback that Mozilla has done its users the courtesy of allowing them to opt out of AI features...for now. I'm not sure what kind of opposition you would, therefore, be okay with. The only alternative I can see would be, "Hey, remember those worries we had about all the negative effects of AI that we stopped talking about because people asked us to? We're just back to point out that
@quanin they're still here and a lot worse. Do you fancy putting the brakes on a bit or should we go back to being quiet?"
@JustinMac84 Scream at the companies, not the users. The users likely already know, and the ones that don't agree with you are probably using it in those concerning ways to begin with. I cannot do anything about the damage AI is doing to the planet. OpenAI can. Yell at them, not me.
@quanin I return to my original point, as a disabled person, I'm not against the benefits AI *might* bring. I am against the negatives. The more users that are alive to those negatives and refuse to use products saddled with those negatives or push back in other ways, the better the final situation might be.
@JustinMac84 And I return to the original point of the thread. If we refused to use every device that was to our benefit because we had concerns, we'd get absolutely nowhere. People have concerns about video games. Should we stop using those, or should we address and/or disprove those concerns? People have concerns about microwaves. Should we stop using those? People have concerns about wifi. Should we stop using that? The list, she goes on.
@quanin I would argue that those concerns don't outweigh the benefits in the other examples you mentioned. If a Microsoft study, a study by the very company forcing us to accept AI, shows that AI produces cognitive decline, isn't that a whole new level of alarming? I return to my point: show me the benefit that outweighs the very real, tangible proven negatives I have outlined. If there are massive benefits I'm missing, happy to adjust my position. Until then...
Microsoft Study Finds AI Makes Human Cognition “Atrophied and Unprepared”

Researchers find that the more people use AI at their job, the less critical thinking they use.

404 Media
@JustinMac84 The difference here is I'm not trying to change your mind. You're trying to change mine. And I'm not saying there aren't concerns. I'm saying every single conversation about and around AI does not need to circle back to those concerns. Yes, we know. You told us yesterday. There comes a point when you're just being a broken record.
@quanin I take that point and I certainly don't want to sound like a broken record, but what is the alternative? I would be happy to see one. We are slightly side-tracked by the fact that I wasn't actually trying to change your mind by my OP, but to explain to the poster that originated this thread why we feel we can't "give it a rest" and that I think expecting such is unreasonable.
@quanin Psychological studies show that minority influence, to be successful, must be consistent, i.e. it must keep pushing its message. It must also be flexible, hence my assertion that, were I shown sizable benefits that stack up against the negatives I've advanced, I would be happy to moderate my position. What is the alternative, therefore, to continuing to raise awareness of the harm AI can do and is doing? Those that don't care won't listen, but those that do, might.
@JustinMac84 The alternative is, as I keep telling you, not bringing this up in every single conversation about AI. Yes, those studies exist. And yes, in 6 months we'll see studies that say the opposite. It's the social media mental health debate all over again. You have made what you believe clear. But here's the thing. It doesn't matter whether I agree with what you believe or not, because nothing that was being discussed in the thread you replied to was arguing for or against what you believe. It became about what you believe when you entered the thread.
@quanin I'm not seeing that. The OP told AI haters to give it a rest because of minor benefits disabled people experience. I think we can both agree that I come under what the OP would class as an "AI hater". Therefore the conversation was absolutely relevant to me and I felt it important to point out that it's not personal against the users, nor is it a blanket hate, from me anyway, of all things AI, merely the current implementation thereof.
@JustinMac84 Right now, you sound like an AI hater. Particularly because you literally came into a thread where the AI haters were being asked to knock it off because this literally comes up in every conversation, and you're basically saying no. For the record, because you apparently won't let this go unless I explicitly say it, I agree with you. And in general AI is making most people lazier, even if you remove all of those other concerns. We still don't need to hear about it in every single AI conversation. That's the broken record.
@quanin I'm sorry it comes off that way. I came into the thread with the specific hope, along with you, of moderating the OP's position. You said it wasn't AI people were against, but the idea it should be used for everything and that it shouldn't replace people. I agreed with you on all the points of that post and wanted to add that it isn't AI as a concept I dislike, but its current implementation.
@quanin I hoped to show her that it isn't the benefits she derives I hate, nor her for using them, but the costs attached to those benefits. I can derive those self same benefits, but don't think the cost is worth it. Do I hate the costs? Absolutely! Hate and oppose them! We need to address those costs with the utmost urgency. If that makes me an AI hater, so be it.
@quanin It's interesting that you mention the social media debate because the same companies pushing AI so hard are currently on trial because of their implementation of social media, i.e. that they make it addictive, cognitively harmful, and have been aware of the mental health risks it poses. Australia's recently banned it for children, the UK wants to do likewise. I think social media and AI fears contextualise and relate to one another.
@JustinMac84 Australia's social media ban for children has nothing to do with actually protecting the children, and neither does the UK's. What age verification laws will actually do, and there are actual studies that also prove this, is grant Meta and companies like that a virtual monopoly over the social media space, preventing smaller startups from competing with them. It's the same reason Meta's also completely onboard with repealing section 230 in the US. It's not about protecting people. It's about protecting Meta. And I'm on purpose ignoring the fact that age verification as it currently exists is also a privacy violation waiting to happen.
@quanin Agreed on all points. I believe social media can harm children, but oppose the means being advanced to do it.
@JustinMac84 Everything is harmful if done in the wrong way, including this conversation. There's a reason the expression is, "everything in moderation, including moderation". We don't need to be actively talking about the harms of that everything in every single conversation about or having to do with that everything. We know. We see the same headlines you do. It's up to the social media companies to help people use them the right way, because government won't do that without also being harmful at worst and ineffective at best. We've been trying to protect the children since COPPA. How're we doing?
@quanin As a parent, I'd say it's up to the parents. While I deplore social media companies building their platforms to be addictive etc., I believe it is my responsibility as a father to keep my child safe. Social media can't bear the responsibility for every bad post and bad actor.
@JustinMac84 See, that's mostly reasonable. Social media doesn't bear any of the responsibility for a bad actor, short of if that bad actor has done something that warrants their removal (as defined by the social media company's policies, not by your feelings as a parent). Because a lot of the problem is there's a lot of shit we, as a society, don't talk about. So kids end up talking about it to people on social media. Eating disorders? We don't talk about that with people. So into the local Facebook group they go. Anxiety? Not in my house. So onto TikTok they go. Your son might actually be your daughter? Not here. So onto WhatsApp they go. And the problem with saying outright "children are no longer allowed on social media" is now, they don't even have that as an option. So, they can't talk about it at home because that's not talked about here, and they can't talk about it on social media because it's illegal. And, I mean, you were a kid once too. You know damn well the best way to guarantee your kid does something is to make doing that something as difficult as possible.
@quanin See, I think we agree more than we disagree. I was in favour of an outright under-16s social media ban. Then I listened to NPR's Consider This and a report on the NSPCC's position that the approach should be more nuanced, and I agree. There are no easy answers around social media other than that platforms should stop harmful attention-grabbing methods. I am opposed to age verification and VPN clampdowns to achieve any of it, though.
@JustinMac84 And see, I think we need to take about 6 steps back in much the same way with AI. Yes, these are problems. But screaming about them being problems only results in governments coming up with solutions that are as helpful as their age verification measures - some of which, as it happens, also use AI. The only thing that I, as a user can do, to contribute to fixing the problems with AI directly is... well, never using any AI service. And at that point, the benefits I may or may not be extracting from AI are irrelevant because I want to solve those concerns. That, right there? That's how your position reads.
@quanin I didn't understand the last of what you said. My position reads how?
@JustinMac84 Your position reads like this: "These concerns with AI exist, therefore, stop using AI". And we should either agree with those concerns and thus stop using AI, or justify why the benefits we're receiving outweigh those concerns. Because if you, as a user, want to directly contribute to addressing those concerns, not using AI is your only option. And at that point, any benefits you're extracting from AI do not matter.
@quanin I would modify that assessment to specify AI in its current implementation and, rather than stop using, I would say don't use or use as little as possible. Rather than saying the benefits to you don't matter, I would say the benefits are outweighed by the costs, both to you personally and to society. Otherwise, I would say that's pretty accurate.
@JustinMac84 "stop using" and "don't use" are basically the same thing, Justin. What you're saying is essentially, if you haven't started, don't, and if you have, stop. If you use AI, then those concerns are not a priority. That is an absolutist position, and that's what turns people off.
@quanin The crucial difference I was trying to get across is that, if you have to use it, use it as little as possible. To me that is different from don't use, at all, ever. It acknowledges that there may be a need for use, but that it would be in everyone's interests, except perhaps the people pushing the tech, if that use was heavily moderated. Perhaps it is an absolutist, or virtually absolutist, position. I'm not seeing a practicable third way atm.
@quanin With all the info I have alluded to here, knowing that the same companies pushing AI have deliberately and knowingly made all the rest of their stuff addictive, and that they don't want AI regulated at all, hence Meta spending 65 mil to endorse AI-friendly politicians, to me, saying a little AI is fine is like saying you'll be fine if you only do smack occasionally. To be clear, I neither like nor want to have that view and would love to be wrong.
The ‘Social Media Addiction’ Narrative May Be More Harmful Than Social Media Itself

This week, a major trial kicked off in Los Angeles in which hundreds of families sued Meta, TikTok, Snap, and YouTube, accusing the companies of intentionally designing their products to be addicti…

Techdirt
@quanin Execs have gone on record to say that they know they're making stuff addictive. One such was quoted in "Screen Time Stand-off", a book I am currently reading. Internal TikTok memos were cited in the recent case against them by the US, which wanted to shut them down.
@quanin That! Was an interesting article, and thanks for sharing. My response would be that a both-and approach would seem most appropriate, i.e. tech giants should be restricted from making habit-forming apps and should be punished for having done so, but we should use language around those who have fallen prey to that deliberate, bad-faith practice that doesn't disempower them. This should have the most positive impact.
@quanin Still catching up. The biggest problem I had with the article was that it equates addiction with powerlessness. 'Tain't necessarily so. Addicts overcome addictions all the time. The second issue I had was the reference to chemical addiction. Something doesn't have to be heroin-style chemically addictive to be addictive. Psychological addiction, without a chemical basis, is well-documented, and doesn't have to imply powerlessness.
@JustinMac84 You can't regulate psychology, though. Everything can be habit-forming if you let it. The important question is why are these people leaning so hard into social media? And like I said, the answer could be as simple as they're getting something from social media that they're not getting from the people around them. As for the article, it's not the author trying to link addiction to powerlessness. It's the people pushing the social media addiction angle doing that. The same people who blame social media for the addiction are the same people who blamed drugs before that. You might also be interested in: https://www.techdirt.com/2024/06/07/schools-social-media-ban-backfires-jeopardizing-student-privacy/
Schools’ Social Media Ban Backfires, Jeopardizing Student Privacy

What if banning social media from schools actually put kids at even greater risk? One of the more annoying things in talking about tech policy is how many people refuse to think one step ahead abou…

Techdirt
@quanin Which brings me back to my OP in this thread suggesting that a more constructive way forward might be to allay the substantial concerns so that a non-absolutist position might emerge, rather than suggest that those considerably alarmed, with multiple, independent good reasons to be so, just pipe down and let everyone else get on with it. If it is possible, I am eager to be shown what I'm missing; if not, I see no other alternative than that I'm right.
@JustinMac84 You realize the US only wanted to shut TikTok down because they wouldn't sell to Oracle, right? It's why they no longer want to shut TikTok down. Did you happen to read what I linked you to? Because even the experts can't agree social media is addictive. What's happening is we've trained ourselves to see things that way, so they are. 2% of adult social media users actually display the signs of addiction, but 18% will say they think they're addicted. Tell yourself something often enough and it'll become true.
@quanin Caught up now with this post. Yes, I know the motivation behind that case, but that doesn't mean the internal memos cited don't exist. If a company is trying to develop a platform so habit-forming as to discourage physical movement (my best memory of the quote from the memo I heard on NPR's Up First), how else can you describe that big tech behaviour other than misanthropic and counter to users' interests and health?
@quanin Still processing the article, but a brief summary of my ramblings is: why not simultaneously punish the companies, i.e. the trial is right, and empower the users, i.e. change the language and advance the coping strategies offered to users?
@quanin I would also amplify that position to say that the genie is out of the bottle now. We're stuck with it. Personally, I wish we weren't, but, since we are, I would argue it's our responsibility, both individually at the user level and societally, to use it in as ethical, considerate, and environmentally friendly a way as possible.