Can the AI haters give it a rest already? Yes, I know there are concerns, but as a person with a disability, if I didn’t use every tool that was out there because I had concerns about it, I wouldn’t use anything. All this AI hatred is just cutting off our nose to spite our face.
@technocounselor It's not AI that most people I've heard this from hate. It's the fact people insist AI can and should be used for everything everywhere. There's a time and a place. It's a tool, not a support system and not a replacement for people.
@quanin @technocounselor It's also not AI so much as its implementation. The concerns acknowledged in the original post include: boiling the planet and sapping its dwindling water supply; the cognitive atrophy, proven by studies already, that results from using AI to do your thinking for you; the privacy and unwarranted surveillance risk inherent in using AI to read your confidential letters etc; and its use to divorce capital from labour and concentrate wealth.
@quanin @technocounselor You may personally view those concerns, in addition to current AI's unreliability, as less important than the empowerment it offers to describe things to visually impaired people, sometimes inaccurately etc, and that is your prerogative, but, given the magnitude of these concerns, I think it is unreasonable to ask people to stop expressing them. A more constructive approach might be to counter-argue how the benefits outweigh them.
@JustinMac84 @technocounselor First, I haven't asked anyone to stop expressing anything. Second, I have no idea what original post you're referring to. The original post I replied to said nothing about that and it's not in the thread. Third, you'll need to look elsewhere if what you're after is a view from nowhere.
@quanin Perhaps things have become mis-threaded or I have replied with an inappropriate syntax. I apologise in either case. The OP I was referring to was the exhortation for everyone to stop hating on AI because of its benefits to disabled people.
@JustinMac84 The post in question explicitly stated that the poster is aware there are concerns. However, you do not need to bring those concerns up every single day. They existed yesterday. They exist today. They will exist tomorrow, even if you say nothing. You are no better than the AI-all-the-time-everywhere folks, and both of you need to knock it off.
@quanin It is only because of massive pushback that Mozilla has done its users the courtesy of allowing them to opt out of AI features...for now. I'm not sure what kind of opposition you would, therefore, be okay with. The only alternative I can see would be, "Hey, remember those worries we had about all the negative effects of AI that we stopped talking about because people asked us to? We're just back to point out that
@quanin they're still here and a lot worse. Do you fancy putting the brakes on a bit or should we go back to being quiet?"
@JustinMac84 Scream at the companies, not the users. The users likely already know, and the ones that don't agree with you are probably using it in those concerning ways to begin with. I cannot do anything about the damage AI is doing to the planet. OpenAI can. Yell at them, not me.
@quanin If there is a demand, the tech companies will continue to churn it out. If people accept what they're being offered, use the unreliable tech that hampers the ability to think, and willingly sacrifice their privacy at an exponentially increasing rate, that will validate the investment. If users refuse to use it or limit their use, and if users who want AI to be a good thing but don't want the trade-offs increase the push-back, maybe we'll get somewhere.
@JustinMac84 I have unfortunate news for you. Users, in most cases, aren't the ones creating demand for these things. Companies and governments are. You think I'd even have a GPT account if I didn't suspect employers would make knowledge of how to manipulate AI a hard requirement in two years?
@quanin I don't know how to say this without it sounding like a personal attack, so I hope you will believe that it is not meant as one, but that is complying in advance. You assume that something will happen and therefore pave the way for it, where insufficient uptake might make the eventuality you foresee a non-event. In my view, if making tech compulsory is the only way to get it adopted, it's obviously not very good tech. Good tech should sell itself.