On Exceptions
why would you follow someone you agree with?
if you want to learn, you search discord.
Searching Discord is precisely the opposite of learning. You lose knowledge every second spent on Discord.
~/~ ~s~
if you want to learn, you search discord.
This is why, when learning guitar, I looked up guitar lessons and then sought out people who didn’t believe learning to play guitar was possible at all, and that ability instead comes down to innate talent and genetics! /s
Seriously, if learning was done by discord, then US politics (and cable news viewers) would be full of absolute scholars, instead of, you know, the exact fucking opposite of that.
guitar example does not work :/
politicians are not genuine in their discourse. Most are there for profit, and they say things that even they don’t believe in 🤷
That isn’t all discord.
Relatedly, if you think social media threads are a great way to learn stuff I don’t know what to tell you other than maybe try picking up a book and see if there’s a difference there.
late reply to your question is, no. OP who is also a mod on this community started arbitrarily deleting my replies, which shows that the discussion here is not genuine and it’s altered to serve mod’s beliefs.
i was listening to this, which made me think of this thread: The Gray Area with Sean Illing: Stop comparing yourself to AI
Episode webpage: www.vox.com/vox-conversations-podcast
Media file: www.podtrac.com/pts/…/VMP8317922785.mp3?updated=1…
The Gray Area with Sean Illing takes a philosophy-minded look at culture, technology, politics, and the world of ideas. Each week, we invite a guest to explore a question or topic that matters. From the state of democracy, to the struggle with depression and anxiety, to the nature of identity in the digital age, each episode looks for nuance and honesty in the most important conversations of our time. New episodes drop every Monday.
AI is whatever, but man, has social media been mind poison.
I say we burn it all down, honestly. Including this place.
Coming up with a genuinely original idea is a rare skill, much harder than judging ideas is. Somebody who comes up with one good original idea (plus ninety-nine really stupid cringeworthy takes) is a better use of your reading time than somebody who reliably never gets anything too wrong, but never says anything you find new or surprising. Alyssa Vance calls this positive selection – a single good call rules you in – as opposed to negative selection, where a single bad call rules you out. You should practice positive selection for geniuses and other intellectuals.
I think about this every time I hear someone say something like “I lost all respect for Steven Pinker after he said all that stupid stuff about AI”. Your problem was thinking of “respect” as a relevant predicate to apply to Steven Pinker in the first place. Is he your father? Your youth pastor? No? Then why are you worrying about whether or not to “respect” him? Steven Pinker is a black box who occasionally spits out ideas, opinions, and arguments for you to evaluate. If some of them are arguments you wouldn’t have come up with on your own, then he’s doing you a service. If 50% of them are false, then the best-case scenario is that they’re moronically, obviously false, so that you can reject them quickly and get on with your life.
Steven Pinker is a black box who occasionally spits out ideas, opinions, and arguments for you to evaluate. If some of them are arguments you wouldn’t have come up with on your own, then he’s doing you a service. If 50% of them are false, then the best-case scenario is that they’re moronically, obviously false, so that you can reject them quickly and get on with your life.
Yes. And. The worst-case scenario is: the black box is creating arguments deliberately designed to make you believe false things. 100% of the arguments coming out of it are false, either containing explicit falsehoods or presenting true facts in such a way as to draw a false conclusion. If you, personally, cannot see that one of its arguments is false, it’s because you lack the knowledge or rhetorical skill to spot how it is false.
I’m sure you can think of individuals and groups whom this applies to.
(And there’s the opposite issue. An argument that is correct, but that looks incorrect to you, because your understanding of the issue is limited or incorrect already.)
The way to avoid this is to assess the trustworthiness and credibility of the black box - in other words, how much respect to give it - before assessing its arguments. Because if your black box is producing biased and manipulative arguments, assessing those arguments on their own merits, and assuming you’ll be able to spot any factual inaccuracies and illogical arguments, isn’t objectivity. It’s arrogance.
paint is poison. I don’t see people making anti-print posts. Not to diss on your antiAI zealotry (i am an asshole, you see; i love antiAI). I just would like to see more antiPrint posts, if the environment is your concern.
When it comes to intellectual property… are you parroting the corporations that profit from it? Is intellectual property the solution to keep creative people alive in an exploitive economy?
it’s not “whataboutism”. An image maker has to take it into consideration.
very few artists use natural materials. If pollution is the concern, then there will be no movies, no touring groups, no streaming, no paintings, no prints, no this… no that.
:) thanks for “triggering me”.
i ask for coherence. If you take it as whataboutism 🤷
Harmfulness of paint doesn’t discredit environmental questions around certain uses of AI.
Textbook whataboutism.
AI is harmful to the environment, to which you responded, NO paint is harmful.
Now you are tripling down. Cut the crap.
to which you responded, NO paint is harmful.
i didn’t write that. I wrote something like this ☞ “AI is harmful to the environment”, yes, as well as some other image generation media. I don’t see, like the example i chose in that post, anti-paint posts.
is whataboutism always irrelevant? Always? AAALLLWWWAAAYYYSSS?
The environment is more my area of interest, so I’m going to focus on that part.
Before I looked into AI’s environmental impacts, I too had thought it might be overfocusing a bit on the wrong areas, but I didn’t realize how much the orders of magnitude had changed. Before the large AI models we’re seeing now, data centers weren’t a major source of change in energy consumption; overall power consumption in places like the US had been mostly level for the previous 10-20 years (up until 2020). But AI is not like most past datacenter workloads: it demands constantly high power. Model training especially runs the equipment at nearly full utilization almost the entire time, using higher-energy chips and far more chips overall. And unlike typical pre-AI datacenter workloads, inference isn’t low-energy per request either. The rapid increase in energy consumption is what’s driving the issue.
It’s causing us to delay the closing of fossil fuel plants. It has made previously declining datacenter energy use stop declining and reverse direction, and it’s projected to increase datacenter energy usage by 165% by 2030.
In Europe too, a data center-led surge in power demand is under way, after 15 years of decline in the power sector. Having surveyed utilities across the continent, Goldman Sachs Research found that the number of connection requests received by power distribution operators (a leading indicator of future demand) has risen exponentially over the past couple of years, mostly driven by data centers.
goldmansachs.com/…/ai-to-drive-165-increase-in-da…
If we were talking about water usage of AI and someone brought up agriculture’s (especially animal agriculture’s) more dominant use, that would be fair to mention and talk about. But that doesn’t excuse AI’s water usage; it just poses another area to also focus on.
I just would like to see more antiPrint posts
Sounds like a niche just waiting for you to fill it, my friend.
My issues with gen AI are fundamentally twofold:
Who owns and controls it (billionaires and entrenched corporations)
How it is shoehorned into everything (decision making processes, human-to-human communication, my coffee machine)
I cannot wait until the check is finally due and the AI bubble pops, collapsing these digital snake oil sellers’ house of cards.
The way they were trained is the way they were trained.
I don’t mean to say that the ethics don’t matter, but you are talking as though this isn’t already present tense.
The only way to go back is basically a global EMP.
What do you actually propose that is a realistic response?
This is an actual question. To this point, the only advice I’ve seen come from the anti-AI crowd is “don’t use it. It’s bad!” And that is simply not practical.
You all sound like the people who think we are actually able to get rid of guns entirely.
Okay, you know those gigantic data centers that are being built that are using all our water and electricity?
Stop building them.
Seems easy.
Guns can be concealed and smuggled.
Compute warehouses the size of football fields that consume huge amounts of electricity and water absolutely can’t. They can all be found extremely easily and shut down, and it would be extremely easy to prevent more from being built.
This isn’t hard.
That’s what the boosters aren’t talking about.
The chatbots are not profitable. They’ll need to charge far more for usage to actually make money; at some point investors are going to demand returns on their investments, and that’s going to come out of users’ pockets.
I’m not sure your “this is the present” argument holds much water with me. If someone stole my work and made billions off it, I’d want justice whether it was one day or one decade later.
I also don’t think “this is the way it is, suck it up” is a good argument in general. Nothing would ever improve if everyone thought like that.
Also, not practical? I don’t use genAI and I’m getting along just fine.
yes, a lot of my immoral actions are because it’s hard or against the grain to be more moral (e.g. being a strict vegan even when traveling or not easily accommodated, or using cars when technically I could bicycle, but on dangerous roads and long distances).
I have definitely spent most of my adult life going against the grain in extreme ways to be a “better” person, but I have been left victimized and disabled for it, so I’m trying to learn to be more moderate and not take big social problems as entirely my personal responsibility. Obviously it’s not one extreme or the other, it’s an interplay between personal and social / structural.
Generative AI and their outputs are derived products of their training data. I mean this ethically, not legally; I’m not a copyright lawyer.
Using the output for personal viewing (advice, science questions, or jacking off to AI porn you requested) is weird but ethical. It’s equivalent to pirating a movie to watch at home.
But as soon as you show someone else the output, I consider it theft without attribution. If you generate a meme image, you’re failing to attribute the artists whose work was used, without permission, to train the AI. If you generate code, that code infringes the numerous open source licenses of the training data by failing to attribute it.
Even a simple lemmy text post generated by AI is derived from thousands of unattributed novels.
Not directly no.
The training data trains an algorithm that effectively just describes an image it sees (which BTW is super useful for blind people) and gives a score for each keyword.
Then the actual generative part takes a random background and tries to denoise it into something recognisable, then shows it to the first algorithm, which gives it a score on how closely it resembles the prompt. Then it does some fancy maths, performs another denoising cycle, and gets another score from the first algorithm; more maths, another cycle, etc., until it spits out an image that matches the prompt.
So the algorithm that generates the image has no data from the training process whatsoever.
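As a toy illustration of the loop described above (random background → denoise → score → repeat), here’s a minimal Python sketch. Everything in it is a stand-in I made up for illustration: `score` and `denoise_step` are simple placeholder functions, not trained neural networks, and `target` stands in for whatever the scoring model would rate as matching the prompt.

```python
import random

def score(image, target):
    # Stand-in for the trained scoring model: higher means the image
    # more closely matches the prompt's target.
    return -sum((a - b) ** 2 for a, b in zip(image, target))

def denoise_step(image, target, step=0.2):
    # Stand-in for one denoising cycle ("the fancy maths"): nudge each
    # pixel toward what the scorer would rate more highly.
    return [a + step * (b - a) for a, b in zip(image, target)]

def generate(target, cycles=50, seed=0):
    rng = random.Random(seed)
    image = [rng.random() for _ in target]  # start from a random background
    for _ in range(cycles):
        candidate = denoise_step(image, target)
        # Keep this cycle's result only if the scorer rates it higher.
        if score(candidate, target) > score(image, target):
            image = candidate
    return image

target = [0.1, 0.9, 0.5]  # hypothetical "what the prompt should look like"
result = generate(target)
```

In a real diffusion setup both stand-ins are trained networks, and the generator is steered each cycle by the scorer’s feedback, which is where the debate in the replies comes in.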
So the algorithm that generates the image has no data from the training process whatsoever.
It gets a, uh, score. You wrote that yourself, I don’t know how you could forget.
It’s so surreal when someone posts a meme about That Guy™ doing That Thing™ and then all of a sudden That Guy™ shows up in the comments, doing That Thing™
Like, can I get your autograph? You’re famous, bro!
so you think that artists, “actually creative people”, don’t use genAI?
🤔
Do y’all hate chess engines?
If yes, cool.
If no, I think you hate tech companies more than you hate AI specifically.