You could say it was one of the websites of all time.

Ask six people to identify the ‘soul’ in a piece of art and you’ll get seven different answers. It’s an entirely subjective concept.

Beauty is in the eye of the beholder, as the saying goes. Maybe we should stop gatekeeping what art people enjoy and stop brigading them when they dare to like something ‘real artists’ decide they shouldn’t.

Sure, makes sense. How any car made post-2020 can justify not having wireless Android Auto or CarPlay is beyond me.
Interesting. Terrible how? Slow?

Some things to consider with regard to software like this. If music can be heard, it can be scraped, period. Even if you put up barriers on major streaming platforms or embed “anti-AI” tags, all it takes is:

  • Someone recording the audio with a mic (analog loophole)

  • A downloader that bypasses the protection (e.g., youtube-dl)

  • A source that doesn’t respect the protections (pirate sites, leaks, live audience recordings)

If a human can access it, an AI can be trained on it, even secondhand.

Unlike traditional use cases where clean, labeled data is critical, AI models can learn from messy or partial data. Even if you degrade the quality or watermark it, a model can still extract style, rhythm, melody, and timbre, just as humans can recognize a song through static.
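To make that concrete, here's a toy sketch (using numpy, with a made-up pure tone standing in for "the song"): a simple acoustic feature like dominant pitch survives heavy added static, the same way style-level features can survive degraded or watermarked training data. Real feature extraction is far more sophisticated; this only illustrates the principle.

```python
import numpy as np

def dominant_freq(signal, rate):
    """Return the strongest frequency component of a signal via FFT."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    return freqs[np.argmax(spectrum)]

rate = 8000                                  # samples per second
t = np.arange(rate) / rate                   # one second of "audio"
clean = np.sin(2 * np.pi * 440 * t)          # a 440 Hz tone ("the song")
rng = np.random.default_rng(0)
noisy = clean + 0.8 * rng.normal(size=rate)  # heavy static layered on top

# The dominant pitch is recoverable from both versions.
print(dominant_freq(clean, rate))   # -> 440.0
print(dominant_freq(noisy, rate))   # -> 440.0
```

Degrading the waveform doesn't remove the structure a learner cares about; it just buries it under noise that averages out.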

Also, you can’t control every upload, every sample, every remix, every bootleg. As soon as someone puts your protected content in a place without safeguards, it’s back in the ‘training pool’.

Even if AI models never directly train on your content, they can still learn your style by training on other artists influenced by you, or on users uploading “in the style of” recreations. Protection doesn’t stop style emulation, which is what many people want from AI anyway.

Finally, just because AI avoids your data doesn’t mean it avoids imitating you. You may block scrapers, but unless copyright law adapts to handle stylistic theft, there’s no real recourse when AI replicates your sound or vibe.

Ya fuckin’ think there, champ? You reckon? Maybe? Maybe a bit naive, was it? You think?

Bellend

Outdoor posters, audio advertising on podcasts and streaming services such as Spotify, and partnerships with social media influencers are not covered by the regulations.

That seems quite the loophole, tbh…

Ok, first of all, AI doesn’t “learn” the way humans do. That’s not how AI imaging works. It basically translates images into a form of static computers can read, uses an algorithm to mix those into a new static, then translates it back. That’s entirely different from someone studying what negative space is or learning how to draw hands.

The comparison to human learning isn’t about identical processes, it’s about function. Human artists absorb influences and styles, often without realizing it, and create new works based on that synthesis. AI models, in a very different but still meaningful way, also synthesize patterns based on what they’re exposed to. When people say AI ‘learns from art,’ they aren’t claiming it mimics human cognition. They mean that, functionally, it analyzes patterns and structures in vast amounts of data, just as a human might analyze color, composition, and form across many works. So no, AI doesn’t learn “what negative space means” it learns that certain pixel distributions tend to occur in successful compositions. That’s not emotional or intellectual, but it’s not random either.
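A deliberately tiny sketch (numpy only, with a made-up 4x4 "dataset") of what "statistical representations" means in the paragraph above: the "model" keeps only per-pixel statistics of its training set, and what it generates follows the shared pattern without reproducing any single training image. Real models (diffusion, transformers) are vastly more complex; this only illustrates the statistics-not-copies point.

```python
import numpy as np

# Toy "dataset": tiny 4x4 grayscale images sharing a style --
# bright on the left edge, dark on the right edge.
rng = np.random.default_rng(1)
base = np.tile(np.linspace(1.0, 0.0, 4), (4, 1))   # the shared pattern
dataset = [base + 0.05 * rng.normal(size=(4, 4)) for _ in range(50)]

# "Training": retain only per-pixel statistics, never any single image.
mean = np.mean(dataset, axis=0)
std = np.std(dataset, axis=0)

# "Generation": sample a new image from those statistics.
generated = mean + std * rng.normal(size=(4, 4))

# The output follows the learned pattern (left brighter than right)...
print(generated[:, 0].mean() > generated[:, -1].mean())               # -> True
# ...but is not a copy of any training image.
print(any(np.allclose(generated, img) for img in dataset))            # -> False
```

The point of the sketch: after training, the source images could be deleted and the "model" would generate exactly the same way, because all it holds is aggregate structure.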

Second, posting a picture implies consent for people to see and learn from it, but that doesn’t imply consent for people to use it however they want. A 16-year-old girl posting pictures of her birthday party isn’t consenting to people using them to generate pornography based on her body. There’s also the issue of copyright, which exists to protect your works from just being used by anyone. (Yes, it’s abused by corporations, don’t bother trying to bring that up, I’m already pissed at Disney.) But even when people say specifically that they don’t want their art used for AI, even prominent artists like Miyazaki, that doesn’t stop AI companies from doing something those artists don’t consent to, scraping, with their images.

I agree, posting art online doesn’t give others the right to do anything they want with it. However, there’s a difference between viewing and learning from art versus directly copying or redistributing it. AI models don’t store or reproduce exact images — they extract statistical representations and blend features across many sources. They aren’t taking a single image and copying it. That’s why, legally and technically, it isn’t considered theft. Equating all AI art generation with nonconsensual exploitation like kiddie porn is conflating separate issues: ethical misuse of outputs is not the same as the core technology being inherently unethical.

Also, re your point on copyright, it’s important to remember that copyright is designed to protect specific expressions of ideas, not general styles or patterns. AI-generated content that does not directly replicate existing images does not typically violate copyright, which is why lawsuits over this remain unresolved or unsuccessful so far.

Third, trying to say that it’s only fear over new tech is a bullshit, hand-waving way of dismissing people’s legitimate concerns with the issue. I like new technology and how it can help people. I even like some applications of AI. Using a bakery checkout tool to detect breast cancer is awesome. The problems that have come up with other applications of it are pretty terrible, and you shouldn’t stick your head in the sand about them.

(As an aside, comparing AI-generated slop to all other art is apples and oranges. There’s much more to art than digital images, so saying that an AI image takes less energy to make than a Ming vase, or literally any other pottery for that matter, is a false equivalence. They are not the same even if they have similarities, so comparing their physical costs doesn’t track.)

We’re specifically talking about AI art here, so the comparison and data is still apt.

Fourth, I’m not just talking about people using AI to make lies, I’m talking about AI making lies unintentionally. Like telling people to put glue on pizza to keep the cheese on. Or to eat rocks. AI doesn’t know what’s a joke or misinformation; it will present it as true, and people will believe it if they don’t know any better. It’s inaccurate, and it can’t be accurate because it doesn’t have a filter for its summaries. It’s like typing using only the suggested next word on your cell phone.
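The phone-keyboard comparison can be made literal with a toy bigram model (pure standard-library Python, made-up three-sentence corpus): it picks whichever word most often followed the current one in its training text, with no notion of whether the source sentence was a fact or a joke. Real language models are enormously more capable than this, but the frequency-over-truth failure mode is the same shape.

```python
from collections import Counter, defaultdict

# A tiny corpus where a fact and a joke sit side by side.
corpus = (
    "cheese sticks to pizza . "
    "glue sticks to pizza . "
    "glue sticks to paper ."
).split()

# Count which word follows which -- this is all the "model" knows.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def suggest(word):
    """Return the most common next word, like a phone keyboard."""
    return following[word].most_common(1)[0][0]

# No notion of truth or humor, only frequency:
print(suggest("glue"))     # -> "sticks"
print(suggest("sticks"))   # -> "to"
```

Feed it a joke about glue and pizza often enough and "glue sticks to pizza" becomes, statistically, a perfectly good continuation.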

Concerns about misinformation, environmental impact, and misuse are real. That’s why the responsible use of AI must involve regulation, transparency, and ethical boundaries. But that’s very different from claiming that AI is an ‘eye stabbing machine’. That kind of absolutist framing isn’t helpful. It stifles productive discussion about how we can use these tools in ways that are helpful, including in medicine like you mention.

I didn’t say to get rid of AI entirely; like I said, some applications are great, like the breast cancer detection. But to say that the only issues people have with AI are because of capitalism is incorrect. It’s a poorly working machine, and saying that communism will magically make it not broken, when the problems are intrinsic to it, is a false and delusional statement.

I have never once mentioned capitalism or communism.

Yeah. Have you seen the new INSTER? It shares some of the same design language as the IONIQ, too.
Can they not install an under-road toad tunnel to allow them to cross safely? Seems a more efficient way of doing things.