georgemcbay

> and they're very, very harmful, to both adults and children.

And society as a whole. Even if you don't participate, you don't escape the blast radius of the harm they've caused over the past 10-15 years.

> Hopefully it continues to get commoditized to the point where no monopoly can get a stranglehold on it

I believe this is the natural end-state for LLM-based AI. But the danger of these companies even briefly being worth trillions of dollars is that they are likely to start caring about (and throwing lobbying money at) AI-related intellectual-property concerns they never extended to anyone else while building their models. I don't think it's far-fetched to assume they will attempt all manner of underhanded regulatory capture in the window before commoditization would otherwise occur naturally.

All three of OpenAI, Google and Anthropic have already complained about their LLMs being ripped off.

https://www.latimes.com/business/story/2026-02-13/openai-acc...

https://cloud.google.com/blog/topics/threat-intelligence/dis...

https://fortune.com/2026/02/24/anthropic-china-deepseek-thef...

There are a few more steps than that for someone who hasn't yet launched a Play Store app. When I launched a hobby app, I had to find and maintain 20 unique testers (they have since reduced it to 12, I think? It was 20 back in 2024) willing to test the app for 2 weeks before Google let me list it on the Play Store.

Though even this wasn't that hard... I easily found more testers than I needed on a relevant subreddit (the app is a PvP stat tracking app for a videogame, and I found plenty of people willing to sign up to test my app on a subreddit dedicated to that videogame).

> I was listening to an advertisement at the time of my near-death experience.

You'll probably never forget that advertisement, which is an exciting business opportunity for Waymo.

They could partner with Spotify and other media content partners so that the Waymo can generate an adrenaline-rush near crash experience when a premium advertiser's ad is playing. /s (hopefully)

I would say it is privilege (now) combined with denialism (for the future).

Not only do you have to believe that you're in the group that benefits, but you also have to believe that "AI" improvement from here forward will stall out before it goes from assisting your job to replacing it wholesale. I suspect far fewer people actually fall into that group than believe they do.

It is very easy for us to exist in that denialism bubble until we see the machine nipping at our heels.

And that is not even getting into second-order effects: even if you do provide AI-proof value, what happens when some significant percentage of everyone else (your potential customers) loses their income and society starts to crumble?

> I suppose the notion that you could just distribute untested software onto an unlimited amount of other peoples computers without consent wasn't yet considered unethical

As someone who is old enough to have been a teenage hacker back in this timeframe, who spent his time on old Diversi-Dial dialup systems (which led to early internet systems via the GNU/FSF's open-access policy, which in turn led to BITNET Relay), and who was around for the initial development of IRC right around this very year (1988), I can say that it was absolutely considered a bad act to do this sort of thing back then, even as just a prank or demonstration. (That made it kind of cool to back-then me, as a teenager, but it was certainly unethical in a professional sense even for the time.)

... however when you oopsied and the shit hit the fan, you could get away with it if your dad worked for the NSA.

The vast majority of people who weren't RTM would have had a far more severely negative outcome in his situation.