Serious ask:

I need a crash-course in AI.

Context: my line manager has been asked to evaluate the use of AI at work. He's come to me to ask if I want to help, as he knows I hate it (and he does too), but I'm...vaguely aware...that not all things that are called "AI" are equal.

(like, the "AI" of NPCs in a game is not the same as the "AI" used to create the type of image we generally call "slop", right?)

We want to make sure we're armed with decent knowledge, because we don't want people to say "Oh, ignore them, they're just haters" if we're talking about something that maybe isn't the "bad" kind of AI (if such a thing exists - I don't know enough to be confident right now).

At the moment, *to the best of my currently limited knowledge*, our AI usage is pretty much limited to people using Gemini to create emails and transcripts of meetings.

(I hope that makes sense)

@neonsnake adjacent to what you asked for, but this is someone doing something similar at their workplace who made a nice, concise list of reasons one might want to avoid AI use. It's very broad, but could be a great starting point for reading more? There's a LOT of good links in the comments.

https://alaskan.social/@seachanger/116281340936546500

wet forest moon folklorist (@[email protected])

Attached: 2 images

I’m working on an AI policy for my org that allows us to opt out of AI note taking and prohibits AI in our comms/storytelling. here is my list of reasons for the policy, but my board is asking me to cite sources. Can you help me with any good references you would cite for any of these? (Or an edit or restatement where I’ve gotten it wrong or inaccurate?) *if you want to argue about why I shouldn’t have this policy kindly crawl into a hole in the ground and cover yourself with soil


@xanna
@seachanger

This is really, really helpful - thank you!

The more technical explanations from some of the other folks (and thank you all, that's a lot of work you've all put in to help me!) are going somewhat over my head, but the general gist appears to be that the output is *at best* shoddy - which is something I sort of "knew", from memes like the "doctors say to eat two rocks per day", but it's really helpful to understand "why".

From what I've gathered, here and from earlier conversations I've had, it's because it just *cannot* assess for accuracy, only for "plausibility".
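To make that "plausibility, not accuracy" point concrete: a toy sketch (nothing like a real LLM, just a word-frequency model on a made-up three-sentence corpus) shows the basic idea - the model only knows which word usually follows which, so it produces statistically likely continuations with no concept of whether they're true:

```python
# Toy illustration (NOT a real LLM): a bigram model that always picks
# the word that most often followed the previous one in its training
# text. It has no notion of truth, only of what usually comes next --
# "plausible" over "accurate".
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog . "
).split()

# Count which word follows each word, and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(word, steps=4):
    """Extend `word` by repeatedly picking the most common next word."""
    out = [word]
    for _ in range(steps):
        candidates = following[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return " ".join(out)

print(continue_text("the"))
```

The output reads like a sentence from the corpus, but the model would generate it just as happily whether or not any cat ever sat on anything - there is no fact-checking step anywhere, only frequency.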

In the case of supply and demand planning, that's not actually a problem - even the best human planner in the world can only ever get to "plausible". I'm not sure if LLMs of the "slop" variety exist to do this - I'm thinking not. For now, without further info, it feels like this might be "machine learning", which a couple of folks have noted as being materially different.
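For a sense of how different that side of things is: demand-planning tools are typically built on small, testable statistical methods rather than anything LLM-like. A minimal sketch of one classic method, simple exponential smoothing (the numbers here are made up):

```python
# Minimal sketch of classical demand forecasting: simple exponential
# smoothing. Each forecast is a weighted blend of the latest actual
# demand and the previous forecast -- a few lines of arithmetic, not
# a language model. All figures below are hypothetical.
def exponential_smoothing(history, alpha=0.5):
    """Return a forecast for the next period from past demand."""
    forecast = history[0]
    for actual in history[1:]:
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast

weekly_demand = [100, 120, 110, 130]  # hypothetical units sold per week
print(round(exponential_smoothing(weekly_demand), 1))
```

The key contrast with LLM output: this forecast can be scored against what demand actually turns out to be, so its accuracy is measurable - whereas there's no equivalent check built into generated prose.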

Given that I work for a company owned by venture capitalists, the "ethics" argument is going to be tricky at best.

BUT - there are two on here that feel usable: 6 and 10. Data breaches and theft? Hellz yeah, I can sell that. That puts "us" on the hook, legally.

10, also - it doesn't save time.

One of the uses that has been suggested is to write "copy" (think: specifications, features and benefits, etc.), which we currently do manually. If we end up spending more time checking and correcting than we save, it's a waste. I also suspect that we won't pick up all the errors - with the best will in the world, people are just going to click "accept" when busy, which is also hugely problematic and could, again, leave us on the hook legally.

Thank you to both!

(and everyone else who has contributed)

@neonsnake oh good, glad it was a helpful thing to connect. Good luck!

@xanna Thank you - fingers crossed.

I think it's going to be damage control. I think we just *are* going to end up using it; at best, my manager and I might be able to limit it to the "least bad" versions.