Stop personifying LLMs. It assigns agency and responsibility to a tool, and we all know you can't blame (or credit!) the tools.

Not: "My LLM wrote this code, and then committed it."

But: "I wrote this code and committed it using an LLM."

Getting this language wrong skews our relationship with our tools. It lets us dismiss our responsibility to check and correct the output, while preventing us from crediting ourselves for the final result.

What it means to be human should grow to encompass new tech, not shrink away from it.

For any other tech, this would be absurd.

Does your calculator do your maths for you?
Does your keyboard write for you?

Not generally. You do maths with a calculator, and write with a keyboard.

LLMs are deterministic machines sampling from probability distributions, and they're not intelligent. At least reserve your subservience for properly intelligent robot overlords.

So… Anyone got any tech philosophy podcasts or do I have to start my own? 

@SalvagedTechnic it is your time to shine! 😍

And agreed about personifying LLMs. I read an interesting article about how this language will affect us.