If you want to know why people don't trust #OpenAI or Microsoft or Google to fix a broken faux-#AGI #chatbot #LLM, consider that a Silicon Valley "health" startup building "#AI"-based suicide-prevention tools regarded using suicidal teens for A/B testing as perfectly fine.

(Aside: This is also where we get when techbros start doing faux-utilitarian moral calculus instead of just not doing obviously unethical shit.)

https://www.vice.com/en/article/5d9m3a/horribly-unethical-startup-experimented-on-suicidal-teens-on-facebook-tumblr-with-chatbot

'Horribly Unethical': Startup Experimented on Suicidal Teens on Social Media With Chatbot

Koko, a mental health nonprofit, found at-risk teens on platforms like Facebook and Tumblr, then tested an unproven intervention on them without obtaining informed consent. “It’s nuanced,” said the founder.

Put another way: I don't want cars (or product managers) solving the Trolley Problem. I want them to understand that roads aren't trolley tracks & the world is not an elegantly constrained utilitarian thought experiment.

Even more than not wanting cars to do it, I don't want an #LLM to solve the #TrolleyProblem.

There's reason to suppose a sample recruited from #MechanicalTurk users isn't so great, but even if the results DON'T bear out, this is terrifying, because these researchers apparently did all this work without it once occurring to them what a horrible idea this would be.

https://www.nature.com/articles/s41598-023-31341-0

[h/t @ct_bergstrom / https://fediscience.org/@ct_bergstrom/110172332118763433]

ChatGPT’s inconsistent moral advice influences users’ judgment - Scientific Reports

ChatGPT is not only fun to chat with, but it also searches information, answers questions, and gives advice. With consistent moral advice, it can improve the moral judgment and decisions of users. Unfortunately, ChatGPT’s advice is not consistent. Nonetheless, it does influence users’ moral judgment, we find in an experiment, even if they know they are advised by a chatting bot, and they underestimate how much they are influenced. Thus, ChatGPT corrupts rather than improves its users’ moral judgment. While these findings call for better design of ChatGPT and similar bots, we also propose training to improve users’ digital literacy as a remedy. Transparency, however, is not sufficient to enable the responsible use of AI.


The #TrolleyProblem is predicated on the assumption that it's possible to know all possible outcomes in a scenario. Just like all other thought experiments. "Broad Logical Possibility," as an old prof used to put it when students would get hung up on the fact that #ThoughtExperiments ALMOST INVARIABLY PRESUME PRIOR CONDITIONS THAT ARE NOT IN ANY CONCEIVABLE WAY POSSIBLE.

In a real self-driving car scenario, the car will almost always be able to do something else besides killing someone.

Put another way: #TrolleyProblem variations always presume binary options. There's no third (or fourth or other) way - as there almost always is in real life.

& that's just the start of the problems with it.

50+ years ago Joseph Weizenbaum warned us that the way we talk about chatbots would shape how we think about the idea of #MachineSentience ("#AGI" wasn't a term then) - which is exactly what's happened in the decades since.

Similarly, even if the point of the #TrolleyProblem is to get us thinking clearly about a particular moral question, the binary constraint on the problem conditions us to restrict ourselves to binary solutions.

@FeralRobots

It occurred to me this morning that for #LLMs at least, the “I” in #AI stands for “improv”

Maybe folks would be a little less likely to entrust their life decisions to a machine if they thought of it as an underemployed half-drunk actor trying to impress its buddies by making jokes on stage in a seedy L.A. nightclub.

@alexch @FeralRobots as someone who has recently gotten into improv... you're not wrong, but for your analogy to make sense you've also gotta remember that this particular actor spent the early years of their career screaming graphic slurs at an exploited and captive audience to try to weed out the worst parts of the act.

@trenchworms @FeralRobots

hey man, everyone’s got their process…!

;-)

@FeralRobots The trolley problem is a way to analyze human choices with the fewest possible confounding variables. Trying to finagle a way out of those two options is an indicator of the problem people have with making the choice. It was never intended to be an indicator of real situations.
@dan613
But it's treated as one by a lot of people who have control of a lot of capital & who have deep influence on how systems are built.
@FeralRobots @ct_bergstrom or worse, they knew it was awful and turned a blind eye to the issue
@FeralRobots Getting input for automated moral analysis from the lowest bidder is kind of like procuring poop for fecal transplants from the lowest-bidding poop-monger.
@ct_bergstrom
@FeralRobots As the heart and brains of any going concern, #productmanagement as a function is in for a lot of soul-searching: https://open.substack.com/pub/paninid/p/metaphysical-product-philosophy?r=4hxgy&utm_medium=ios&utm_campaign=post
Metaphysical Product Philosophy

Getting ahead of misinformation with mystical confidence

@FeralRobots I find the Trolley Problem an amusing discourse on the complete disconnect between academia and reality. Humans will fail the Trolley Problem every time. Why are we even thinking of using it as an example of the "failure" of robotics?
The one thing I am sure of is that the robot is likely not to panic and destroy both options the way a human would.
@Ralph058
My main issue with it is that it's a pure exercise in a way that the real world almost never can be. If you've got a scenario that clear-cut in real life, you're looking at a supervillain or an evil dictator - it's totally coerced, IOW, & so irrelevant to actual moral reasoning.