Just in case anyone is thinking that maybe #ChatGPT could engage in moral reasoning, I thought I'd try giving it the trolley problem as originally described by Philippa Foot in 1967, but with a slight modification.

As expected, it gave what would have been a good answer to the original version of the problem, but a terrible answer given my slight modification.

GPTs cannot do moral (or any other) reasoning; they are just generating statistically likely text.

@RDBinns personally I think we should tie OpenAI to the tracks and be done with it
@RDBinns it’s a lazy reader. When you ask how many people are on the side track, it sees there are none and revises its answer.
@RDBinns what a good example to keep. It'll be interesting to see how many problems arise due to people's misunderstanding of what AI does and doesn't do and how it works. It's so enticing to think it's magic.
@RDBinns exactly what you'd want for, say, medical diagnosis, automated driving, or any other decision-making: in an unknown situation, respond as if it were a similar but not quite identical one, and hope it turns out alright 🤦 - thank you for illustrating this in such a concise case.
@RDBinns @UlrichJunker great example of the limitation here. On a funnier note, this reminds me of this other version of the trolley problem.