When people tell me generative AI will solve real-world problems
(If you like this kind of thing I think I might be worth a follow, this is basically my entire vibe in a single post)
@CyberneticForests they should have hidden those layers better
@CyberneticForests funny, but technically more accurate would be the people lying on the tracks as input (utilitarian problem) and the levers as output (utilitarian solution), the moving tram is an external constant, so could be put above the whole thing ;)
@CyberneticForests ... then again, with an inference model, the levers would be input and people on the tracks output (i.e. these are the levers people have pulled in examples, and these people have survived as a result); the machine is then run inverted and suggests, for a desired outcome, one possible configuration in which levers could be pulled.
@CyberneticForests and yet a third interpretation, completely in tune with the version here, is that the input resembles instances of unavoidable doom that await their victims, and all hidden layer nodes are levers (which is quite correct). this machine would likely be trained to find a configuration of levers that optimizes for the maximum number of survivors, given a particular configuration of trams on rails. the network would then begin to map the train tracks themselves.
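[Editor's note: the "third interpretation" above can be sketched as a toy program. This is purely illustrative, not anything from the thread — no actual learning here, just a brute-force search over lever settings for a fixed track configuration; all names and the scenario encoding are made up.]

```python
from itertools import product

def best_levers(people):
    """Brute-force the lever configuration that maximizes survivors.

    people[j] = (on_track_a, on_track_b): how many people are tied to
    each branch at junction j. Lever setting 0 sends the tram down
    track A, 1 sends it down track B.
    """
    total = sum(a + b for a, b in people)
    best = None
    for levers in product([0, 1], repeat=len(people)):
        # the tram hits whoever is on the branch each lever selects
        deaths = sum(people[j][s] for j, s in enumerate(levers))
        survivors = total - deaths
        if best is None or survivors > best[1]:
            best = (levers, survivors)
    return best

# hypothetical layout: three junctions, people tied to (track A, track B)
config = [(5, 1), (0, 3), (2, 2)]
levers, survivors = best_levers(config)  # → (1, 0, 0), 10 of 13 survive
```

A trained network would approximate this search from examples rather than enumerating it, which is exactly where the "mapping the train tracks themselves" punchline comes in.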
@lritter You're describing the AI in self-driving cars, aren't you? @CyberneticForests
@lritter @CyberneticForests
@stablehorde_generator
draw for me a representation of a deep neural network but all the input resembles instances of unavoidable doom caused by unstoppable trolleys that awaits their victims, and all hidden layer nodes are levers. the output is optimized numbers of survivors.

@manu @lritter @CyberneticForests Here are some images matching your request
Prompt: a representation of a deep neural network but all the input resembles instances of unavoidable doom caused by unstoppable trolleys that awaits their victims, and all hidden layer nodes are levers. the output is optimized numbers of survivors.
Style: featured

#aiart #stablediffusion #aihorde

@manu @CyberneticForests @stablehorde_generator i'm sorry but as a large language model i am unable to draw neural networks of unstoppable trolleys as that would be unethical
@lritter @stablehorde_generator
Seems like you were right. We received a cute trolley instead.
@lritter but that wouldn't make the joke any more fun
@betalars it would keep energy vampires like me away though
@lritter @betalars It might take the joke out of the illustration but surely none of the fun! 🙆
@lritter @CyberneticForests
My old AI prof (circa 1988) would've loved your extrapolations here & used them to frame a class discussion.
@CyberneticForests I'm more reminded of the good place, Michael's solution to the trolley problem.
@CyberneticForests mom: we have #TrolleyProblem at home. The trolley problem at home:
@CyberneticForests the answer is clearly more trolley cars.
@CyberneticForests
If I were in charge, I'd just hire an outside consultant and blame/credit them for the decision.
@CyberneticForests Maybe it's just a training data issue, we need more people and more trolleys.
@CyberneticForests This is the first time I have actually seen the Trolley Problem represent a real-world policy decision. Usually it's just pointless pseudo-intellectual mental masturbation.

@CyberneticForests The outcome of the algorithm, incidentally, will only be based on the average of the choices that human beings would make

Not exactly comforting

@CyberneticForests 5 trolleys hitting 11 guys 11 times.
Conclusions: trolleys are made of photons
@CyberneticForests probably needs a few more hidden layers

@CyberneticForests @randomgeek When people tell me #trolley thought experiments in #philosophy provide guidance for real-world problems: https://newideal.aynrand.org/why-todays-ethics-offers-no-real-guidance/

(I don't think generative #AI will solve #ethics problems either, just whitewash where the solutions come from)


@CyberneticForests Feed various trolley problems into a small language model to encode(?) them so that other ML models can understand them (like CLIP, I think), then train a model to solve trolley problems. Its training data will come from stuff like Absurd Trolley Problems. [1]

Sorry if this is nonsense, I don't know that much about the specifics of ML. But I think I've got very roughly the right idea.

[1] https://neal.fun/absurd-trolley-problems/
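[Editor's note: a minimal sketch of what the commenter seems to mean, with heavy caveats — there is no real language model or CLIP here. The "embedding" is just a bag-of-words count vector, and the "solver" is nearest-neighbor lookup over two made-up training examples; every name and example is hypothetical.]

```python
import math
from collections import Counter

def embed(text):
    # stand-in for a learned text encoder: word-count vector
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "training data": trolley problems paired with a chosen action
examples = [
    ("five people on the main track one on the side track", "pull"),
    ("one person on the main track five on the side track", "don't pull"),
]

def solve(problem):
    """Answer a new problem with the action of the most similar example."""
    q = embed(problem)
    return max(examples, key=lambda ex: cosine(q, embed(ex[0])))[1]

answer = solve("five people on the main track and one on the side track")
```

Which, of course, only ever averages what the training examples already chose — the same objection raised elsewhere in this thread.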


@CyberneticForests I love it. Here's another that I like quite a bit

@CyberneticForests
The Trolley Problems for AI

😂

jenny (phire) (@[email protected])

Attached: 1 image if you’ve ever wondered what it’s like to work with me, wonder no more


@CyberneticForests
@Binder

Actual conversation I've had with my old AI developer roommate circa 2018

RM: "AI is super cool right now. I'm currently working with a research firm and we've sort of simulated PAIN for our model and we're seeing how it reacts when it receives only a pain stimulus and no reward."

Me: Hey man What the Fuck.

@Nagaram @CyberneticForests @Binder [stares in "Don't Create The Torment Nexus"]

@Nagaram @CyberneticForests @Binder

Ted Chiang (*Stories of Your Life and Others*), interviewed by Ezra Klein, said he hoped we never developed artificial consciousness, because of all the suffering it would mean for the prototype consciousnesses along the way.

@Nagaram @CyberneticForests I've often wondered what the smallest possible digital circuit is that experiences pain.

Fucked up that someone is trying for it.