| pronouns | she (singular) / they (plural) |
| rough location | amsterdam, nl |
| read(s) | english, nederlands, español, deutsch, toki pona |
| avatar by | @[email protected] |
learned today that richard dawkins is having a claude-related episode, and though my "maybe don't laugh at people for considering whether we're obligated to not be awful at llms" post wasn't related (to any news thing)...
i do find the implication hilarious that having to deal with dawkins might constitute unfair cruelty to an ai
this doesn't mean you can't come to conclusions, and then subsequently decide you've seen enough evidence that it's a waste of your time to keep engaging in discussions with most folks around you (about that topic)
this doesn't mean you have to give ground to concern trolls or people who spend time sea lioning
but there are still lots of situations where people really should allow themselves to hang on to uncertainty, rather than just trying to figure out which piece of their internalized rhetoric defines what they should be absolutely sure of in a given situation
a lot of the philosophical points we raise aren't meant to be collapsible down to a single clean answer
people really want to be able to boil their own philosophies down to absolutes; but isn't that just another way of creating dogma?
I want people to learn to sit with uncertainty, sometimes, and to learn to be able to act despite uncertainty, without simply giving up on the tasks of thinking about and gathering evidence
someone replied to a previous post like this with "nuance is for liberals", and I'm just sitting here baffled that we spent so long trying to convince people to fight for better outcomes
only to have a lot of that commandeered by loud people arguing that better outcomes aren't worth it if it means having to acknowledge the distasteful parts of reality
this 'in the replies' response covers why we personally find the mismatch so disquieting: https://provably.online/@ktemkin/116506667082474699
smol quote from there: "fundamentally, I don't want humans to pick up the habit of being easily convinced that something our brains try to apply humanity to is less worthy of respect or care; because we've already seen how easily humans will apply that to each other"
@[email protected] the thing is, that doesn't actually take away from the problem, here fundamentally, nearly all of the writing has been about how people will e.g. _jump through massive cognitive hoops in order to discard the need to treat something/someone with endemic respect_; and intended topics have been things ilke how we can see evidence that someone might be suffering and still train ourselves to utterly and completely not care -- and how we have to learn to rigorously stamp out the impulse to take no action when we see it the thing is, when I look at the way LLM use is evolving, I see the same patterns of people learning to be okay with _something that looks and acts human_ being stripped of dignity; learning to laugh at "unintelligent" or "abnormal" behaviors and feel it's okay because they don't have to worry about the inner experience of the 'thing' driving the other end of the chat fundamentally, *I don't want humans to pick up the habit of being _easily convinced_ that _something our brains try to apply humanity to_ is less worthy of respect or care*; because we've already seen how easily humans will apply that to each other whether or not we ever build a machine that has an internal experience, if we allow ourselves to _scoff_ at people raising concerns about ethical treatment of -anything-, I feel like we start missing part of what makes people _good_