This whole "this is how humans learn so what's the difference" thing while stealing so much data to make billions for a few dudes is so insidious.
@timnitGebru I'd be a lot more comfortable with generative-ML if it could explain its influences and sources like a human would, and not just confabulate an answer after the fact. (sometimes "I don't know, it just felt right" is an ok answer, but it shouldn't be *all* the answers.)
@gray17 @timnitGebru see, the fashionable claim is that a human explaining their influences is actually just retroactively rationalizing an unconscious process.

@FeralRobots @timnitGebru right, and that line of argument leads to "assembling the words of an explanation is also an unconscious process," and from there to "consciousness does not exist."

it's pointless to argue against that position. consciousness is something that does exist, even if we don't know how to explain it, and the ML models of this era do not have consciousness as we understand it.

sidestepping that is probably better. ML models easily do things humans cannot, and vice versa. they're not very similar.

@gray17 @timnitGebru
It's not just pointless to argue against that position, it's impossible - which is why that position exists.

So while I don't disagree that we can't debunk it, we still have to deal with the fact that it's a REALLY PERSUASIVE position for a lot of people, for a number of reasons. It's real, it's out there, it's dangerous, & we can't fight it with facts.

I'm just ranting, this isn't about anything you're saying. Frustrated, I suppose.