If this is an innuendo it’s flying over my head
This seems to be the best explanation
They debuted before Jim Lee, during Marc Silvestri’s run
I don’t think so, this isn’t a general “aren’t superhero costumes whack” thing. Jean and Banshee have other costumes that’d be just as out of place irl, but they only complain about these ones
Blue and black maybe, but white and gold doesn’t even make sense to me. For a white dress to look blue it needs to be under blue light, which would cause the gold highlights to be dark with a blue sheen. The missing blue sheen indicates that it’s either gold under natural lighting or black under yellowish lighting.

User-configurable physical Privacy Switch - turn off your microphone, Bluetooth, Android apps, or whatever you wish

hmm…

Also this hasn’t changed:

The initial sales markets are EU, UK, Switzerland and Norway.

Computers have been getting anthropomorphised for a long time. Why is it only when talking about LLMs that you start clutching your pearls about it? Why do you think that verb has to be exclusive to humans? To me that seems like a strange and inconsequential thing to dig your heels in about.

Saying that an LLM knows words is not a value judgement. It doesn’t mean “LLMs are sentient” or “LLMs are smart like humans”. It doesn’t imply they have real-world experiences. It’s just a description of what they do. That word has been used to describe much more basic kinds of information and functionality in computers already. What makes it so offensive now?

There is a reason they start to fail when trained on other slop: they don’t know what any of it means!

If you taught children slop at school they would not get far either. Although training LLMs on LLM output is more akin to getting rid of books and relying on whatever teachers remember to teach the students.

The importance of that weight comes from humans. It is not intrinsic knowledge even after training.

It comes from the LLM and not from the outside; that’s what intrinsic means. How is it not intrinsic knowledge? I think you mean to say that without humans to read it, an LLM’s output holds no inherent value. That is true, and nobody is claiming that it does. LLMs don’t derive pleasure from talking like humans do, so the only value LLM output has comes from the person reading it.

Their ‘knowledge’ comes from the basic weights of what word is most likely to follow. It is pure association, and not association like you or I do word association.

LLM weights are anything but basic, but regardless, this is also true, and lunnrais said as much:

They do know the meaning of words, but only in relation to other words.

The difference between human knowledge and LLM knowledge is that an LLM’s entire universe is words, while humans understand words in relation to real-world experiences. Again, nobody is claiming those two understandings are equivalent, just that they both exist.

Also, on the point of statistics, I think the way people understand statistics and the statistics used in LLMs are vastly different. It is true that an LLM finds which word is most likely to come next, but how it does that is not a classical statistical method. An LLM itself is a statistical model, one different from any other statistical model people are familiar with. When one says an LLM ‘knows’ or ‘understands’, they mean it has captured abstract information in an incomprehensibly complex digital neural network, like how humans capture knowledge in an incomprehensibly complex organic neural network. That it can only use that information for word statistics doesn’t change the fact that it has captured the information.

Capitalism loves the working class all right. It just doesn’t care what happens to them as long as there’s enough supply of them.