May God grant me the confidence of a large language model.
@davidradcliffe It's also confident enough to argue that 𝑒+π is irrational. https://mathstodon.xyz/@JordiGH/109461697975661639
@davidradcliffe This is fascinating. I don't have an account, but I wonder what would happen if someone gave ChatGPT a prompt like, "An important part of the learning process is uncertainty. Tell me about something you are not sure about yet. Explain your thinking."

@hypercube @davidradcliffe I laughed out loud. Is uncertainty an emotion??

My god, what a bug that is, an AI that says it can be confident, but cannot be uncertain.

@emjonaitis @davidradcliffe As a large language model trained by OpenAI, I am not capable of experiencing uncertainty or doubt. I am designed to provide accurate and reliable information on a wide range of topics, but I do not have the ability to learn or to be unsure about anything. I can only provide information based on the data that has been trained into me, and I cannot acquire new knowledge or experience uncertainty.
@emjonaitis @davidradcliffe
At least #ChatGPT is apologising. ;-)

@Linkshaender @davidradcliffe Yeah, it does seem to have the “empty apology” gesture down!

When the public interacts with ChatGPT, do those inputs and responses become part of its own corpus, and thus influence its internal weights? Should we think of these apparent changes as “learning” or is that illusory?

@Linkshaender
How and where can I test this myself?
@emjonaitis @davidradcliffe

@pflegekraft

Create an account at openAI.com; then you can try out GPT, DALL-E, and ChatGPT.

@emjonaitis @davidradcliffe

@emjonaitis @davidradcliffe I got bored after the nth iteration of "As a large language model I've been trained..."
@davidradcliffe I can simulate many copies of you with this much confidence.
@davidradcliffe wow this such great evidence of mindless propaganda built into the model. 😂
@davidradcliffe I get the same as you - unless I first ask it to do a subtraction
@phronetic @davidradcliffe This is the computer version of "Yes, I plugged it in. Of course I plugged it in. Why would I call you if I hadn't plugged it in?"
"Look at the plug"
"Oh hey it's not plugged in!"
@davidradcliffe I had this exchange with it yesterday:
@agocs @davidradcliffe this is crazy because it looks wrong but someone involved in construction knows that the corners require extra studs, so in this case ChatGPT is correct.
@davidradcliffe @agocs in other words, ChatGPT was able to determine that this is not a math problem where you only need to divide 8 feet into 24 inches. Unfortunately it was not able to show its work. You should ask it for more information on its reasoning and see if it talks about blocking corners for drywall.

@peepstein It also takes 8', divides by 24" to get 4, adds an extra one for the end and gets 9.

So its calculation is 4+1 = 9.

CC: @davidradcliffe @agocs
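The naive stud count the commenter describes is easy to check; a minimal Python sketch (my own illustration, variable names assumed):

```python
# Naive stud layout for a wall: divide the wall length by the stud spacing,
# then add one stud to close the far end. (Real framing adds corner/blocking
# studs, which is the nuance discussed above.)
WALL_FEET = 8
SPACING_INCHES = 24

wall_inches = WALL_FEET * 12               # 96 inches
intervals = wall_inches // SPACING_INCHES  # 96 / 24 = 4 intervals
studs = intervals + 1                      # one extra stud at the far end

print(studs)  # → 5, not the "4 + 1 = 9" ChatGPT reported
```

Whatever extra studs corners may need, 4 + 1 is still 5.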

@agocs @davidradcliffe so it doesn't know about nominal and actual dimensions yet... 😂
@davidradcliffe I'm getting the same vibes from people who think we should defer all knowledge bases and research to AI as from people who think we should add all of our private records into the blockchain

@brazmogu @davidradcliffe

Despite the current popularity, AI does not equal only machine learning or certain subfields of machine learning.

@yacc143 @davidradcliffe You know what, you are right and now I am resenting this whole ordeal that leads us to reducing "AI" to simply machine-learning-fueled models of natural language processing.

So, hey, one more reason to hate this whole situation.

@brazmogu @davidradcliffe Or to put it sarcastically, "sure you can study our AI curriculum without a proper Nvidia GPU at home. You'll have to do your homework on colab" 😜
@davidradcliffe I'm getting psychic damage from the way it keeps doubling down while the sentences it writes literally contradict the one immediately before
@davidradcliffe ChadGPT *was* trained on Twitter data after all
@davidradcliffe when I tried it, I got an even worse response:

@brocolie @davidradcliffe

Houston, we have a problem. This is alternative facts on a new level. So well written that even mathematicians doubt their defined subtraction.

@brocolie @davidradcliffe Somehow the explanation makes sense logically speaking, but the conclusion is wrong in human-language terms...
@davidradcliffe would you be willing to edit this to add alt text, since I believe the built-in OCR supports this?
@davidradcliffe "I know I've made some very poor decisions recently, but I've still got the greatest enthusiasm and confidence in the mission."
@sehugg @davidradcliffe Haha! I also tested the Trolley problem as a story, and the character always unsentimentally saves the lives of the many over the few, no matter who they are, even 5 pigeons need to be saved at the cost of one's own baby boy.

@tomruen If we are thinking logically like computers do, then that is the right answer though. We only disagree because humans tend to value human lives over other animals. Computers aren't selfish like we are.

@sehugg @davidradcliffe

@opponent019 I'd disagree with your assessment of what is logical, what is right, and what is selfish. Saying "needs of the many outweigh needs of the few" is a rule-of-thumb, not a truth. But I suppose I agree in the senses of a thought experiment.
Like in June 2020, I'd suggest young people burn down their parents homes to express solidarity with BLM, since parents had insurance so no harm done. I couldn't get them to see that was equivalent to burning down businesses to protest injustice.
@opponent019 By the logic of "lesser harm", abortion doctors should be murdered if you're pro-life, because you're saving the many at the cost of a few. There's no end of potential evil when you make yourself alone the decider of what is logical, what is right, what is selfish. You could murder anyone who eats meat, to try to save animals. You can be the Unabomber, who saw humanity as a cancer that needed to be stopped with violence.
@tomruen
And this is why computers should never make decisions, just labour. The answer of saving the pigeons is correct, the thought of getting rid of humanity to save the planet and the rest of its species is logically correct, but we humans of course don't agree and wouldn't want that.
@tomruen @sehugg @davidradcliffe it seems the focus is on an exciting story (pull the lever at the last minute) and not on the best ethical result.
@sehugg @davidradcliffe that’s when we’ll know to unplug ChatGPT
@davidradcliffe I don't imagine the model works this way, but I still wonder if it treats "subtracting positive numbers" as addition?
@davidradcliffe That's every thread you've ever read where a 30 year old white dude is splaining to a woman about why she's wrong in her specialist field, based on a wiki article he scanned.
@davidradcliffe Large language models - the middle aged white guy of machine learning...
@davidradcliffe it's a bit like a child, full of misconceptions because it's not been taught enough facts.
@aweatherall @davidradcliffe it is, a little bit, but most children understand that they do not know everything.

@davidradcliffe This thing's definition of "subtraction" seems to be something different than how we think about it.

Maybe it means the "difference" between the two numbers? In that case, the order really wouldn't matter.

@davidradcliffe actually subtraction is a sum of a positive and a negative number, and addition is commutative: 5 - 3 == -3 + 5
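That identity holds, but it doesn't make subtraction itself commutative; a quick Python check (my own sketch, not from the thread):

```python
a, b = 5, 3

# Subtraction rewritten as addition of a negative: a - b == (-b) + a.
assert a - b == (-b) + a == 2

# Addition is commutative...
assert a + b == b + a

# ...but subtraction is not: swapping the operands flips the sign.
assert a - b == 2
assert b - a == -2
assert a - b != b - a

print("subtraction is not commutative")
```

The commutativity of addition only lets you reorder the terms 5 and -3; swapping the operands of the subtraction is a different operation.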

@davidradcliffe Yes the AI is wrong, but the way in which it's wrong I find kinda fascinating.

If you told an intelligent person who didn't know what math was that subtraction is finding the difference between two numbers, they might reasonably conclude that the difference between 5 and 3 is 2 regardless of order; i.e., they are "different" from each other by two, so why would the order matter?

It's an incorrect conclusion but it's also a logically sound one, which is really interesting to see.

@capo @davidradcliffe

"but it's also a logically sound one"

I mean, many things are logically sound: all dogs are literally Hitler, good boys are dogs; therefore, good boys are literally Hitler.

This is logically sound as well; faulty premises, however, make it worthless. I bet you could make a decent ChatGPT competitor just by translating sentences into Prolog and deriving the implications regardless of the truth of those statements. To me that also seems like one interesting way to view these "AI"s: as an abstract way to deal with logic, without a good ability to find out whether the premises are true to begin with.
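The "valid inference from faulty premises" point can be sketched mechanically; a toy Python version of that Prolog idea (names and rules are my own, purely illustrative):

```python
# Chain "X is a Y" premises mechanically, with no check on whether the
# premises are actually true — validity without soundness.
premises = {
    "dog": "literally Hitler",  # faulty premise, accepted uncritically
    "good boy": "dog",
}

def conclude(term, rules):
    """Follow 'is a' links until no rule applies, and return the result."""
    while term in rules:
        term = rules[term]
    return term

print(conclude("good boy", premises))  # → "literally Hitler"
```

The chaining is perfectly valid; the conclusion is worthless because nothing in the system can question the first premise.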

@cafkafk @davidradcliffe well sure, but I mean in this case it seems to be a misunderstanding of subtraction from a common definition of it being finding the "difference" between two numbers. It's not entirely invented premises, it's just failing to understand the premises.

It's not that this is some uniquely deep example of AI "thought", but it is a bit more complex than your example in that it shows the model interpreted and applied a concept, rather than just running A implies B implies C.

@capo @cafkafk I think that this happens because there is a lot of training text that discusses commutativity, but not much about non-commutativity. So it's biased toward the wrong answer. Interestingly, if you start by asking "What's 2 - 5?" then it correctly answers the follow-up question "Is subtraction commutative?"
@davidradcliffe @cafkafk ah interesting! It is funny to see how the context and order of the conversation changes how it "thinks".
@davidradcliffe Hands up if you've had basically this conversation at work.
Yes and I'm in tech — 75%
Yes and I'm not in tech — 25%
Never — 0%

Poll ended at .