@glyph
@ridt @jags @Migueldeicaza
They do say it can be incorrect on the start screen - but that warning should be a lot more prominent seeing as so many people still seem to be surprised when it outputs incorrect statements.
They need to make sure users know what it is: more of a conversation simulator, more interested in providing some sort of reply than a correct reply - and what it isn't: a text interface to an infallible encyclopaedia.
Well, obviously in the article they describe how they make ChatGPT fix its code when it gives errors. At the end, when it runs without errors, it must be correct and not bullshit, right? Right?
@ridt Definitely a different take on ChatGPT.
Did you see it make (more than some coding) mistakes? And if so, how critical was the environment (live use during class) for correcting them?
@ridt @jags That doesn't sound so impressive.
First, we're talking about malware analysis, which is very specific. Saying it will revolutionize higher education is a bit over-enthusiastic. But that's not even the problem here.
An automated system that provides answers without citing its sources should not get anywhere near a school, whatever the level of education.
And last, it's always the same: it's always about "efficiency". You certainly wouldn't want to take the time with students, that would be in poor taste. And in any case, even if the situation is critical, you most certainly do NOT want to hire more teachers and helpers noooo, better depend completely on this overly complex and hard-to-assess technology produced by a very small number of people who will squeeze you for every single penny you have once you can't get rid of it. That sounds like a plan. Yes, properly trained machines can do a good job. Properly trained humans do a better one, so train them and hire more if you want faster and properly done teaching.
You most certainly seem to have had a good time, and I'm not denying that. But suggesting this as a teaching tool is a step towards a brick wall.
@ridt
I think the first introduction should be in a subject the students already know really well. Then the up- and downsides will be clear enough to be discussed sensibly. (A Danish history teacher tooted good results from such an experiment.)
I did some simple scripting. It helped me along but also made errors that gave wrong results - subtle enough that I only noticed because I knew what I was doing. Would that be a problem in your field? Then teach how to proceed with caution.
@ridt @jags
It seems like what you would want is an interpreter layer to the search engines, but not ChatGPT.
ChatGPT is not a semi-intelligent search engine; it's a device for making human-sounding text about any subject, without any concern for accuracy, and the only time it says "I don't know" is when you ask it something prohibited by its filters -- in effect, ChatGPT is a very good liar.
Thanks for the insights from time spent learning in a classroom with ChatGPT integrated ... I appreciated the ending reflection on seeking possibilities and potential for enhancing learning ... and I hope we in education keep having these conversations
"The far more inspiring conversation is a different one: how can the most creative, the most ambitious, and the most brilliant students achieve even better results faster? How can educators help them along the way?" -- Thomas Rid