The problem is that the tool fails the fundamental test of "should I use AI to solve this problem?"
AI is safe to use *only* if at least one of the following criteria is met:
1) The output doesn't matter (e.g. writing a story for your kids is safe), or
2) Someone qualified assesses the veracity of the output before it is acted on in a way that could cause harm.
The problem is that this tool will *always* fail both tests (so long as its false positive rate is > 0%).
There are virtually no ways to use the tool that are objectively assessable: no expert can verify its output without external information that would invalidate the point of the tool. And virtually every use case for knowing whether text was written by an AI involves a judgement about the author, where getting that judgement wrong causes harm.
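To put a number on the false-positive problem, here's a minimal back-of-the-envelope sketch. Every figure in it (class size, base rate of AI use, the detector's error rates) is an assumption made up for illustration, not a measurement of any real detector:

```python
# Sketch: why any false positive rate > 0% is fatal at scale.
# All numbers are illustrative assumptions, not real measurements.

essays_checked = 10_000       # essays run through the detector
fraction_ai = 0.05            # assume 5% were actually AI-written
false_positive_rate = 0.01    # flags 1% of honest essays as AI
true_positive_rate = 0.80     # catches 80% of AI-written essays

honest = essays_checked * (1 - fraction_ai)        # 9,500 essays
ai_written = essays_checked * fraction_ai          # 500 essays

false_accusations = honest * false_positive_rate   # 95 honest students flagged
true_detections = ai_written * true_positive_rate  # 400 AI essays flagged

# Base-rate effect: the chance that any given flag is a false accusation.
p_flag_is_wrong = false_accusations / (false_accusations + true_detections)

print(f"{false_accusations:.0f} honest students accused")           # 95
print(f"{p_flag_is_wrong:.0%} of flags point at innocent authors")  # 19%
```

And since no one can independently verify any individual flag, those 95 wrong accusations look exactly like the 400 right ones.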
It's worse than existing plagiarism tools, which say "this text is probably plagiarized from [this other text]", because there you can go and look at the other text and see if there's context that's missing. Like, maybe you're "plagiarizing" a properly-attributed quote, or the author of the other text is also you.
But here it's just "expensive magic computer brain says this student is a fraud", and administrators are going to assume it's true, with few ways to independently validate it.
The power of AI is its ability to *enhance* humans: to help them make decisions, process information, and automate tasks that are time-consuming or mentally taxing but generally pretty low-stakes (like drafting an email).
But when you step past that into delegating life-changing decisions to an AI, either explicitly (loan company says no) or implicitly (magic box says 12% yes and human operator develops rule of thumb to press "go" if number > 10), you're going to run into a world of Kafkaesque garbage really quickly.
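As a toy sketch of that implicit failure mode (the function names, the score, and the 10% cutoff are all invented for illustration):

```python
# Toy model of a "human in the loop" decaying into pure automation.

def model_score(application: dict) -> float:
    """Stand-in for the opaque model; returns a risk/approval score."""
    return 0.12  # e.g. the "12% yes" from the post above

def operator_decision(application: dict) -> str:
    score = model_score(application)
    # On paper, a human reviews each case. In practice, the operator's
    # learned rule of thumb reduces the review to a threshold check:
    return "go" if score > 0.10 else "deny"

# Nobody ever engages with the applicant's specifics, yet formally a
# "human made the decision", so accountability blurs.
print(operator_decision({"applicant": "example"}))  # -> "go"
```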
A personal view (à la early BBC2 series):
I have noticed a tendency of people to use naïve algorithms (algorithms that do not take into account the specifics of a problem) simply because it is ‘cool’ to use anything associated with the phrase ‘AI’.
This tendency has done great damage, in particular, to the graphical interfaces of Linux, the BSDs, etc. (And it has been many years since I gave up trying to steer people away from this mistake.)
It is not that Microsoft and Apple platforms always do the right thing, or that ‘better written’ software on Linux, the BSDs, etc., does not do the right thing.
My BIGGEST problem with it is that the ‘coolness’ of ‘AI’ has MADE THE PEOPLE USING IT STUPID.
For ‘AI’ is what people tend to fall back upon when they simply do not wish to bother with understanding the problem that is there, that EXISTS, that IS, to be solved.
And there’s perhaps the root of the problem.
@Pwnallthethings It’s not so much that an ‘AI’-generated object looks ‘plausible’ as that we have too many people who are more interested in the generation of ‘plausible’ things than in the generation of ‘correct’ things.
‘AI’ is something that SHOULD exist, but which SHOULD be used only when one CANNOT understand the problem, or where the problem is so complicated as to defy practical solution except by ‘naïve’ means.
This is distinctly NOT the case with the graphical interfaces I mention.
@sleepy @Pwnallthethings This is my concern as well. And the moral misapprehension that, because AI pulled the trigger (or said to), it is somehow culpable, rather than the humans who designed, trained, marketed, purchased, or deployed it (or, in your example, listened to it).
No machine makes a decision. A person (or people) makes the decision, every time; but these technologies allow them to obfuscate their role.
At least we have some protection against this type of thing by law (for now)
https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/individual-rights/rights-related-to-automated-decision-making-including-profiling/
The GDPR has provisions on: automated individual decision-making (making a decision solely by automated means without any human involvement); and profiling (automated processing of personal data to evaluate certain things about an individual). Profiling can be part of an automated decision-making process.
@Pwnallthethings
The frustrating thing is, this is not new.
There is a 2002 article on this by Sidney Dekker:
https://hachyderm.io/@mononcqc/109804346556988230
Reminds me of the saying:
"The problem with common sense is that it is not so common" 😡
This week I decided to revisit Sidney Dekker's #paper titled "MABA-MABA or Abracadabra? Progress on Human–Automation Co-ordination", which discusses something called "the substitution myth", a misguided attempt at replacing human weaknesses with automation. Instead, the suggestion is to focus on cooperation and team work, rather than substitution: https://www.researchgate.net/publication/226605532_MABA-MABA_or_abracadabra_Progress_on_human-automation_co-ordination My notes are at: https://cohost.org/mononcqc/post/960352-paper-maba-maba-or #LearningFromIncidents #HumanFactors
@Pwnallthethings
Or even older.
Allegedly from a 1979 IBM training slide.
(I couldn't verify the origin. The closest I got was "The computer as an advisor, not a decision-maker" by IBM Fellow John Cohn: https://www.ibm.com/blogs/think/be-en/2013/11/25/the-computer-as-an-advisor-not-a-decision-maker-the-vision-of-ibm-fellow-john-cohn/ )