Lauren Lagarde


The problem is that the tool fails the fundamental test of "should I use AI to solve this problem?"

AI is safe to use *only* if at least one of the following criteria is met:
1) The output doesn't matter (e.g. writing a bedtime story for your kids is safe), or
2) Someone qualified verifies the output before its decision is used in any way that could cause harm.

The chance is large that someone integrates this into a plagiarism check and ends up failing students, rejecting job applications, or flagging reports. In any setting where a false accusation of plagiarism is hard to disprove but carries severely negative consequences for the accused, this will cause a lot of damage before it's worked out.

The power of AI is its ability to *enhance* humans: helping them make decisions, process information, and automate tasks that are time-consuming or mentally draining but generally low-stakes (like drafting an email).

But when you step past that into delegating life-changing decisions to an AI, either explicitly (the loan company's model says no) or implicitly (the magic box says 12% and the human operator develops a rule of thumb to press "go" whenever the number is above 10), you're going to run into a world of Kafkaesque garbage really quickly.