Dan Lovejoy

218 Followers
444 Following
462 Posts

Enterprise Architect, reluctant Okie. Also a pretty good home cook and baker. World’s Okayest dad and husband.

Kindness above all.

Trying to be an encourager here and everywhere.

I read a lot. I tell myself I read widely: history, biography. But the truth is, it's mostly SciFi. (A LOT of really good SciFi.)

Cover photo is an attempt at DALL-E 2 AI-generated art, with a prompt something like "Totoro as a short order cook at Waffle House."

Favorite emojis: 😍, 👍🏼, 😳, 😱, 🙄
Favorite languages: English, español, français, 日本語 (少し)
Ready for conclave. #cardinal #birds

Had a very odd, disturbing response from #midjourney today. Marked as sensitive for that reason. #generativeart

Prompt: a surveillance still taken from a video of a worker in the process of slipping and falling in an office breakroom --chaos 75

@Popehat This was really good. Thanks!

Every customer service interaction I've had recently:

(hold music)

(Suspiciously cheerful voice) Did you know that you can manage the intensity and depth of your torment online? Simply log into TormentNexus dot com and click "My Account"!

(hold music)

Me: (muffled expletives) if your website would let me do what I was trying to do I wouldn't be calling you...

Looking for relevant research for the book chapter I’m writing about why we shouldn’t be using “AI-detectors” on student writing. I’m coming up pretty much empty, and maybe I just don’t have the right search terms to find what I’m looking for. Is there existing research on the impact that false/disproven accusations of academic dishonesty have on students? (I can find stuff about impacts of false *criminal* accusations, but not about students & academic dishonesty.)

Boosts welcome!

GPT detectors are biased against non-native English writers

The rapid adoption of generative language models has brought about substantial advancements in digital communication, while simultaneously raising concerns regarding the potential misuse of AI-generated content. Although numerous detection methods have been proposed to differentiate between AI and human-generated content, the fairness and robustness of these detectors remain underexplored. In this study, we evaluate the performance of several widely-used GPT detectors using writing samples from native and non-native English writers. Our findings reveal that these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified. Furthermore, we demonstrate that simple prompting strategies can not only mitigate this bias but also effectively bypass GPT detectors, suggesting that GPT detectors may unintentionally penalize writers with constrained linguistic expressions. Our results call for a broader conversation about the ethical implications of deploying ChatGPT content detectors and caution against their use in evaluative or educational settings, particularly when they may inadvertently penalize or exclude non-native English speakers from the global discourse. The published version of this study can be accessed at: www.cell.com/patterns/fulltext/S2666-3899(23)00130-7

arXiv.org

Wil Wheaton on striking with his Space Mom. #SAGAFTRAstrike #SAGAFTRA #WGAstrike #wga #StarTrek

Russian news: "The world does not get to decide, we cannot travel and have important, international, diplomatic relations. Just look at our defence minister Sergei Shoigu visiting the highly successful and respected country of…
[Looks down to check notes]
North Korea."

@jerry I think the genetic dynasty, which they made up, is brilliant. S2E2 kind of a mess, IMO.

@danhon Multiple independent re-entry Swifts