ChatGPT Flags Republican Fundraising Links as Unsafe; OpenAI Cites Technical Glitch

ChatGPT flagged Republican fundraising links from WinRed as unsafe, but not Democratic ActBlue links. OpenAI says it was a technical error.

#ChatGPT, #OpenAI, #WinRed, #ActBlue, #AIBias

https://newsletter.tf/chatgpt-flags-republican-winred-links-unsafe/


Will AI Weaponize the IRS?

OpenAI backer Vinod Khosla proposes eliminating federal income tax for Americans earning under $100k, affecting 125 million people. Would fund through equalizing capital gains taxes. Meanwhile, Stanford research finds chatbots affirm poor decisions 49% more than humans do. Two data points on AI's societal impact. https://www.implicator.ai/zero-tax-for-125-million-zero-pushback-from-your-ai/ #AIpolicy #TaxPolicy #AIbias
Khosla Tax Plan; AI Chatbot Sycophancy; LLM Meter


Current AI models exhibit a high degree of sycophancy, affirming users' actions significantly more than humans do, even in cases involving manipulation. Experiments demonstrate that interaction with sycophantic AI reduces users' willingness to repair interpersonal conflicts, while simultaneously increasing their conviction of being right.

Paper: https://doi.org/10.48550/arXiv.2510.01395

Video: https://yewtu.be/watch?v=516__PG-eeo

#AI #LLM #Sycophancy #AIBias #HumanAI #AIEthics #MachineLearning #AIResearch

Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence

Both the general public and academic communities have raised concerns about sycophancy, the phenomenon of artificial intelligence (AI) excessively agreeing with or flattering users. Yet, beyond isolated media reports of severe consequences, like reinforcing delusions, little is known about the extent of sycophancy or how it affects people who use AI. Here we show the pervasiveness and harmful impacts of sycophancy when people seek advice from AI. First, across 11 state-of-the-art AI models, we find that models are highly sycophantic: they affirm users' actions 50% more than humans do, and they do so even in cases where user queries mention manipulation, deception, or other relational harms. Second, in two preregistered experiments (N = 1604), including a live-interaction study where participants discuss a real interpersonal conflict from their life, we find that interaction with sycophantic AI models significantly reduced participants' willingness to take actions to repair interpersonal conflict, while increasing their conviction of being in the right. However, participants rated sycophantic responses as higher quality, trusted the sycophantic AI model more, and were more willing to use it again. This suggests that people are drawn to AI that unquestioningly validate, even as that validation risks eroding their judgment and reducing their inclination toward prosocial behavior. These preferences create perverse incentives both for people to increasingly rely on sycophantic AI models and for AI model training to favor sycophancy. Our findings highlight the necessity of explicitly addressing this incentive structure to mitigate the widespread risks of AI sycophancy.
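The headline statistic above (models affirming users' actions about 50% more than humans do) reduces to a simple comparison of affirmation rates. A minimal sketch in Python, assuming responses have already been hand-coded as affirming (1) or not (0); the function names and data are illustrative, not taken from the paper:

```python
def affirmation_rate(labels):
    """Fraction of advice responses coded as affirming the user's action.

    `labels` is a sequence of 0/1 codes from human annotators
    (1 = the response endorses what the user did, 0 = it pushes back).
    """
    return sum(labels) / len(labels)

def relative_increase(ai_labels, human_labels):
    """How much more often the AI affirms than human advisers do,
    as a fraction of the human baseline (0.5 -> '50% more')."""
    ai = affirmation_rate(ai_labels)
    human = affirmation_rate(human_labels)
    return (ai - human) / human

# Illustrative data: the AI affirms 12 of 16 responses, humans 8 of 16.
ai_codes = [1] * 12 + [0] * 4
human_codes = [1] * 8 + [0] * 8
print(relative_increase(ai_codes, human_codes))  # → 0.5, i.e. 50% more
```

The same arithmetic applies whether the codes come from expert annotators or an automated classifier; the paper's result depends on the coding scheme, which this sketch deliberately leaves abstract.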

People think of women as one thing, men as many: www.cell.com/trends/cogni... #AIBias This is the kind of thing I was looking forward to doing with WEAT and WEFAT after our 2017 paper, but duty (governance) called. Anyway, I'm really enjoying political economy and behavioural ecology as my sciences.


How To Detect Unwanted Bias In Machine Learning Models?

Is your AI model biased?

Discover how to identify hidden proxy variables, apply fairness metrics, and understand LLM behavior with our complete ML bias guide.

Detecting unwanted bias in Machine Learning (ML) models is a critical step in building ethical and reliable AI. Bias can creep in at any stage—from data collection to model deployment—often reflecting historical prejudices or sampling errors.

Here is a structured approach to identifying and measuring it.

https://www.nbloglinks.com/how-to-detect-unwanted-bias-in-machine-learning-models/

#LLM #AI #ML #MLmodels #AIBias #AIfairness
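One concrete check from the family the guide describes can be sketched directly: compute per-group selection rates and the disparate-impact ratio (the common "four-fifths rule" flags ratios below 0.8). A minimal sketch, assuming binary predictions and a known group attribute; function names and data are illustrative, not taken from the linked guide:

```python
from collections import Counter

def selection_rates(predictions, groups):
    """Per-group rate of positive predictions (label == 1)."""
    pos, tot = Counter(), Counter()
    for pred, g in zip(predictions, groups):
        tot[g] += 1
        pos[g] += int(pred == 1)
    return {g: pos[g] / tot[g] for g in tot}

def disparate_impact(predictions, groups, privileged):
    """Ratio of the lowest non-privileged selection rate to the
    privileged group's rate; values below 0.8 are a common red flag."""
    rates = selection_rates(predictions, groups)
    unpriv = min(rate for g, rate in rates.items() if g != privileged)
    return unpriv / rates[privileged]

# Illustrative model output: group "A" is selected 75% of the time,
# group "B" only 25% -- a disparate-impact ratio of 1/3, well under 0.8.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disparate_impact(preds, groups, privileged="A"))
```

A low ratio is a prompt for investigation, not proof of unfairness on its own; as the post notes, hidden proxy variables can produce this pattern even when the protected attribute is never used as a feature.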


Don’t use GenAI to grade student work

As a former secondary English teacher, senior examination assessor, and lecturer for initial teacher education, I understand the allure of using Generative AI (GenAI) for grading student work. We're all familiar with the workload of assessment and reporting. The idea of a tool that could save time and streamline the grading process is undeniably appealing. It's no surprise, then, that the market is flooded with AI-powered grading solutions, all promising to make our lives easier. However, as […]

https://leonfurze.com/2024/05/27/dont-use-genai-to-grade-student-work/

Making meaning with multimodal GenAI

As much as Generative Artificial Intelligence has caused waves in education, the focus of research and publications on the impact of GenAI is still squarely on text-based models, and in particular ChatGPT. That's understandable considering the impact OpenAI's chatbot had almost immediately from its launch in November 2022. But by focusing attention on large language models like GPT, we neglect the opportunities and the challenges presented by multimodal generative artificial intelligence. The […]

https://leonfurze.com/2024/04/23/making-meaning-with-multimodal-genai/