"Predictive AI systems have also been shown to be incredibly useful when they leverage certain generative techniques within a constrained set of options. Systems of this type are diverse, spanning everything from outfit visualization to cross-language translation. Soon, predictive-generative hybrid systems will make it possible to clone your own voice speaking another language in real time, an extraordinary aid for travel (with serious impersonation risks). There’s considerable room for growth here, but generative AI delivers real value when anchored by strong predictive methods.

To understand the difference between these two broad classes of AI, imagine yourself as an AI system tasked with showing someone what a cat looks like. You could adopt a generative approach, cutting and pasting small fragments from various cat images (potentially from sources that object) to construct a seemingly perfect depiction. The ability of modern generative AI to produce such a flawless collage is what makes it so astonishing.

Alternatively, you could take the predictive approach: Simply locate and point to an existing picture of a cat. That method is much less glamorous but more energy-efficient and more likely to be accurate, and it properly acknowledges the original source. Generative AI is designed to create things that look real; predictive AI identifies what is real. A misunderstanding that generative systems are retrieving things when they are actually creating them has led to grave consequences when text is involved, requiring the withdrawal of legal rulings and the retraction of scientific articles."

https://www.technologyreview.com/2025/12/15/1129179/generative-ai-hype-distracts-us-from-ais-more-important-breakthroughs

#AI #PredictiveAI #GenerativeAI

Generative AI hype distracts us from AI’s more important breakthroughs

It's a seductive distraction from the advances in AI that are most likely to improve or even save your life

MIT Technology Review

I want fewer "mathy maths" (the gas-guzzlers formerly known as generative AI) and more predictive AI, please.

Point a predictive AI at scammers, spammers, and malware mobsters, and make them go poof!!

https://www.technologyreview.com/2025/12/15/1129179/generative-ai-hype-distracts-us-from-ais-more-important-breakthroughs/

#AI #predictiveAI #utopia

New from me, Gabriel Geiger, + Justin-Casimir Braun at Lighthouse Reports.

Amsterdam believed that it could build a #predictiveAI for welfare fraud that would ALSO be fair, unbiased, & a positive case study for #ResponsibleAI. It didn't work.

Our deep dive into why: https://www.technologyreview.com/2025/06/11/1118233/amsterdam-fair-welfare-ai-discriminatory-algorithms-failure/

Inside Amsterdam’s high-stakes experiment to create fair welfare AI

The Dutch city thought it could break a decade-long trend of implementing discriminatory algorithms. Its failure raises the question: can these programs ever be fair?

MIT Technology Review

"This AI Animated Music Video Predicted Our Breakup, Why Did I Ignore the Algorithm?" (Love and Pain)

AI predicted the end before I even saw the signs. This AI-generated video exposed the truth about our relationship—was it fate, or just a really good algorithm? Watch and decide!

#AIBreakupPrediction #AlgorithmKnowsBest #AIRevealedTheTruth #PredictiveAI #LagosLoveDecoded
#AlgorithmicEmbrace #AIGenerated #DigitalArt #AIMusic #TechEmotion
Watch: https://youtube.com/shorts/MDf-9gwkx1Y?feature=share

"Alexander, more than midway through a 20-year prison sentence on drug charges, was making preparations for what he hoped would be his new life. His daughter, with whom he had only recently become acquainted, had even made up a room for him in her New Orleans home.

Then, two months before the hearing date, prison officials sent Alexander a letter informing him he was no longer eligible for parole.

A computerized scoring system adopted by the state Department of Public Safety and Corrections had deemed the nearly blind 70-year-old, who uses a wheelchair, a moderate risk of reoffending, should he be released. And under a new law, that meant he and thousands of other prisoners with moderate or high risk ratings cannot plead their cases before the board. According to the department of corrections, about 13,000 people — nearly half the state’s prison population — have such risk ratings, although not all of them are eligible for parole.

Alexander said he felt “betrayed” upon learning his hearing had been canceled. “People in jail have … lost hope in being able to do anything to reduce their time,” he said.

The law that changed Alexander’s prospects is part of a series of legislation passed by Louisiana Republicans last year reflecting Gov. Jeff Landry’s tough-on-crime agenda to make it more difficult for prisoners to be released."

https://www.propublica.org/article/tiger-algorithm-louisiana-parole-calvin-alexander

#USA #Louisiana #Algorithms #PredictiveAI #PredictivePolicing #PoliceState

An Algorithm Deemed This Nearly Blind 70-Year-Old Prisoner a “Moderate Risk.” Now He’s No Longer Eligible for Parole.

A Louisiana law cedes much of the power of the parole board to an algorithm that bars thousands of prisoners from a shot at early release. Civil rights attorneys say it could disproportionately harm Black people — and may even be unconstitutional.

ProPublica

"EFF has been sounding the alarm on algorithmic decision making (ADM) technologies for years. ADMs use data and predefined rules or models to make or support decisions, often with minimal human involvement, and in 2024, the topic has been more active than ever before, with landlords, employers, regulators, and police adopting new tools that have the potential to impact both personal freedom and access to necessities like medicine and housing.

This year, we wrote detailed reports and comments to US and international governments explaining that ADM poses a high risk of harming human rights, especially with regard to issues of fairness and due process. Machine learning algorithms that enable ADM in complex contexts attempt to reproduce the patterns they discern in an existing dataset. If you train it on a biased dataset, such as records of whom the police have arrested or who historically gets approved for health coverage, then you are creating a technology to automate systemic, historical injustice. And because these technologies don’t (and typically can’t) explain their reasoning, challenging their outputs is very difficult."

https://www.eff.org/deeplinks/2024/12/fighting-automated-oppression-2024-review-0

#Algorithms #AlgorithmicDecisionMaking #Automation #PredictiveAI

Fighting Automated Oppression: 2024 in Review

Electronic Frontier Foundation

@nazokiyoubinbou I mean, I took the 0th Law of Robotics under consideration, but I don’t think any IT policy I could write would save humanity from AI, and by extension, save humanity from itself.

To quote Asimov’s perspective, “Yes, the Three Laws are the only way in which rational human beings can deal with robots—or with anything else. But when I say that, I always remember (sadly) that human beings are not always rational.”
#AI #GenAI #PredictiveAI #CyberSecurity #ITPolicy #Asimov #3Laws #AcceptableUse

Okay, I went ahead and did it. Asimov’s Laws of Robotics 1, 2, 3, and 4 are all mapped in one form or another into the AI AUP I’m writing.

Law 1: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

Converted into protecting the data of others.

Law 2: “A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.”

Converted into the guidance for sharing useful prompts to encourage more consistent (and beneficial) results.

Law 3: “A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”

Converted into guidance about understanding the limitations of any AI tool in use so as not to risk misuse and potentially need to revoke access.

Law 4: “A robot must establish its identity as a robot in all cases.”

Converted into guidance that AI results shall be established as AI in all cases.

#AI #GenAI #PredictiveAI #CyberSecurity #ITPolicy #Asimov #3Laws #AcceptableUse

I am writing my company’s Artificial Intelligence Acceptable Use Policy, and I am deeply tempted to reference Asimov’s Laws of Robotics.
#AI #GenAI #PredictiveAI #CyberSecurity #ITPolicy #Asimov #3Laws #AcceptableUse