Anjana Susarla

@asusarla
387 Followers
303 Following
304 Posts

Omura-Saxena Professor of Responsible AI at the Broad College of Business at Michigan State University. My research broadly looks at the economics of information systems, social media analytics and the economics of artificial intelligence. I also like to write about Responsible AI and platform governance issues for a broader audience.

Some of my general writings can be found at: https://anjanasusarla.substack.com

https://theconversation.com/profiles/anjana-susarla-334987/

Gender and standards in AI: https://www.itu.int/hub/2022/03/breaking-bias-gender-standards-anjana-susarla-ai-keynote/
Mitigating Bias in AI: https://www.responsible.ai/news/the-strange-and-wondrous-world-of-mitigating-bias-through-ai
Human and Machine Interfaces in Medical AI: https://medium.com/mit-initiative-on-the-digital-economy/medical-ai-social-media-fall-short-707f2e646756
Algo recommendations and online radicalization: https://english.elpais.com/usa/2021-04-22/how-algorithmic-recommendations-can-push-internet-users-into-more-radical-views-and-opinions.html
This cuts closer to the bone than the WaPo did, exposing complaints about Altman's management. Good. I hope to see more examination of the TESCREAL philosophies behind these wizards of Oz:
Sam Altman’s firing and return as CEO of OpenAI followed a pattern https://www.wsj.com/tech/ai/sam-altman-openai-protected-by-silicon-valley-friends-f3efcf68?st=cw8b1c0qdkfark8
Sam Altman’s Knack for Dodging Bullets—With a Little Help From Bigshot Friends

The OpenAI CEO lost the confidence of top leaders in the three organizations he has directed, yet each time he’s rebounded to greater heights

WSJ

We can’t just wait for some malevolent #AI superintelligence to arrive to enact serious safeguards.

In reality, the AI most likely to cause you harm is something as mundane as the loan algorithm at your bank.

“This is why I believe they require greater scrutiny to ensure that AI-augmented work conforms to standards of transparency, accuracy and source crediting,” writes @asusarla

https://theconversation.com/forget-dystopian-scenarios-ai-is-pervasive-today-and-the-risks-are-often-hidden-218222

#technology #artificialintelligence

Forget dystopian scenarios – AI is pervasive today, and the risks are often hidden

The explosion of generative AI tools like ChatGPT and fears about where the technology might be headed distract from the many ways AI affects people every day – for better and worse.

The Conversation

I was on Bloomberg Technology Live talking about the role of AI in election misinformation, and what platforms can do, at the 24:24 mark

A very complex issue for a short interview, but two points: (1) solving election misinformation is not a problem for AI alone; we need human intelligence as well, and (2) we need transparency reports from platforms

#election #misinformation #electionmisinformation #socialmedia #contentmoderation #digitalplatforms #transparency

https://www.youtube.com/watch?v=ZrZWjYB-NQQ

Bloomberg Technology 11/07/2023

YouTube

New piece for @TheConversationUS on the Biden Administration's sweeping new executive order on "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence"

#aisafety #executiveorder #responsibleai #foundationmodels

https://theconversation.com/biden-administration-executive-order-tackles-ai-risks-but-lack-of-privacy-laws-limits-reach-216694

Biden administration executive order tackles AI risks, but lack of privacy laws limits reach

In the absence of comprehensive AI regulation from Congress, the executive branch is building on its previous efforts to address AI harms.

The Conversation
One year after Elon Musk let that sink in, an elegy for the platform that was — and some notes on the one that is poised to succeed it. With @zoeschiffer https://www.platformer.news/p/twitter-is-dead-and-threads-is-thriving
Twitter is dead and Threads is thriving

Platformer

The wan attempts to secure large language models with “guardrails” are trivially undone, sometimes even with only $0.20 of API activity.

#chatgpt #LLMs #ai #notai #notsecure

https://www.theregister.com/2023/10/12/chatbot_defenses_dissolve/

AI safety guardrails easily thwarted, security study finds

OpenAI GPT-3.5 Turbo chatbot defenses dissolve with '20 cents' of API tickling

The Register
Apple's statement shelving its attempt to build client-side scanning (CSS) that can safely/privately scan e2ee content. Here, they acknowledge that's not possible without endangering security/privacy:
https://www.documentcloud.org/documents/23933180-apple-letter-to-heat-initiative
DocumentCloud

A mechanical tiger bounds through an epic saga in ‘Loot’

A mechanical tiger from late-18th-century India bounds through "Loot," Tania James's captivating novel about a young toymaker who travels the world.

The Washington Post
Next was a fabulous event on responsible and open generative AI models at the Princeton CITP. I highly recommend the whole event, with particular standouts being @ruchowdh's talk (on the generative red team challenge) and @zicokolter's (on common generative AI training data as an unpatchable security risk and LLMs as virtual machines) https://www.youtube.com/watch?v=75OBTMu5UEc (5/10) #AI #GenerativeAI
Workshop on Responsible and Open Foundation Models

YouTube
Next was an excellent talk by @gabrielazf (👋) on automated decision-making case law under #GDPR at the School of Advanced Study, University of London. This is an amazing tour of the case law as well as the historical context of #privacy and ADM law. The discussion by Jedrzej Niklas is similarly insightful. Highly recommend https://www.youtube.com/watch?v=uUN0YsL-akA (7/10)
Computer says No! Fair and Accountable Decisions in an Automated World

YouTube