rajesh bilimoria

@rajeshbilimoria
… but am a bit partial to the ones just before and after peak #solareclipse #eclipse #eclipse2023
Got some nice photos of the annular eclipse yesterday, including this one at/close to the peak. (Taken in Midland, Texas) #solareclipse #eclipse2023 #eclipse
A conversation between @rtushnet and @lexlanham about #BadSpaniels? Yes, please! You, too, can sign up to attend (onsite or online) for free: https://www.eventbrite.com/e/bad-spaniels-trademark-parody-and-fair-use-doctrines-tickets-603820953727
Bad Spaniels: trademark parody and fair use doctrines

Join Professor Rebecca Tushnet and Professor Alexandra J. Roberts for a conversation about Jack Daniel's v. VIP Products.

Eventbrite
Proposing a 6 month moratorium on this clever** question about AI: "humans do this too, what's the difference?"
**If you're not in a reality where humans and computers are distinct things, then you're not in a reality where we can effectively communicate about them.

Statement from the listed authors of Stochastic Parrots on the “AI pause” letter

https://www.dair-institute.org/blog/letter-statement-March2023

"Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices."

With @timnitGebru @meg and Angelina McMillan-Major

Big problem: authors often support claim X with a citation to paper Y, even though Y has no bearing on X or even directly refutes X.

Estimates suggest that between 5% and 35% (the latter seems too high to me) of scientific citations do this. It's a grave sin, akin to claiming statistical significance when you clearly don't have it. Yet it's very common.

An extreme case: the first citation in the new FLI letter "Pause Giant AI experiments".

@timnitGebru explains: https://fediscience.org/@timnitGebru@dair-community.social/110110514822795454

Timnit Gebru (she/her) (@[email protected])

The very first citation in this stupid letter, https://futureoflife.org/open-letter/pause-giant-ai-experiments/, is to our #StochasticParrots Paper, "AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1]" EXCEPT that one of the main points we make in the paper is that one of the biggest harms of large language models, is caused by CLAIMING that LLMs have "human-competitive intelligence." They basically say the opposite of what we say and cite our paper?

Distributed AI Research Community
Policy makers: Please don’t fall for the distractions of #AIhype

Below is a lightly edited version of the tweet/toot thread I put together in the evening of Tuesday March 28, in reaction to the open letter put out by the Future of Life Institute that same day…

Medium
The AI moratorium letter only fuels AI hype. It repeatedly presents speculative, futuristic risks, ignoring the version of the problems that are already harming people. It distracts from the real issues and makes it harder to address them. The letter has a containment mindset analogous to nuclear risk, but that’s a poor fit for AI. It plays right into the hands of the companies it seeks to regulate. By @sayashk and me. https://aisnakeoil.substack.com/p/a-misleading-open-letter-about-sci
A misleading open letter about sci-fi AI dangers ignores the real risks

Misinformation, labor impact, and safety are all risks. But not in the way the letter implies.

AI Snake Oil

This is a great interview with @geomblog by Sharon Goldman

"This is a deliberate design choice, by ChatGPT in particular...ChatGPT puts little three dots [as if it’s] “thinking” just like your text message does. ChatGPT puts out words one at a time as if it’s typing. The system is designed to make it look like there’s a person at the other end of it. That is deceptive. And that is not right, frankly."

https://venturebeat.com/ai/sen-murphys-tweets-on-chatgpt-spark-backlash-from-former-white-house-ai-policy-advisor/

Sen. Murphy’s tweets on ChatGPT spark backlash from former White House AI policy advisor

Suresh Venkatasubramanian, former AI advisor to the Biden Administration, shared concerns about Senator Chris Murphy's tweets about ChatGPT.

VentureBeat
For this week's podcast, I spoke to Meredith Broussard about her new book, More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech:
https://techpolicy.press/more-than-a-glitch-a-conversation-with-meredith-broussard/
More Than a Glitch: A Conversation with Meredith Broussard

In a new book, Meredith Broussard considers the ways in which racism, sexism, and ableism are coded into technological systems.

Tech Policy Press