David Robinson

@dgrobinson
377 Followers
132 Following
11 Posts
Author of "Voices in the Code," a book on democratizing AI: http://amzn.to/3JtsHkg. Faculty at Apple University. Plugged in at UC Berkeley's AFOG and Social Science Matrix.
Next was an excellent panel on #AI #bias prevention at the #RhodesTrust with Elaine Nsoesie, @dgrobinson, and Muhammed Razzak. The topics covered here, from equity to algorithm design to prioritizing particular bias problems, are expansive and treated with the care they deserve. Highly recommend https://www.youtube.com/watch?v=yH76s6BIi7c (7/12) #AIEthics
Tech & Society 2022: 6A. Equity: Bias Prevention, not just Mitigation

I used ChatGPT to help me write a peer review. It didn't help at all. There is a big difference between being really cool and being a useful tool. This experience provides some lessons for how we should evaluate LLMs and other new AI. https://freedom-to-tinker.com/2023/03/08/can-chatgpt-and-its-successors-go-from-cool-to-tool/
Can ChatGPT—and its successors—go from cool to tool?

"By using digital surveillance to enforce rules, we focus our attention on an apparent order that allows us to ignore the real problems in the industry ... But under the actual order, the problem in trucking is that drivers are incentivized to work themselves ... to death."

Karen Levy on trucker trackers: https://www.newyorker.com/culture/annals-of-inquiry/surveillance-and-the-loneliness-of-the-long-distance-trucker

@alexcengler ... and, *that* argument would have far-reaching implications beyond the realm of auditing.
@alexcengler .... i.e. that selling powerful machine learning systems for important applications _without_ accompanying expertise is a recipe for trouble...
@alexcengler My primary concern would not be the availability of compute, but of (1) expertise and (2) test data. Many downstream users (e.g., police departments with respect to facial recognition) do indeed have "no idea how their system works" -- they lack, and won't develop, that expertise in house. Part of the pitch from platform vendors is that the end-using business doesn't need to. One might say that such a business model is per se irresponsible...
@geomblog true but on the other hand.... legs are coming!
@alexcengler I consider an analogous question about automated decision-making systems in Voices in the Code
@alexcengler Interesting idea. I think one thing this argument misses is that it's costly to audit large models. Parties that are down the value chain from a large-model provider are located there partly _because_ they lack the resources to cook up resource-intensive AI applications from scratch. It likely follows that they also lack the resources to conduct this type of analysis themselves.
i'm here too.