Something I have been thinking about since yesterday’s panel: there is a real existential risk in making humans more machine-readable. In characterising the problem with harmful technology as a “bias” issue that can be solved with larger datasets or more diverse development teams, we risk convincing ourselves we have “solved” technology before asking questions like: for what purpose? And what could go wrong?
@kgt a decent doco you can find on YouTube that touches on the folly of simulating nature in the 60s-70s period is Adam Curtis’ ‘All Watched Over by Machines of Loving Grace’. It’s a loose connection to modern AI tech, but it’s more about the engineer mindset that computer systems just need to get more complex to simulate a chaotic system perfectly, and the consequences of that thinking.