Something I have been thinking about since yesterday’s panel: there is a real existential risk in making humans more machine-readable. By characterising the problem of harmful technology as a “bias” issue that can be solved with larger datasets or more diverse development teams, we risk convincing ourselves we have “solved” technology before asking questions like: for what purpose? And what could go wrong?