Has anyone worked on methods for supporting #auditors, governance representatives, or #impactassessments in determining whether the outputs of #algorithmicsystems are "acceptable"? Say an #automatedhiringsystem has proven to be #discriminatory. Developers can adjust it, but only up to 90% "fairness" without losing system "efficiency". Who is to determine whether that trade-off is socially, ethically, and morally "acceptable"? And how? @digitalisationofwork