The Iran War provides a clear example of the duality of technology: traffic cameras installed for safety purposes are being repurposed by the enemy for surveillance and for targeting attacks.

With every new technology, from healthcare to defence, this duality should be investigated in order to prevent unintended consequences.

#SystemsThinker #SystemsThinking

@DanielleVossebeld @BjornW one of the things that seems characteristic of current AI systems to me is that their profile of respective strengths and weaknesses makes them disproportionately useful for bad actors: the limits on reliability are a stronger bar to positive contributions than they are to destructive ones, where the ability to simply scale responses dominates

@UlrikeHahn

This asymmetry is not specific to "AI"/LLMs but affects all statistical algorithms. In regulated sectors (finance, medicine) human control aims to dampen the "scaling" of bad outcomes.

E.g., the credit-scoring algorithms used by banks. Many of the same frictions apply: what data are used, how they were collected, how transparent the model is to the client, how to explain outcomes, etc.
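Those explainability frictions can be made concrete with a scorecard-style model, the classic form of bank credit scoring. The feature names, weights, and base score below are entirely hypothetical, purely to illustrate how a transparent model lets each outcome be explained to the client feature by feature:

```python
# Minimal sketch of a linear scorecard credit model (illustrative only:
# feature names, weights, and base score are hypothetical, not from any
# real bank's model).
def credit_score(applicant, weights, base=600):
    """Return (score, per-feature contributions) for an applicant."""
    # Each feature contributes weight * value points; keeping the
    # contributions makes the decision explainable to the client.
    contributions = {f: weights[f] * applicant[f] for f in weights}
    return base + sum(contributions.values()), contributions

weights = {"years_employed": 8, "late_payments": -45, "utilization": -120}
applicant = {"years_employed": 5, "late_payments": 1, "utilization": 0.6}

score, contribs = credit_score(applicant, weights)
# The contributions answer "why was I scored this way?", e.g. the single
# late payment cost this applicant 45 points.
```

A regulator (or the client) can audit exactly which inputs drove the outcome; a generative model offers no comparable decomposition.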

The main novelty of "AI" is big tech's insane privilege of facing no regulation.

@DanielleVossebeld @BjornW

@openrisk @DanielleVossebeld @BjornW that’s a fair point, but there is still a bit of a difference in that those techniques weren’t generative. So their applicability was much more restricted (which in turn affords easier regulatory control)

@UlrikeHahn

yes, while a technical aspect, the generative property does introduce a new and tricky dimension. Not sure the difference is so much in applicability (e.g., supervised machine learning models are very widely applied, on all sorts of data and decision problems) as it is in the "prompts", which couple users to the model outputs in non-trivial ways.

@DanielleVossebeld @BjornW