Do proponents of online ‘safety by design’ consider that it includes automated content filtering? The latest OSAN evidence submission refers to ‘content-neutral’ measures - which is what I always thought it was supposed to mean. 1/4 https://committees.parliament.uk/writtenevidence/163572/default/
Last year’s submissions to Ofcom’s Additional Measures consultation, however, left me thoroughly confused. E.g. 2/4 https://www.cyberleagle.com/2026/02/safety-by-design-or-systems-for-content.html
Semi-seriously, I propose a moratorium on advocacy for online safety by design until its proponents have figured out what, concretely, it does and (perhaps more importantly) does not mean. Let’s face it, if it includes automated content filtering, is it really anything more than an empty slogan? 3/4
Even then, there is a more than lurking suspicion that the theory was crafted specifically with a few large social media platforms in mind, and is liable to collapse if extended beyond that. 4/4 https://www.cyberleagle.com/2026/02/safety-by-design-or-systems-for-content.html
@cyberleagle I can't help thinking that if govt had put all the money and effort associated with the OSA into media education & parent awareness raising we would be in a better place.