A NSFW detector with CoreML
Other samples:

- Android: https://github.com/nipunru/nsfw-detector-android
- Flutter: https://github.com/ahsanalidev/flutter_nsfw

I feel it's a good idea for those building native clients for Lemmy to integrate projects like these and run offline inference on feed content for the time being, to cover content that is not marked NSFW. What does everyone think about enforcing further censorship on the client side, especially in open-source clients, as long as it pertains to this type of content?
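For the CoreML route, a minimal sketch of what client-side inference could look like, using Apple's Vision framework to drive a bundled image classifier. The model name `NSFWDetector`, the `"NSFW"` label, and the 0.8 threshold are all assumptions; they depend entirely on which model you convert and bundle:

```swift
import CoreML
import Vision
import UIKit

enum NSFWCheckError: Error { case noResult }

/// Runs an offline NSFW classification on an image before it is shown in the feed.
/// Assumes a compiled classifier "NSFWDetector.mlmodelc" is bundled with the app
/// and emits classification labels such as "NSFW" / "SFW" (hypothetical names).
func checkNSFW(in image: UIImage,
               completion: @escaping (Result<Bool, Error>) -> Void) {
    guard let cgImage = image.cgImage else {
        completion(.failure(NSFWCheckError.noResult))
        return
    }
    do {
        // Load the bundled, compiled CoreML model.
        let config = MLModelConfiguration()
        guard let url = Bundle.main.url(forResource: "NSFWDetector",
                                        withExtension: "mlmodelc") else {
            completion(.failure(NSFWCheckError.noResult))
            return
        }
        let mlModel = try MLModel(contentsOf: url, configuration: config)
        let vnModel = try VNCoreMLModel(for: mlModel)

        // Vision handles scaling/cropping the image to the model's input size.
        let request = VNCoreMLRequest(model: vnModel) { request, error in
            if let error = error {
                completion(.failure(error))
                return
            }
            guard let top = (request.results as? [VNClassificationObservation])?.first else {
                completion(.failure(NSFWCheckError.noResult))
                return
            }
            // Threshold is app policy, not part of the model; tune to taste.
            completion(.success(top.identifier == "NSFW" && top.confidence > 0.8))
        }
        try VNImageRequestHandler(cgImage: cgImage).perform([request])
    } catch {
        completion(.failure(error))
    }
}
```

A client could call this when an image post lacks the NSFW flag and blur or hide it when the result comes back `true`, the same way flagged posts are already handled.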