machine learning is a dangerous technology that is making life worse for everyone. deploying a model in a way that could cause harm to the public should require a license and liability insurance. https://archive.is/C7uVc
since your model is so quantifiably reliable, you should have no problem calculating the extent of your potential tort liability. what's that? you're **not** comfortable entrusting your personal financial future to the outcomes of your machine learning model?? but you want to expose **other people** to that model??? interesting!
@peter When people sue the local police department for these violations, it needs to be for huge sums of money, and they need to name the "AI" company involved as well. The first will hopefully make departments think twice about using this technology. The second will probably get tossed because of the terms-of-use agreement with the department, but at least it will make the company pay for the lawyer time to argue that in court.
@peter I think there's a pretty good argument that, despite the official terms of use stating "AI can make mistakes," they could still be held liable. If the vast majority of a company's marketing says "this tool is great and you should use it!" while the fine print (which the person harmed never consented to) says "this tool often causes great bodily harm," the company has misrepresented its product and should be liable for damages.
@Jumpmed yeah, I mean you can't just slap a "this car may spontaneously combust" sticker on a vehicle and expect to get away with selling a harmful product. they think they're being clever, but they're not.