Tool preventing AI mimicry cracked; artists wonder what’s next
But just as Glaze’s userbase is spiking, a bigger priority for the Glaze Project has emerged: protecting users from attacks that disable Glaze’s protections—including attack methods exposed in June by online security researchers in Zurich, Switzerland. In a paper published on arXiv.org without peer review, the Zurich researchers, including Google DeepMind research scientist Nicholas Carlini, claimed that Glaze’s protections could be “easily bypassed, leaving artists vulnerable to style mimicry.”