A new study by the Stanford Internet Observatory, shared with 404 Media ahead of publication, reports that the researchers found the suspected instances of CSAM through a combination of perceptual and cryptographic hash-based detection and analysis of the images themselves.

Largest Dataset Powering AI Images Removed After Discovery of Child Sexual Abuse Material
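For readers unfamiliar with those two techniques, the sketch below shows the general idea in Python: a cryptographic hash only matches byte-for-byte copies of a known file, while a perceptual hash also catches resized or re-encoded variants of it. The file handling, the Pillow/imagehash libraries, and the placeholder hash values are my own illustrative assumptions, not the researchers' actual tooling or hash lists.

```python
# Illustrative sketch only: the general shape of hash-based matching against
# a list of known images. The hash values below are placeholders, not real
# entries from any abuse-material database.
import hashlib

from PIL import Image
import imagehash  # third-party library: pip install ImageHash

# Hypothetical sets of hashes for previously identified images.
KNOWN_SHA256 = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}
KNOWN_PHASH = {imagehash.hex_to_hash("8f373714acfcf4d0")}

def cryptographic_match(path: str) -> bool:
    """Exact match: flags only byte-identical copies of a known file."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in KNOWN_SHA256

def perceptual_match(path: str, max_distance: int = 5) -> bool:
    """Fuzzy match: flags visually similar images (resized, re-encoded, cropped
    slightly) by comparing perceptual hashes within a small Hamming distance."""
    phash = imagehash.phash(Image.open(path))
    return any(phash - known <= max_distance for known in KNOWN_PHASH)

def flag_image(path: str) -> bool:
    return cryptographic_match(path) or perceptual_match(path)
```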

Reading Not with a Bug, the same point is made over and over again: it is currently impossible to protect AI models from bad data or from leaking data. Once a model is poisoned with bad data, there is no way to recover the model to the state it was in before that data was fed in.

Yes, that means a responsible AI model owner would throw the model out at this point to get the CSAM out, because the research into adversarial AI says you can never recover it. How much research is there to say this? Roughly two papers a day on breaking AI have been published for the last several years, and they all say the same thing: AI can't be protected and can't be recovered once it has gone off the rails with bad data.
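As a toy illustration of that claim (my own sketch, not an example from the book or from those papers), here is what label-flip poisoning does to a simple scikit-learn classifier. The only real remedy in the sketch is retraining from scratch with the poisoned rows excluded, which for a large model is exactly the "throw it out" option.

```python
# Toy poisoning sketch (illustrative only): flip the labels of part of one
# class in the training set, fit the same model on clean vs. poisoned data,
# and show that the fix is retraining without the bad rows, not patching the
# already-poisoned model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Targeted label-flip: relabel 40% of the class-0 training samples as class 1.
rng = np.random.default_rng(0)
class0_idx = np.where(y_train == 0)[0]
flipped_idx = rng.choice(class0_idx, size=int(0.4 * len(class0_idx)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flipped_idx] = 1

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# The only clean remedy: drop the poisoned rows and retrain from scratch.
keep = np.ones(len(y_train), dtype=bool)
keep[flipped_idx] = False
retrained_model = LogisticRegression(max_iter=1000).fit(X_train[keep], y_poisoned[keep])

print("clean accuracy    :", clean_model.score(X_test, y_test))
print("poisoned accuracy :", poisoned_model.score(X_test, y_test))  # noticeably lower
print("retrained accuracy:", retrained_model.score(X_test, y_test))
```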
