- cross-posted to:
- technology@lemmy.world
Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an AI model designed to flag unknown CSAM at upload. It's the first AI technology aimed at exposing unreported CSAM at scale.
Sharing it with people and companies that it wasn’t being shared with before.
The same way it is now: people reporting it and undercover police accounts. People recognise it.
If it's going to be used as evidence in court, a human will have to review and confirm it. I don't think "because the AI said so" is going to convince juries.
Or if it's you or someone you love who is in the CP. Having further copies of it on further hard drives, whether so someone can bake it into their AI tool or for any other purpose, is wrong. That's just my view, though.