Want AI that flags hateful content? Build it.

The challenge asks for two different models. The first, an intermediate-level task, is a model that identifies hateful images; the second, an advanced challenge, is a model that attempts to fool the first one. “That actually mimics how it works in the real world,” says Chowdhury. “The do-gooders make one approach, and then the bad guys make an approach.” The goal is to engage machine-learning researchers on the topic of mitigating extremism, which may lead to the creation of new models that can effectively screen for hateful images.

A core challenge of the project is that hate-based propaganda can be highly dependent on context. Someone without a deep understanding of certain symbols or signifiers may not be able to tell what even qualifies as propaganda for a white nationalist group.

“If [the model] never sees an example of a hateful image from a part of the world, then it’s not going to be any good at detecting it,” says Jimmy Lin, a professor of computer science at the University of Waterloo, who is not associated with the bounty program.

This effect is amplified around the world, since many models don’t have a vast knowledge of cultural contexts. That’s why Humane Intelligence decided to partner with a non-US organization for this particular challenge. “Most of these models are often fine-tuned to US examples, which is why it’s important that we’re working with a Nordic counterterrorism group,” says Chowdhury.

Lin, though, warns that solving these problems may require more than algorithmic changes. “We have models that generate fake content. Well, can we develop other models that can detect fake generated content? Yes, that is certainly one approach to it,” he says. “But I think overall, in the long run, training, literacy, and education efforts are actually going to be more beneficial and have a longer-lasting impact. Because you’re not going to be subjected to this cat-and-mouse game.”

The challenge will run until November 7, 2024. Two winners will be selected, one for the intermediate challenge and another for the advanced; they will receive $4,000 and $6,000, respectively. Participants will also have their models reviewed by Revontulet, which may decide to add them to its current suite of tools to combat extremism.
