Keeping Children Safe Online With Limited Resources: Analyzing What is Seen and Heard

Description

The article examines how AI can analyze activity on a mobile device's screen and audio output to detect bullying, pornography, and sexual harassment. Unlike previous approaches, the model analyzes all activity as the user sees and hears it, rather than only individual texts or images extracted from the screen and processed separately. It achieves an average accuracy of 88% on text classification tasks such as identifying sexism and racism, and 95% accuracy in detecting pornography.
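The article itself does not publish code, but the general idea (classify both the on-screen text and the rendered imagery as the user sees them) could be sketched roughly as below using the Hugging Face `transformers` pipeline API. The model identifiers, label names, and the `screen_is_flagged` helper are placeholders for illustration, not the authors' actual models.

```python
# Minimal sketch (not the paper's implementation): a captured screen frame is
# assumed to be available as an image file, with its visible text already
# extracted (e.g. via OCR). Two classifiers then screen the text and the image.
from transformers import pipeline

TEXT_MODEL = "some-org/abusive-text-classifier"   # placeholder model id
IMAGE_MODEL = "some-org/nsfw-image-classifier"    # placeholder model id

text_clf = pipeline("text-classification", model=TEXT_MODEL)
image_clf = pipeline("image-classification", model=IMAGE_MODEL)

# Assumed label names; a real model would define its own label set.
FLAGGED_TEXT_LABELS = {"sexism", "racism", "bullying"}
FLAGGED_IMAGE_LABELS = {"nsfw", "pornography"}


def screen_is_flagged(on_screen_text: str, screenshot_path: str,
                      threshold: float = 0.8) -> bool:
    """Flag a captured frame if its text or its imagery looks harmful."""
    text_pred = text_clf(on_screen_text)[0]       # {'label': ..., 'score': ...}
    image_pred = image_clf(screenshot_path)[0]    # {'label': ..., 'score': ...}
    text_hit = (text_pred["label"].lower() in FLAGGED_TEXT_LABELS
                and text_pred["score"] >= threshold)
    image_hit = (image_pred["label"].lower() in FLAGGED_IMAGE_LABELS
                 and image_pred["score"] >= threshold)
    return text_hit or image_hit
```

Analyzing the rendered screen rather than intercepted messages is what distinguishes this approach from per-app text or image filters; a sketch like the one above would therefore be run on periodic screen captures regardless of which app is in the foreground.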

Link
Tags
Prevention, Clustering/Classification, Neural networks
Type
Research (peer reviewed)
Year
2021