Keeping Children Safe Online With Limited Resources: Analyzing What is Seen and Heard

Organization

Singidunum University

Description

The article examines how AI can analyse what is shown on a mobile device's screen and played through its audio output in order to detect bullying, pornography and sexual harassment. Unlike earlier approaches, which only process text or images extracted from the screen, this system analyses all activity as the user sees and hears it. The model achieves an average accuracy of 88% when classifying text, for example when identifying sexism and racism, and 95% accuracy when detecting pornography.
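For context, the sketch below illustrates the general idea of classifying text read off a device's screen into harm categories. It is not the paper's model: the categories, training snippets and the small TF-IDF plus neural-network pipeline are assumptions chosen for illustration only.

```python
# Illustrative sketch only: a toy classifier for on-screen text.
# The labels, examples and model choice are assumptions, not the
# pipeline described in the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical labelled snippets as they might appear on screen.
texts = [
    "you are worthless, nobody likes you",            # bullying
    "girls cannot do math, stay in the kitchen",      # sexism
    "want to meet up for football practice later?",   # neutral
    "great job on the science project!",              # neutral
]
labels = ["bullying", "sexism", "neutral", "neutral"]

# A small neural classifier over TF-IDF features; the paper's actual
# architecture and feature extraction are not reproduced here.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)
model.fit(texts, labels)

print(model.predict(["nobody likes you, loser"]))  # expected: ['bullying']
```

In a real deployment the input text would come from whatever is currently rendered on the screen (e.g. via OCR or accessibility APIs) rather than from a hand-written list, and the model would be trained on a much larger labelled corpus.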

File
Keeping_Children_Safe_Online_With_Limited_Resources_Analyzing_What_is_Seen_and_Heard.pdf
Link
Tags
Prevention, Clustering/Classification, Neural networks
Type
Research (peer reviewed)
Year
2021
Stella Polaris Knowledge Center