This article examines how AI can analyse activity on mobile screens and audio ports to detect bullying, pornography, and sexual harassment. Unlike previous experiments, this AI observes all activity as the user sees it, rather than processing only texts or images retrieved from the screen. The model achieves an average accuracy of 88% when classifying texts, for example as sexist or racist, and 95% accuracy when detecting pornography.
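The reported 88% figure is an average over several text categories. As a minimal sketch of how such an average might be computed, the snippet below evaluates per-category accuracy and then averages it; the labels, predictions, and resulting numbers are invented for illustration and do not come from the paper.

```python
# Hypothetical sketch: averaging per-category classification accuracy.
# All data below is invented for illustration, not taken from the study.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Invented (true, predicted) label pairs for two text categories.
results = {
    "sexism": ([1, 0, 1, 1, 0, 1, 0, 1, 0, 1],
               [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]),
    "racism": ([0, 1, 1, 0, 1, 0, 1, 1, 0, 0],
               [0, 1, 1, 0, 1, 0, 1, 0, 1, 0]),
}

per_category = {name: accuracy(t, p) for name, (t, p) in results.items()}
average = sum(per_category.values()) / len(per_category)
print(per_category, average)
```

A real evaluation would of course use a held-out test set and a trained classifier; this only illustrates the averaging step behind a headline accuracy figure.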