The article examines how AI can analyse activity on mobile screens and audio output to detect bullying, pornography, and sexual harassment. Unlike previous experiments, the model sees all activity exactly as the user sees it, rather than processing only texts or images extracted from the screen. It achieves an average accuracy of 88% when classifying text, for instance when detecting sexism and racism, and 95% accuracy when detecting pornography.
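The flow the summary implies (capture what is on screen, recover the text, classify it for harm) can be sketched roughly as below. This is illustrative only: the model name "org/harm-text-classifier" is a hypothetical placeholder, the OCR step via pytesseract is an assumption, and the label set is invented; none of this is the article's actual implementation.

```python
# Sketch of a screen-level moderation pipeline: OCR the visible screen,
# then score the recovered text with a harm classifier.
from PIL import Image
import pytesseract
from transformers import pipeline

# Hypothetical model name; the article's real classifier is not named here.
classifier = pipeline("text-classification", model="org/harm-text-classifier")

def classify_screen(screenshot_path: str) -> dict:
    """OCR a screenshot and classify the recovered text."""
    text = pytesseract.image_to_string(Image.open(screenshot_path))
    if not text.strip():
        return {"label": "no_text", "score": 1.0}
    # Crude character truncation; a real system would chunk by tokens.
    return classifier(text[:512])[0]

print(classify_screen("screen.png"))  # e.g. {'label': 'harassment', 'score': 0.93}
```

The key difference from earlier approaches, as the summary notes, is that classification runs over whatever is rendered on screen rather than over text intercepted from individual apps.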
World Childhood Foundation
Bracket Foundation and UNICRI Centre for AI and Robotics
BI Norwegian Business School, Norwegian University of Science and Technology
University of Pennsylvania, Columbia University
B. S. Abdur Rahman Crescent Institute of Science and Technology
King Faisal Specialist Hospital and Research Centre, Princess Nora bint Abdul Rahman University, Mississippi State University
Technological University Dublin
Technological University Dublin
Technological University Dublin
Zurich Institute of Forensic Medicine
Adhiyamaan College of Engineering
The Economist Intelligence Unit
Australian Institute of Criminology
Auckland University of Technology
Humboldt-Universität zu Berlin
Nalla Malla Engineering College, Galgotias University, Vellore Institute of Technology
Üsküdar University Medical Faculty, Istanbul, Turkey
University of Edinburgh and George Mason University
The Economist Intelligence Unit
ITU/UNESCO Broadband Commission for Sustainable Development
University of New Haven / Digital Forensic Research Workshop
Institute of Electrical and Electronics Engineers (IEEE) and Mississippi State University
Department of Psychology, University of Gothenburg