
Detection

All reports

Name, Organization, Description, File, Tags, Year, Type, Link, Status, Self reference, Crime Phase, Publication Type, Technology

Cornell University

This paper proposes an approach to detecting online sexual predatory chats and abusive language using the open-source pretrained Llama 2 7B-parameter model, recently released by Meta GenAI. We fine-tune the LLM on datasets of different sizes, imbalance degrees, and languages (i.e., English, Roman Urdu, and Urdu). Leveraging the power of LLMs, our approach is generic and automated, without the manual search for a synergy between feature-extraction and classifier-design steps typical of conventional methods in this domain. Experimental results show strong performance: the proposed approach performs proficiently and consistently across three distinct datasets in five sets of experiments. This study's outcomes indicate that the proposed method can be implemented in real-world applications (even with non-English languages) to flag sexual predators, offensive or toxic content, hate speech, and discriminatory language in online discussions and comments, helping maintain respectful internet and digital communities.

Machine learning, Natural Language Processing
2023
Research (peer reviewed)
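The entry above notes that the LLM was fine-tuned on datasets with different imbalance degrees. One standard way to counter such label imbalance (a generic sketch, not code from the paper) is inverse-frequency class weighting, where rare classes receive proportionally larger loss weights:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-class weights inversely proportional to class frequency:
    weight_c = total / (n_classes * count_c), so rarer classes weigh more."""
    counts = Counter(labels)
    total = len(labels)
    n_classes = len(counts)
    return {c: total / (n_classes * n) for c, n in counts.items()}

# Hypothetical, heavily imbalanced binary dataset: 90 benign vs 10 predatory chats
labels = ["benign"] * 90 + ["predatory"] * 10
weights = inverse_frequency_weights(labels)  # predatory gets ~9x the benign weight
```

These weights would then be passed to the training loss so that misclassifying the minority ("predatory") class costs more.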

King Faisal Specialist Hospital and Research Centre, Princess Nora bint Abdul Rahman University, Mississippi State University

Child abuse is a major problem in both developing and developed countries. Medical practitioners and law enforcement authorities have typically tackled the problem using conventional approaches. However, modern methods now exist to screen, detect, and predict child abuse using artificial intelligence (AI). This article therefore critically reviews currently available AI tools used in child abuse screening, including data mining, computer-aided drawing systems, self-drawing tools, and neural networks.

Detection, Child-focused, Neural networks
2023
Research (peer reviewed)

Norwegian University of Science and Technology

The aim of this research is to provide techniques that increase children's security on online chat platforms. The research project divides the online grooming detection problem into several subproblems, including author profiling, predatory conversation detection, predator identification, and data limitation issues. The present article presents a literature review of available datasets and grooming detection techniques.

https://www.sciencedirect.com/science/article/pii/S0950705122011327
Detection, Clustering/Classification
2022
Research (peer reviewed)

Australian Institute of Criminology

Report by the Australian Institute of Criminology that applies machine learning to financial transaction data to identify characteristics of offenders who live stream child sexual abuse (CSA) in high volumes. The analysis showed that factors such as transaction frequency and monetary value are important and can help identify these crimes in financial transaction data. Furthermore, the offenders did not appear to have engaged in violent offending; rather, they had criminal histories of low-harm offences.

Financial transactions, Artificial intelligence, Supervised learning, Machine learning, Criminal investigation, Child Sexual Abuse Material (CSAM)
2021
Research (peer reviewed)

Humboldt-Universität zu Berlin

A report on how deep learning transformer models can classify grooming attempts. The authors created a dataset that was later used by Viktor Bowallius and David Eklund in the report Grooming detection of chat segments using transformer models, where an F1 score of 0.98 was achieved.

Natural Language Processing, Clustering/Classification
2021
Research (peer reviewed)
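The 0.98 F1 score cited in the entry above is the harmonic mean of precision and recall. A minimal sketch of the computation from confusion-matrix counts (the counts here are hypothetical, chosen only to illustrate the formula):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall.
    tp/fp/fn are true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 98 correctly flagged grooming segments,
# 2 false alarms, 2 missed segments
score = f1_score(tp=98, fp=2, fn=2)  # 0.98
```

Unlike raw accuracy, F1 does not reward a classifier for simply predicting the majority class, which matters because grooming conversations are rare relative to benign chat.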

Singidunum University

The article examines how AI can analyse activity on mobile screens and audio ports to detect bullying, pornography, and sexual harassment. Unlike previous experiments, this AI sees all activity as the user sees it, rather than only processing text or images extracted from the screen. The model achieves an average accuracy of 88% when classifying texts, such as identifying sexism and racism, and 95% accuracy in detecting pornography.

Prevention, Clustering/Classification, Neural networks
2021
Research (peer reviewed)

Nalla Malla Engineering College, Galgotias University, Vellore Institute of Technology

The report evaluates how well an AI model can detect child sexual abuse via surveillance cameras.

Child Sexual Abuse Material (CSAM), Neural networks
2021
Research (peer reviewed)