Projects where advanced AI is central
The AviaTor project is funded by the European Union and is currently in its final phase. The project started in 2019 and will end in 2024. The project aimed to build a prioritisation tool / database for law enforcement processing NCMEC reports (also known as industry reports or CyberTipline referrals). The AviaTor database provides law enforcement with the tooling to prioritise these reports. AviaTor stands for Augmented Visual Intelligence and Targeted Online Research, meaning that the AviaTor database can use visual intelligence as well as OSINT and hash matching to de-duplicate and prioritise reports. AviaTor is currently used by 19 national law enforcement agencies.
ZiuZ Forensic, Web-IQ, Timelex, DFKI, INHOPE
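The hash matching that AviaTor uses to de-duplicate reports can be pictured with a minimal sketch. This is illustrative only, not AviaTor's actual implementation: the report fields (`id`, `media`) and the use of SHA-256 as the fingerprint are assumptions.

```python
import hashlib

def file_hash(data: bytes) -> str:
    """Cryptographic hash used as an exact-match fingerprint for media."""
    return hashlib.sha256(data).hexdigest()

def deduplicate(reports: list[dict]) -> list[dict]:
    """Keep the first report per media hash; later duplicates are dropped,
    but their report IDs are recorded on the surviving entry so no
    referral is lost from the prioritisation queue."""
    seen: dict[str, dict] = {}
    for report in reports:
        h = file_hash(report["media"])
        if h in seen:
            seen[h].setdefault("duplicates", []).append(report["id"])
        else:
            seen[h] = report
    return list(seen.values())
```

In practice a perceptual hash (e.g. PhotoDNA) would typically be used alongside a cryptographic one, so that near-identical media also collapse into a single entry; the de-duplication logic stays the same.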
Camera Vision is a machine learning tool that identifies images that contain both nudity and a child.
CameraForensics is an online portal allowing users to search for other images on the internet taken with a particular camera to support victim identification investigations. This tool also supports searching geographically for similar and identical images and for various metadata elements. All with the intent of helping investigators link covert and overt (public) personas on the internet. The tool is available via an online user interface and also via a number of products such as Griffeye Analyze, or as a fully integrated solution for agency or country-level databases (via API).
CameraForensics
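Linking images taken with the same camera can be pictured as an index keyed by a camera fingerprint drawn from image metadata. The sketch below is illustrative only; the field names (`make`, `model`, `serial`) and record layout are assumptions, not CameraForensics' API.

```python
from collections import defaultdict

def build_camera_index(images: list[dict]) -> dict:
    """Group image records by a camera fingerprint derived from
    metadata fields (make, model, body serial number), so images
    from the same physical camera can be linked across sources."""
    index = defaultdict(list)
    for img in images:
        key = (img.get("make"), img.get("model"), img.get("serial"))
        index[key].append(img["url"])
    return index
```

Querying the index with the fingerprint of a seized image then returns every known public image from the same camera, which is the kind of covert-to-overt persona link the description mentions.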
Clearview AI acts as a search engine of publicly available images (now more than ten billion) to support investigative and identification processes by providing highly accurate facial recognition across all demographic groups. Similar to other search engines, which pull and compile publicly available data from across the Internet into an easily searchable universe, Clearview AI compiles only publicly available images from across the Internet into a proprietary image database to be used in combination with Clearview AI's facial recognition technology. When a Clearview AI user uploads an image, Clearview AI's proprietary technology processes the image and returns links to publicly available images that contain faces similar to the person pictured in the uploaded image.
Clearview AI
The Content Safety API sorts through many images and prioritizes the most likely child sexual abuse material (CSAM) content for review. The classifier can target content that has not been previously confirmed as CSAM.
Krunam provides breakthrough technology that identifies and classifies previously unknown CSAM images and video at scale. Our CSAM classifier protects your online community, your employees, your brand's reputation, and, last but not least, millions of children around the world.
Krunam
Luxand's technology detects facial features quickly and reliably. The SDK processes an image, detects human faces within it, and returns the coordinates of 70 facial feature points including eyes, eye contours, eyebrows, lip contours, nose tip, and so on.
Luxand
Instead of logging in to separate data systems, agents, detectives and investigators can conduct a single search for a suspect, target, or location and return data from all relevant systems. Palantir Gotham features an intuitive, user-friendly interface backed by powerful data integration software, providing access to all available information related to an investigation in one place. Data is secured at a granular level (down to individual attributes describing each piece of data), ensuring that users can only see information for which they are authorized. Users can also collaborate on investigations and share information. Palantir Gotham can integrate data in any format, including existing case management systems, evidence management systems, arrest records, warrant data, subpoenaed data, records management system (RMS) or other crime-reporting data, Computer-Aided Dispatch (CAD) data, federal repositories, gang intelligence, suspicious activity reports, Automated License Plate Reader (ALPR) data, and unstructured data such as document repositories and emails. Access restrictions can be applied broadly based on the data source, or granularly, down to individually securing attributes and metadata (e.g., a building's address or the timestamp associated with a photograph). These permissions govern how users access data. All interactions, including searching, viewing, and editing data, are recorded in a tamper-evident audit log.
Palantir
PredPol uses a machine-learning algorithm to calculate its predictions on where crimes will occur. Historical event datasets are used to train the algorithm for each new city (ideally 2 to 5 years of data). It then updates the algorithm each day with new events as they are received from the department. This information comes from the agency's records management system (RMS). PredPol uses ONLY 3 data points (crime type, crime location, and crime date/time) to create its predictions. No personally identifiable information is ever used. No demographic, ethnic or socio-economic information is ever used. This eliminates the possibility for privacy or civil rights violations seen with other intelligence-led policing models.
PredPol
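PredPol's underlying model is not described here, so as a purely illustrative sketch of working with only those three data points, the following bins historical events into spatial grid cells per crime type. The grid size, coordinate scheme and event record format are all assumptions for illustration, not PredPol's algorithm.

```python
from collections import Counter

CELL = 0.005  # grid cell size in degrees (illustrative choice)

def cell_of(lat: float, lon: float) -> tuple[int, int]:
    """Map a coordinate onto a discrete grid cell."""
    return (int(lat // CELL), int(lon // CELL))

def hotspot_counts(events) -> Counter:
    """Count historical events per (crime type, grid cell).
    Each event carries only type, location, and date/time;
    the timestamp is unused in this simplified count."""
    counts = Counter()
    for etype, lat, lon, _when in events:
        counts[(etype, cell_of(lat, lon))] += 1
    return counts
```

A real system would weight recent events more heavily and model near-repeat effects over time; this sketch only shows that type, location and date/time are sufficient inputs for a hotspot count.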
Qumodo Discover is a search engine for evidence. This technology uses AI to intelligently find connections within digital data at web scale: users can query the system with an image, and it will find other images containing the same places, people, faces and objects. It has been tested with more than 100 million images and validated in a victim ID environment with child sexual exploitation and abuse (CSEA) content.
Qumodo
Rekognition Image is an image recognition service that detects objects, scenes, and faces; extracts text; recognizes celebrities; and identifies inappropriate content in images. It also allows you to search and compare faces. Rekognition Image is based on the same deep learning technology developed by Amazon's computer vision scientists to analyze billions of images daily for Prime Photos. This service only requires users to pay for the number of images, or minutes of video, being analyzed and the face data being stored for facial recognition. There are no minimum fees or upfront commitments. Rekognition Video is a video recognition service that extracts motion-based context from stored or live-stream videos and allows them to be analysed. It detects activities; understands the movement of people in frame; and recognizes objects, celebrities, and inappropriate content in videos stored in Amazon S3 and live video streams from Acuity. Rekognition Video also detects persons and tracks them through the video even when their faces are not visible, or as the whole person might go in and out of the scene. For example, this could be used in an application that sends a real-time notification when someone delivers a package to your door. Rekognition Video also allows you to index metadata, like objects, activities, scene, celebrities, and faces, to simplify video search. With Amazon Rekognition, you only pay for the number of images, or minutes of video, you analyze and the face data you store for facial recognition. There are no minimum fees or upfront commitments.
Amazon
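The inappropriate-content detection above is exposed through Rekognition's `DetectModerationLabels` operation. The sketch below parses a response of that documented shape; the commented-out service call uses boto3's real client but needs AWS credentials, and the bucket and file names are hypothetical.

```python
def flag_unsafe(response: dict, min_confidence: float = 80.0) -> list[str]:
    """Extract moderation label names above a confidence threshold
    from a Rekognition DetectModerationLabels response."""
    return [
        label["Name"]
        for label in response.get("ModerationLabels", [])
        if label["Confidence"] >= min_confidence
    ]

# Calling the service itself (requires AWS credentials; names are illustrative):
# import boto3
# client = boto3.client("rekognition")
# response = client.detect_moderation_labels(
#     Image={"S3Object": {"Bucket": "my-bucket", "Name": "photo.jpg"}},
#     MinConfidence=60,
# )
# print(flag_unsafe(response))
```

Pricing follows the pay-per-image model described above: each `detect_moderation_labels` call counts as one analyzed image.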
Safer is a child sexual abuse material (CSAM) detection, review, removal and reporting pipeline. It allows small and medium-sized companies to have the same CSAM-fighting tools as the largest ones.
Thorn
TraffickCam allows anyone with a smartphone to fight sex trafficking when they travel by uploading photos of hotel rooms to a law enforcement database. Photos uploaded to the free TraffickCam app are added to an enormous database of hotel room images. Federal, state and local law enforcement securely submit photos of hotel rooms used in the advertisement of sex trafficking victims to TraffickCam. Features such as patterns in the carpeting, furniture, room accessories and window views are matched against the database of traveler images to provide law enforcement with a list of potential hotels where the photo may have been taken to identify location of victims.
Exchange Initiative
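Matching a query photo against a large database of room images is commonly done with perceptual hashes or learned image features. The average-hash sketch below illustrates the general idea only; it is not TraffickCam's actual method, and the tiny grayscale grids and hotel IDs are made up for illustration.

```python
def average_hash(pixels: list[list[int]]) -> int:
    """Perceptual hash of a small grayscale image: each bit records
    whether a pixel is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def nearest(query_hash: int, db: dict, max_dist: int = 10) -> list[str]:
    """Return candidate hotel IDs whose stored hash is within
    max_dist bits of the query hash, closest first."""
    hits = [(hamming(query_hash, h), hotel) for hotel, h in db.items()]
    return [hotel for d, hotel in sorted(hits) if d <= max_dist]
```

Because the hash reflects coarse brightness structure rather than exact bytes, slightly different photos of the same room land a few bits apart, which is why the lookup tolerates small Hamming distances instead of requiring exact matches.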
Vigil AI automatically detects, evaluates and categorizes child sexual abuse imagery. The system is capable of determining the severity of the sexual act in the image (using the legacy UK 1-5 Category SAP Scale or the current UK Categories A-C). The tool is available as part of Qumodo Classify, via a Cloud API or via a standalone API for tools such as Griffeye Analyze. The tool scales linearly and can categorize millions of images per hour.
Qumodo