Our tools have complementary capabilities. They can be used jointly, and with other solutions, to meet different needs.
Content Safety API: Classifying previously unseen images and videos
CSAI Match: Matching known abusive video segments
Content Safety API
Used for: Classifying previously unseen images and videos
The Content Safety API classifier uses programmatic access and artificial intelligence to help our partners classify and prioritize billions of images and videos for review. The higher the priority assigned by the classifier, the more likely the media file contains abusive material, helping partners order their human review. The Content Safety API issues only a prioritization recommendation for content sent to it; partners must conduct their own review to determine whether to take action on the content.
Operationally, we recommend organizations use the Content Safety API right before the manual review process, to classify, prioritize, and help organize their review queue. The Content Safety API can be used in parallel with other solutions, like YouTube's CSAI Match video hashing tool or Microsoft's PhotoDNA, each of which addresses different needs.
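As a concrete illustration of that queue-organization step, here is a minimal sketch in Python. The item IDs and priority scores are invented for illustration and are not real API output; the point is simply that higher-priority items surface first for human review.

```python
import heapq

# Hypothetical (item_id, priority) pairs as a classifier might return them;
# a higher priority means the content is more likely to be abusive.
classified = [("img_001", 0.12), ("vid_042", 0.97), ("img_117", 0.55)]

# Build a max-priority review queue by negating priorities (heapq is a min-heap),
# so the highest-priority items are reviewed first.
queue = [(-priority, item_id) for item_id, priority in classified]
heapq.heapify(queue)

review_order = []
while queue:
    _neg_priority, item_id = heapq.heappop(queue)
    review_order.append(item_id)

print(review_order)  # highest-priority item first
```

A real integration would feed this queue continuously as classification results arrive, but the ordering principle is the same.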
How it works

1. File retrieval
Files are retrieved by the partner in multiple forms: for example, reported by a user, or identified by crawlers or filters that the partner has created to moderate content on their platform.
(Diagram: partner content sources include user-reported images or videos, crawlers, and pre-filters such as porn or other classifiers)
2. API review
The media files are then sent to the Content Safety API via a simple API call. They are run through classifiers to determine review priority, and the priority value for each piece of content is sent back to the partner.
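The shape of that round trip can be sketched as follows. The field names ("content", "priority"), the base64 content encoding, and the response layout here are illustrative assumptions, not the real Content Safety API schema; approved partners receive the actual API specification.

```python
import base64
import json

# Hypothetical request builder: package media bytes for a classification call.
def build_request(media_bytes: bytes, mime_type: str) -> str:
    return json.dumps({
        "content": base64.b64encode(media_bytes).decode("ascii"),
        "content_type": mime_type,
    })

# Hypothetical response parser: extract the priority value.
# Higher priority -> more likely abusive -> review sooner.
def parse_priority(response_body: str) -> float:
    return json.loads(response_body)["priority"]

request_body = build_request(b"fake-image-bytes", "image/jpeg")
priority = parse_priority('{"priority": 0.91}')
print(priority)
```

The partner would send `request_body` over an authenticated HTTPS call and use the returned priority to rank the item in their review queue.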
(Diagram: Google Content Safety API, classifier technology)

3. Manual review
Partners use the priority value to prioritize the files that need attention first for manual review.
4. Take action
Once image and video files have been manually reviewed, the partner can then take action on the content in accordance with local laws and regulations.
CSAI Match
Used for: Matching known abusive video segments
CSAI Match is YouTube's proprietary technology for combating CSAI (Child Sexual Abuse Imagery) videos online. This technology was the first to use hash-matching to identify known violative content and allows us to identify this type of violative content amid a high volume of non-violative video content. When a match of violative content is found, it is then flagged to partners to review, confirm, and responsibly report in accordance with local laws and regulations. YouTube makes CSAI Match available to partners in industry and NGOs, giving them access to fingerprinting software and an API to identify matches against our database of known abusive content.
Online platforms can prevent violative content from being displayed and shared on their sites by using CSAI Match to compare their content against one of the largest indices of known CSAI content. CSAI Match is simple for partners to integrate into their system, allowing them to better scale challenging content management.
How it works

1. Video fingerprinting
A video is uploaded to the partner's platform. The CSAI Match Fingerprinter, which runs on the partner's platform, creates a Fingerprint file of the video: a digital ID that uniquely represents the content of the video file.
(Diagram: partner video file, Fingerprinter, Fingerprint file)

2. API review
The partner sends the Fingerprint file via the CSAI Match API to be compared with the other files in YouTube's Fingerprint repository. The repository contains Fingerprints of known abusive content detected by YouTube and Google.
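The fingerprint-and-check flow can be sketched as a toy stand-in. The fingerprint function and repository below are illustrative only: a cryptographic hash is used to make the flow concrete, whereas the real Fingerprinter captures visual content so that matches survive re-encoding.

```python
import hashlib

# Toy fingerprint: NOT how CSAI Match works internally; a real fingerprint
# is derived from the video's visual content, not its raw bytes.
def fingerprint(video_bytes: bytes) -> str:
    return hashlib.sha256(video_bytes).hexdigest()

# Stand-in for the shared repository of known abusive fingerprints.
known_fingerprints = {fingerprint(b"known-abusive-video")}

# Stand-in for the API call: positive or negative match.
def check_match(video_bytes: bytes) -> bool:
    return fingerprint(video_bytes) in known_fingerprints

print(check_match(b"known-abusive-video"))  # positive match -> manual review
print(check_match(b"unrelated-video"))      # negative match -> no known content
```

In the real integration, only the Fingerprint file leaves the partner's platform; the video itself is never sent to YouTube.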
(Diagram: YouTube CSAI Match API, CSAI Match technology, shared CSAI Fingerprint repository)
3. Manual review
A positive or negative match is returned to the partner once the API call completes. Based on the match information, the partner manually reviews the video to verify that it is CSAI.
4. Take action
Once the video has been reviewed, the partner can take action on the content in accordance with local laws and regulations.
Interested in using our toolkit?

Testimonials

"Google's hash matching solution has revolutionized our workflows and led to better, faster results. Because of Google's significant contribution to the National Center for Missing & Exploited Children, automated processes reduce the need for human review of previously reported CSAM, which minimizes our staff's exposure. It's critical that these images are taken down as quickly as possible, because every time a child's photo is re-shared, they are re-victimized all over again. This also allows us to focus our work on new and unknown child victims and survivors." - National Center for Missing & Exploited Children
"Implementing Google's Content Safety API has been a huge win for our platform, which hosts hundreds of millions of user-uploaded documents. The API efficiently scans large volumes of images for potentially offensive material, helping to ensure a safe digital environment for our users. Its high throughput and accuracy allowed us to focus on review, intervention, and support." - Scribd, Inc.
"Google's Child Safety tools have enabled us to validate that the safety measures we have in place are working, and we continue to see record low single-digit incidences of child sexual abuse material on Nextdoor." - Nextdoor
"As Substack continues to expand its video capabilities, ensuring the safety of our community, especially protecting minors, remains a top Trust and Safety priority. Leveraging advanced Child Sexual Abuse Imagery (CSAI) detection technologies has been instrumental in meeting these critical safety challenges. Google's tools enable us to swiftly and accurately identify and remove harmful content, ensuring that our platform remains a safe space for creators and audiences alike." - Substack
"CSAI Match remains an important pillar of Adobe's child safety program. Leveraging this technology has proved to be an effective means to help detect child sexual abuse videos more efficiently at scale." - Adobe
"Combatting the rise of video-based child sexual abuse material presents new challenges and requires new technology. CSAI Match has proven to be part of the solution, enabling us to better detect and remove video CSAM from our platforms." - Yahoo!
"The Content Safety API helps analysts working on identifying images depicting sexual abuse on minors with prioritization. Given the ever-increasing volume of reports to process, successfully prioritizing those reports for review is a real challenge. This technology enables analysts to see reports that include content showing sexual abuse on minors faster, and therefore lets them act faster to protect victims and remove the content." - Point de Contact
"Easy to integrate, fast to respond, stable and accurate, the Content Safety API has significantly reduced the amount of time spent by the content analysts reviewing CSAM images, optimized the analysis process and had a positive impact on their well-being." - Safernet Brasil
We have options to support both raw content bytes and embeddings derived from media files. Get in touch for more details.
Industry and civil society third parties seeking to protect their platform against abuse can sign up to access the Content Safety API. Applications are subject to approval.
We believe the best approach to tackling online child exploitation is to collaborate with other companies and NGOs. We have long worked across industry and with NGOs to support the development of new data-driven tools, boost technical capacity, and raise awareness. We believe making these tools widely available, so our partners can use AI to better review content at scale, is an important part of this fight.
CSAI Match is designed for video, but through Google's Content Safety API, a collection of tools is available to industry and NGO partners, offering machine learning-powered classification for images. Learn more.
The match will identify which portion of the video matches known CSAI, as well as a standardized categorization of the type of content that was matched.
CSAI Match detects near-duplicate segments of known CSAI content. This includes full duplicates that MD5 hash matching would catch, as well as near-duplicates such as re-encodings, obfuscations, truncations, or scalings of CSAI videos, even if a video contains only a small portion of CSAI, possibly mixed with non-CSAI content. Partners run a fingerprinting binary to produce a "fingerprint" of the video, a byte sequence similar to an MD5 hash. This is then sent to Google's CSAI Match service, which is specifically designed for efficiency when scanning a video against YouTube's corpus of known CSAI references.
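The segment-level matching idea can be sketched with a toy sliding-window hash. Real fingerprints use robust visual features rather than cryptographic hashes of frame data, so this only illustrates the matching logic: a known clip is found even when it is embedded inside a longer video with unrelated content around it.

```python
import hashlib

SEGMENT = 4  # toy segment length in frames; illustrative only

def segment_hashes(frames):
    """Hash every contiguous run of SEGMENT frames (a sliding window)."""
    return {
        hashlib.sha256("".join(frames[i:i + SEGMENT]).encode()).hexdigest()
        for i in range(len(frames) - SEGMENT + 1)
    }

# Reference fingerprints for a known video (frames stand in for visual features).
known = segment_hashes(["k1", "k2", "k3", "k4", "k5"])

# An uploaded video that embeds part of the known video amid other content:
upload = ["a", "b", "k1", "k2", "k3", "k4", "c"]
matches = segment_hashes(upload) & known

print(bool(matches))  # True: a known segment was found inside the upload
```

Note that a whole-file hash like MD5 would miss this case entirely, since the uploaded file differs from the reference; matching per segment is what makes partial and mixed content detectable.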