Facebook has open-sourced two photo- and video-matching technologies to help others identify harmful content such as child exploitation, terrorist propaganda or graphic violence.
The two Facebook technologies can detect identical and nearly identical photos and videos.
“These algorithms will be open-sourced on GitHub so our industry partners, smaller developers and non-profits can use them to more easily identify abusive content and share hashes or digital fingerprints of different types of harmful content,” Guy Rosen, Vice President of Integrity at Facebook, said in a statement late Thursday.
“For those who already use their own or other content matching technology, these technologies are another layer of defence and allow hash-sharing systems to talk to each other, making the systems that much more powerful,” added Antigone Davis, Global Head of Safety.
According to John Clark, President and CEO of the US-based National Center for Missing and Exploited Children (NCMEC), the organisation saw a 541 percent increase in just one year in the number of child sexual abuse videos reported by the tech industry to its CyberTipline.
“We are confident that Facebook’s generous contribution of this open-source technology will ultimately lead to the identification and rescue of more child sexual abuse victims,” said Clark.
Building on Microsoft’s contribution of PhotoDNA to fight child exploitation 10 years ago and the more recent launch of Google’s Content Safety API, Facebook’s announcement is part of an industry-wide commitment to building a safer internet.
Known as “PDQ” and “TMK+PDQF”, these technologies are part of a suite of tools used at Facebook to detect harmful content.
The technologies create an efficient way to store files as short digital hashes that can determine whether two files are the same or similar, even without the original image or video.
Hashes can also be more easily shared with other companies and non-profits, said Facebook.
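The matching described above typically works by comparing compact perceptual hashes rather than the media itself: two files whose hashes differ in only a few bits are treated as near-duplicates. The sketch below illustrates the general idea with a Hamming-distance comparison; it is not Facebook's released code, and the 256-bit hash size and the match threshold here are illustrative assumptions.

```python
# Illustrative sketch of hash-based near-duplicate matching.
# Assumes 256-bit perceptual hashes encoded as 64-char hex strings;
# the threshold value is a hypothetical choice, not Facebook's.

def hamming_distance(hash_a: str, hash_b: str) -> int:
    """Count the bits that differ between two equal-length hex hashes."""
    return bin(int(hash_a, 16) ^ int(hash_b, 16)).count("1")

def is_match(hash_a: str, hash_b: str, threshold: int = 31) -> bool:
    """Treat hashes within `threshold` differing bits as near-duplicates."""
    return hamming_distance(hash_a, hash_b) <= threshold

# Identical files hash to identical strings (distance 0); a slightly
# edited copy yields a hash that differs in only a few bits.
original = "f" * 64            # 256-bit hash as 64 hex characters
near_copy = "e" + "f" * 63     # differs in a single bit
print(hamming_distance(original, near_copy))  # → 1
print(is_match(original, near_copy))          # → True
```

Comparing short hashes like this is what lets companies and non-profits share "digital fingerprints" of known harmful content without ever exchanging the original images or videos.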