Google plans to make it harder for terrorists to exploit its platform by introducing new measures designed to help the search giant remove extremist content more quickly and efficiently.
The measures — announced in The Financial Times (FT) on Sunday — come after the UK government raised questions about whether social media platforms have become breeding grounds and safe havens for terrorists.
Google general counsel Kent Walker wrote in the FT that Google already has thousands of people around the world reviewing content, in addition to image-matching technology that prevents videos from being re-uploaded once they have been removed.
But Walker admitted that Google and others in the tech industry need to do more to combat the rise of extremism. “While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done,” he wrote. “Now.”
In a bid to address the problem, Walker said that Google intends to:
- put more engineering resources into training software that uses artificial intelligence to identify videos promoting extremism — “We have used video analysis models to find and assess more than 50% of the terrorism-related content we have removed over the past six months. We will now devote more engineering resources to apply our most advanced machine learning research to train new ‘content classifiers’ to help us more quickly identify and remove such content.”
- hire more people to flag inappropriate YouTube videos — “Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech.”
- take a tougher stance on videos that do not clearly violate YouTube’s policies — “Videos that contain inflammatory religious or supremacist content; in …”
Source: Business Insider