The Australian eSafety Commissioner is set to gain the power to force the nation’s telcos to block certain content during crisis events.
In a statement released on Sunday, Prime Minister Scott Morrison said blocks would be used to “protect Australians from exposure to violent events online like the Christchurch terrorist attacks”.
“The shocking events that took place in Christchurch demonstrated how digital platforms and websites can be exploited to host extreme violent and terrorist content,” Morrison said.
“That type of abhorrent material has no place in Australia and we are doing everything we can to deny terrorists the opportunity to glorify their crimes, including taking action locally and globally.”
The eSafety Commissioner is in talks with telcos about options to “block access to specific domains hosting terrorist or extreme violent material”, the statement said.
Alongside the rules, the government would also establish a 24/7 Crisis Coordination Centre to inform government agencies of “online crisis events” and aid the eSafety Commissioner in making a “rapid assessment” during such situations.
“This new protocol will better equip our agencies to rapidly detect and shut down the sharing of dangerous material online, even as a crisis may still be unfolding,” Home Affairs Minister Peter Dutton said.
The new rules were recommended by the Taskforce to Combat Terrorist and Extreme Violent Material Online, which includes Facebook, Google, Amazon, Microsoft, Twitter, Telstra, Vodafone, Optus, and TPG.
The government said the taskforce will provide a detailed implementation plan to government by the end of September.
“Legislative options will be used if digital platforms fail to improve the safety of their services, and address the gaps laid bare by the Christchurch terrorist attacks,” the statement added.
A video of the terror attack in Christchurch was viewed around 4,000 times on Facebook, and 29 minutes passed before it was first reported, Facebook said previously.
Despite its removal, users attempted to re-upload the video approximately 1.5 million times in the first 24 hours after the attack. Over 1.2 million of those uploads were blocked at the point of upload, however, leaving approximately 300,000 copies published on the network.
On YouTube, a copy of the video was uploaded once every second during the first 24 hours following the terrorist attack.
“Those tech companies, I do not for a moment believe they wanted to see their platforms used for such a vile, heinous act, but I don’t think it’s enough for us to say, well in accepting that we all want an open and a free and secure internet that we have to accept that these kinds of activities will happen as a by-product — we don’t have to accept that, but we do have to put our minds collectively to the solution,” New Zealand Prime Minister Jacinda Ardern said previously.
“I’m not willing to sit back and say it can’t be done.”
In May, 18 nations — Australia, Canada, the European Commission, France, Germany, Indonesia, India, Ireland, Italy, Japan, Jordan, the Netherlands, New Zealand, Norway, Senegal, Spain, Sweden, and the United Kingdom — and eight tech companies — Amazon, Daily Motion, Facebook, Google, Microsoft, Qwant, Twitter, and YouTube — signed up to the Christchurch Call to attempt to eliminate terrorist and violent extremist content online, and stop the internet from being used as a tool for terrorists.
The United States did not sign up to the Call.
On Monday, Morrison announced a partnership with the OECD aimed at increasing the transparency of the tech giants.
“I’m very pleased to say that Australia, together with New Zealand and the OECD, is funding a project to develop Voluntary Transparency Reporting Protocols on preventing, detecting, and removing terrorist and violent extremist content from online platforms,” Morrison said.
“I am determined to keep driving global support, building on our G20 Statement calling on internet companies to step up and take action. We know the internet has no borders, and terrorist and violent extremist exploitation of the internet is a global problem.”