Australia Targets Apple and Microsoft Over Online Child Safety

SINGAPORE — An Australian regulator, after employing new powers that compel technology giants to disclose how they combat child exploitation, lambasted Apple and Microsoft for not doing enough to halt child-exploitation content on their platforms.

The eSafety Commissioner, an office established to safeguard internet users, said that after it issued legal demands for information to some of the world’s largest internet firms, it found that Apple and Microsoft do not proactively screen for child-abuse material in their storage services, iCloud and OneDrive.

The two firms also confirmed that they use no technology to detect the livestreaming of child sexual abuse on the video services Skype and Microsoft Teams, which Microsoft owns, and FaceTime, which Apple owns, the commissioner said in a report.

Decisions over which firms received the first round of notices depended on factors such as the number of complaints to the government, each firm’s reach, and how much information was already public. More companies are likely to be ordered to provide information.

A Microsoft spokesperson said the company is dedicated to halting the proliferation of abuse material, but added that “as threats to children’s safety continue to evolve and bad actors become more sophisticated in their tactics, we continue to challenge ourselves to adapt our response.”

Apple did not immediately respond to a request for comment.

The commissioner indicated that the replies reveal gaps in the child-protection measures of some of the world’s biggest tech firms, increasing public pressure on them to do more.

Meta Platforms, which owns Facebook, Instagram, and WhatsApp, and Snapchat owner Snap also received demands for information.

The responses were generally “alarming” and heightened concerns about “clearly inadequate and inconsistent use of widely available technology to detect child abuse material and grooming,” commissioner Julie Inman Grant said in a statement.

Microsoft and Apple “do not even attempt to proactively detect previously confirmed child abuse material” on their storage services, the report found, despite law-enforcement bodies’ use of a Microsoft-developed detection product.

A recent Apple declaration that it would stop scanning iCloud accounts for child abuse, made to pander to privacy advocates, was “a major step backwards from their responsibilities to help keep children safe,” Grant said. The two firms’ negligence in watching for livestreamed abuse amounted to “some of the biggest and richest technology companies in the world turning a blind eye and failing to take appropriate steps to protect the most vulnerable from the most predatory.”

“No one has yet put the companies’ feet to the fire, saying, ‘what are you doing?’” she said. “I don’t think we know the true scale and scope of the problem. We can’t know the scale of child exploitation until we know what the platforms are doing to detect abuse. We can’t be an effective regulator if we’re constantly trying to regulate with blind spots.”

In August this year, Australia demanded that U.S. technology giants including Apple Inc., Meta Platforms Inc., and Microsoft Corp. provide details on their methods used to clamp down on child-abuse material.

It was the first such request under new laws introduced last year, according to a government statement.

Upon receiving the government request in August, the firms had 28 days to report back on their methods to prevent the proliferation of child-exploitation images, with any delay leading to daily fines of as much as US$383,000.

“We’ve received these notices and are currently reviewing them,” a spokesperson for Meta said in an emailed statement. “The safety of our users is a top priority and we continue to proactively engage with the eSafety Commissioner on these important issues.”

A spokeswoman for Microsoft said the firm would respond to the notice.

The responses would guide government policies about “what needs to be done to protect Australians online,” Minister for Communications Michelle Rowland said in the government statement. She encouraged the industry to accede to the commissioner’s requests.

In February 2021, Facebook blacked out all news content on its Australian site after Canberra implemented laws mandating that social-media platforms pay local publishers for using their content; the firm eventually relented and agreed to pay some local news organizations for access to their stories.

Australia’s Online Safety Bill, passed in June 2021, put the responsibility on internet service companies, rather than on government officials, to halt toxic behavior on their platforms. The act also raised the maximum penalty for online abuse and harassment to five years in prison.

The Online Safety Act 2021 makes Australia’s existing online-safety laws more expansive and much stronger.

According to the Australian government, the act has major ramifications for online service providers because it increases their accountability for the online safety of their users.

Moreover, the act gives the government considerable new powers to “protect all Australians – adults now as well as children – across most online platforms and forums where people can experience harm.” For example, the government can mandate that internet service providers ban access to content depicting “abhorrent violent conduct such as terrorist acts.”

The act also requires firms in the industry to establish new codes to regulate unlawful and restricted material. This covers the “most seriously harmful material, such as videos showing sexual abuse of children or acts of terrorism, through to content that is inappropriate for children, such as high impact violence and nudity.”

The 2021 legislation is just one episode in a spate of clashes between Australia and U.S. tech giants, which come amid bitter disputes, particularly in the United States, over whether child safety should be prioritized over privacy.