Proteams Information Tech

Apple’s new Child Safety features receive backlash, but Apple isn’t backing down

Many are disputing the changes and feel as though they could do more harm than good


Apple has been working with child safety experts to bring a new level of safety to children who use its devices. Three areas of Apple’s platforms are getting new safety features. First, new communication tools will help parents stay better informed about how their children navigate and communicate online: the Messages app will use on-device machine learning to warn about sensitive content, while keeping communications unreadable by Apple. Second, iOS and iPadOS will use a new application of cryptography to limit the spread of CSAM (Child Sexual Abuse Material) online, helping Apple provide information to law enforcement when CSAM is detected in iCloud Photos. Third, updates to Siri and Search will give children and parents help and information if they encounter unsafe situations online, and will intervene when users attempt to search for CSAM-related topics.


The features will be available in iOS 15, iPadOS 15, watchOS 8, and macOS Monterey later this year. While many people are gladly welcoming this new approach to keeping children safe, several others are disputing it and feel it could do more harm than good.


The new features, which were announced last Thursday, involve taking hashes of images that are uploaded to iCloud and comparing them with a database containing hashes of known CSAM images. Doing the comparison this way keeps user data encrypted and runs the analysis on the device, while still allowing users who are found to be sharing or downloading harmful material to be reported to the authorities. Alongside this, an optional feature lets a parent or guardian choose whether to receive warnings if their child under 13 sends, receives, or views images containing sexually explicit content.

One of the most vocal opponents is the EFF (Electronic Frontier Foundation), a digital rights group, which published a press release stating that it believes Apple’s new Child Safety measures could be abused by governments and would diminish user privacy. The head of WhatsApp, Edward Snowden, an instructor at Harvard’s Cyberlaw Clinic, and others have all chimed in to say, essentially, what an awful idea this is.
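Before getting to the specific objections, it may help to sketch what matching an image against a database of known hashes looks like in principle. The snippet below is a deliberately simplified, hypothetical illustration: the type and function names are invented for this article, and a plain SHA-256 digest stands in for Apple’s NeuralHash perceptual hash and its private set intersection and threshold secret sharing machinery.

```swift
import Foundation
import CryptoKit

// Hypothetical sketch only: these names are not Apple APIs, and a real
// system would use a perceptual hash (NeuralHash), not SHA-256.
struct KnownImageHashDatabase {
    /// Hex-encoded digests of known images (placeholder data).
    let knownHashes: Set<String>

    /// Returns true if the image's digest matches an entry in the database.
    func matches(imageData: Data) -> Bool {
        let digest = SHA256.hash(data: imageData)
        let hex = digest.map { String(format: "%02x", $0) }.joined()
        return knownHashes.contains(hex)
    }
}

// Usage: check a photo on device before it is uploaded to cloud storage.
let database = KnownImageHashDatabase(knownHashes: [
    "placeholder-hash-1", "placeholder-hash-2",
])
let photo = Data() // stand-in for image bytes loaded from the photo library
if database.matches(imageData: photo) {
    // In Apple's described design, a match only contributes to an encrypted
    // "safety voucher"; nothing becomes readable or reportable until a
    // threshold number of matches is crossed.
    print("Image matches an entry in the known-hash database")
}
```

Even this toy version makes the critics’ core worry visible: the matching logic runs on the device, so whoever controls the contents of the hash database controls what gets flagged.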


Will Cathcart, head of WhatsApp, said in a Twitter thread that his company won’t be adopting the same safety measures and called Apple’s approach “very concerning”. WhatsApp’s existing system for fighting child exploitation relies in part on user reports, which preserves encryption much as Apple’s approach does, and resulted in the company reporting over 400,000 cases to the National Center for Missing and Exploited Children in 2020. Matthew Green, an associate professor at Johns Hopkins University, blasted the feature even before it was publicly announced, tweeting about how the hashing system could be abused by governments and malicious actors. Kendra Albert, an instructor at Harvard’s Cyberlaw Clinic, posted a thread on the potential dangers to LGBTQ children, noting that Apple originally didn’t specify age ranges for the parental notifications feature.


“Even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor,” the Electronic Frontier Foundation wrote. “We’ve already seen this mission creep in action. One of the technologies originally built to scan and hash child sexual abuse imagery has been repurposed to create a database of ‘terrorist’ content that companies can contribute to and access for the purpose of banning such content.”


In an FAQ, Apple has attempted to alleviate concerns that its new anti-child-abuse features could be turned into surveillance tools by authoritarian governments, potentially putting children and others at risk, for example under an anti-LGBTQ regime. “Let us be clear, this technology is limited to detecting CSAM [child sexual abuse material] stored in iCloud and we will not accede to any government’s request to expand it,” the company writes. The FAQ doesn’t address similar concerns about the feature that scans Messages for sexually explicit content, and it doesn’t explain how Apple will ensure the tool focuses only on that type of content and not on other material, such as personal information stored on the phone.


Although Apple appears to be trying to do the right thing on the subject of CSAM and child exploitation, there may be loopholes through which the wrong kinds of information end up in the wrong people’s hands, which would create a whole new issue altogether.




Keep up to date with the latest tech industry insights and trends, as well as information technology, app development, and small business content, with the Proteams Blog.


Follow us on LinkedIn for updates on the latest tech news.




