
Is Apple planning to peep into your photos? Here’s everything you need to know


Apple announced its new OS update, iOS 15, earlier this year with a lot of exciting new features. However, there is another big addition to the iOS 15 feature list, which the Cupertino tech giant announced yesterday. Yes, you heard right: Apple is going to scan your photos. But that isn't completely true per se; there are a lot of ifs and buts in the announcement.

We’ll help you understand what the new feature is all about and why you shouldn’t really worry about it much.

What’s the news about Apple reading your photos?

As part of the new iOS 15 update, Apple plans to contribute its share to the worldwide fight against child pornography and exploitation. To prevent misuse and help children who are being exploited, Apple will automatically scan the photos on your iCloud and Apple devices and report to the authorities if suspect images are found.

So does Apple have access to all my photos and everyday clicks?

No, Apple doesn’t actually read the content of your photos; it uses a technology called “hashing”. So, rest easy, there’s little reason to worry about your privacy here. The matching process happens entirely on-device, and nothing is uploaded to any third party or to Apple’s servers for this purpose (unless an image is suspected). Your data stays under your control.

In simpler terms, hashing is a process where an image is transformed into a unique numeric code that doesn’t reveal the content of the image but can be used to identify matching or visually similar images.
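To get a feel for the idea, here is a minimal Swift sketch that turns image bytes into a fixed-length fingerprint using an ordinary cryptographic hash (SHA-256 via CryptoKit). Apple’s actual system uses a perceptual “NeuralHash” that also matches resized or re-compressed copies of a picture, so this is only an illustration of the general concept, and the helper name is made up.

```swift
import CryptoKit
import Foundation

// Illustration only: SHA-256 stands in for Apple's perceptual NeuralHash.
// The fingerprint is a fixed-length code that reveals nothing about what
// the picture shows, but identical inputs always produce identical codes.
func fingerprint(of imageData: Data) -> String {
    let digest = SHA256.hash(data: imageData)
    return digest.map { String(format: "%02x", $0) }.joined()
}

let photoBytes = Data("pretend these are JPEG bytes".utf8)
print(fingerprint(of: photoBytes))   // e.g. "3b2a9c…", never the image itself
```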

What exactly does Apple look for and how does this work?

Apple, in collaboration with the National Center for Missing and Exploited Children (NCMEC), a nonprofit clearinghouse for information regarding online child sexual exploitation, will look for any files matching NCMEC’s database of known images using ‘private set intersection’ technology.

The results of this matching are not revealed to the user. The device creates a cryptographic safety voucher that encodes the match data about each image and uploads it to Apple’s iCloud servers along with the image; only once a certain number of matches (the threshold value is kept confidential) is exceeded can Apple read the contents of those vouchers.

Thereafter, Apple will manually review and confirm the flagged images and disable the user’s account, after which law enforcement will handle the case.
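The threshold logic can be sketched roughly as below. This is a heavily simplified illustration, not Apple’s implementation: in the real system the per-photo match results stay cryptographically hidden from Apple (via private set intersection and threshold secret sharing) until the threshold is crossed, and the hash strings and threshold value here are invented for the example.

```swift
import Foundation

// Simplified sketch of the threshold idea. The hash strings and the
// threshold value are invented; Apple keeps the real threshold confidential
// and never sees individual match results below it.
let knownDatabaseHashes: Set<String> = ["a3f1c0de", "9bd07e11"]   // stand-in for NCMEC hashes
let reportingThreshold = 30

func accountCrossesThreshold(uploadedPhotoHashes: [String]) -> Bool {
    // Count uploaded photos whose fingerprint matches the known database.
    let matchCount = uploadedPhotoHashes.filter { knownDatabaseHashes.contains($0) }.count
    // Only at or above the threshold would the safety vouchers become
    // readable and the account move on to human review.
    return matchCount >= reportingThreshold
}

print(accountCrossesThreshold(uploadedPhotoHashes: ["a3f1c0de"]))   // false: one match is not enough
```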

How trustworthy is this model?

Apple’s model, neuralMatch, is powered by AI and was trained on 200,000 images from the National Center for Missing & Exploited Children. The system will only detect images that are categorized as Child Sexual Abuse Material (CSAM), and to minimize mistakes, a threshold must be reached before content is flagged and reported. Apple says it has set the threshold so that there is less than a one-in-a-trillion chance of incorrectly flagging a given account.

Also, users can appeal to Apple if they feel their account has been incorrectly flagged.

When is this feature rolling out, and whom does it concern?

Apple announced the feature on 5th August 2021 and plans to roll it out to users in the USA along with the launch of iOS 15, iPadOS 15, watchOS 8, and macOS Monterey later this year.

