Apple advances user security with powerful new data protections





App privacy protections require more than new policies


With the Supreme Court expected to overturn Roe v. Wade, some consumers are rethinking how much of their health data they want to share with mobile apps.

Several period-tracking apps have reassured users that the companies won’t sell or share their details. But many types of apps and programs, even internet searches, generate data, like location histories, that could be used to implicate people seeking abortions.

Jessica Lee, a partner with the law firm Loeb & Loeb, helps companies craft their privacy policies. She says even robust standards can only do so much when it comes to user privacy. The following is an edited transcript of our conversation.

Jessica Lee: Well, I mean, the privacy policy, it’s not an agreement, necessarily. It’s more of a notice or disclosure about what a company’s privacy practices are. So, in terms of what the notice can do, it can only tell the consumer what a company is doing, and then the consumer has to make a decision about whether or not they’re comfortable with those practices.

Jessica Lee (Courtesy Loeb & Loeb)

Kimberly Adams: As somebody who writes privacy policies for companies, have you seen companies, especially tech companies or any of these apps, changing their privacy policies in light of this news?

Lee: Not yet, but I think that those conversations are happening. Companies are likely going back to their privacy policies, but I’ll say the privacy policy is last, right? They need to kind of go to their practices first, understand what they’re doing, who they’re sharing information with, what type of information they’re sharing, and then identify: Are there updates that they need to make to protect their consumers?

Adams: What can an app do, though, if they are presented with a warrant demanding user data?

Lee: That’s a difficult question. Because if law enforcement has gone to court and obtained a lawful warrant for information, a company could try to challenge that warrant. We’ve seen other companies do this. Apple, for example, challenged an effort to get a backdoor into its devices. But depending on the size, resources and policies of a company, it might not want to challenge a warrant. I think that becomes a harder conversation. And consumers really need to understand that even in states or jurisdictions with broad privacy protections, there are usually exceptions for law enforcement.

Adams: And how much legal protection is there on the user side for this kind of data?

Lee: Very little. This data doesn’t fall under HIPAA, the federal law that protects certain health care information. And unless you’re in a state like California, where starting in 2023 you’ll have the right to limit how sensitive health information is used, or in Colorado or Virginia, where companies will be required to ask for your consent before collecting your health information, there aren’t laws on the books in many states that will protect this type of data in the way I think most people expect.

Adams: So how do you anticipate privacy policies within mobile apps changing in the months to come, if at all?

Lee: Part of that is going to depend on how the apps decide to update or change their practices. For example, Apple made moves recently to require apps to have a nutrition label that has more kind of high-level, easy-to-digest information about what an app does. And you might see some of these apps looking to try to make the information about their practices more accessible so that consumers can make a different decision about how they use or interact with that app.

Related Links: More insight from Kimberly Adams

As Jessica pointed out, privacy policies can be strengthened, but they must also be accurate.

Last year, fertility-tracking app Flo Health reached a settlement with the Federal Trade Commission after the company claimed it wasn’t sharing user information with third parties like Google and Facebook. As it turns out, it was.

Here’s the FTC press release about the settlement, which requires Flo to get affirmative consent from users before sharing their data with third parties.

Flo was among several period-tracking apps that, as I mentioned at the top, addressed worried users on social media this week.

In a Twitter thread, the company said: “We have heard concerns surrounding data privacy should Roe v. Wade be overturned. We understand these concerns and want to assure you that your data is safe with Flo.”

Rival period-tracking app Clue had a thread of its own, saying, “We completely understand this anxiety, and we want to reassure you that any health data you track in Clue about pregnancy or abortion is private and safe.”

Clue also highlighted that it’s based in Berlin, and that Europe’s laws grant users additional privacy protections.

Finally, Recode has a piece from last year detailing how police used Facebook to gather evidence against the January 6 Capitol insurrectionists.

Those are the same methods abortion rights supporters are now worried might be used to implicate those seeking abortions in states where it’s illegal.





DuckDuckGo Is Bringing App Tracking Protections to Android


DuckDuckGo has revealed its plan to protect Android users from third-party trackers, which silently collect information to share with companies like Google and Facebook, with a feature modeled after the App Tracking Transparency update Apple released with iOS 14.5.

That feature is called App Tracking Protection. DuckDuckGo says it utilizes an on-device VPN to automatically detect third-party trackers and prevent them from sharing information with their intended recipient. The “on-device” distinction is important, because even though Android recognizes the feature as a VPN, no information is supposed to be sent to an external server.
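DuckDuckGo hasn’t published the blocking logic itself, but the core idea of an on-device tracker blocker can be sketched as a blocklist check applied to every outbound connection before it leaves the device. The domains and function below are illustrative assumptions, not DuckDuckGo’s actual code or blocklist:

```swift
import Foundation

// Hypothetical blocklist; a real one would be far larger and kept up to date.
let trackerDomains: Set<String> = [
    "graph.facebook.com",
    "app-measurement.com",
    "ads.example-tracker.net"
]

/// Returns true if an outbound connection's host matches a blocked
/// domain exactly or is a subdomain of one.
func shouldBlock(host: String) -> Bool {
    trackerDomains.contains { blocked in
        host == blocked || host.hasSuffix("." + blocked)
    }
}

// A local filter would run this check per connection and drop matches,
// so tracker payloads never reach their intended recipient.
print(shouldBlock(host: "graph.facebook.com"))  // true
print(shouldBlock(host: "duckduckgo.com"))      // false
```

Because the check runs entirely on the device, no browsing data has to be sent to an external server to decide what gets blocked, which is the distinction the “on-device VPN” framing is meant to capture.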

“Across all your apps, your personal data is being sent to dozens of third-party companies, thousands of times per week,” the company says. “This data enables tracking networks like Facebook and Google to create even more detailed digital profiles on you. With those profiles, tracking networks can manipulate what you see online, target you with ads based on your behavior, and even sell your data to other companies like data brokers, advertisers, and governments.”

This is a pervasive issue: DuckDuckGo says that “over 96% of the popular free Android apps we tested (based on AndroidRank.org rankings) contained hidden third-party trackers,” with 87% of them sending data to Google and 68% sending data to Facebook. Trackers can send information to other companies, too, many of which the average person has probably never heard of.

App Tracking Protection will be built into DuckDuckGo’s existing Android app, which will allow users to view information about what trackers the feature has blocked, and in what apps, so they can have more insight into how certain developers are treating their information. People can also receive notifications containing summaries of App Tracking Protection’s activity if they so desire.

DuckDuckGo says it’s releasing App Tracking Protection in a private beta so it can fine-tune the experience. Blocking trackers can lead to problems in certain software, so the company has excluded some apps from App Tracking Protection, and it says people who encounter issues in other applications can disable the feature. (Ideally while notifying the company of the problem.)






Report Details ‘Digital’ Border Wall Protections


This summary was featured in Documented’s Early Arrival newsletter.

Three activist groups compiled a report detailing the surveillance tools the U.S. Department of Homeland Security has implemented at the U.S.-Mexico border under the Biden administration. The so-called digital border wall relies on technology including surveillance towers, drones, cameras and automated license plate readers. Biometrics, such as DNA, facial and voice recognition and iris scans, can be used to monitor individuals. Phone and vehicle surveillance technologies are also in place. And CBP One, a new mobile application, may also be used to gather personal information and biometrics on asylum seekers before they come to the U.S. Deanna Garcia for Documented.

In other federal immigration news…

Officials Expand Where Immigration Agents Can’t Make Arrests

On Wednesday, the Biden administration issued a new policy that expands the list of locations, labeled as “sensitive,” in which ICE officers and border agents can’t make arrests. These places include domestic violence shelters, homeless shelters, and playgrounds. This is the Biden administration’s latest effort to refocus immigration enforcement on “serious” targets instead of sowing fear through unfocused arrests and detentions within the U.S. The new policy comes months after a memo restricted arrests in courthouses and weeks after another policy barred worksite raids, which became common in the Trump era. BuzzFeed News

Biden Spending $100 Billion on Immigration

The Biden administration announced Thursday it plans on setting aside roughly $100 billion for immigration issues in its $1.75 trillion social spending package. The House, meanwhile, released its own plan for providing legal status to undocumented individuals. In a statement, the White House said the framework would “reform our broken immigration system” and includes provisions aimed at “reducing backlogs, expanding legal representation, and making the asylum system and border processing more efficient and humane.” The House released a draft text of its bill, which included an arrangement to allow undocumented immigrants who arrived in the U.S. by 2020 to apply for legal status. The bill also includes a plan to recover about 226,000 unused visas. The Hill



Facebook launches new initiative to fight against iOS 14 ad tracking protections


Facebook is continuing its media blitz to try to force Apple to drop its plans to limit ad tracking across Apple platforms, launching a new sub-site aimed at small business owners that rails against the ad tracking protections in iOS 14.

A new blog post on Facebook’s business page asks readers to “tell your story.” Small business owners are encouraged to download a toolkit containing frames and social media image tools to help voice their opposition to Apple’s ad tracking decision.

The new feature introduced with iOS 14 will go into effect in early 2021 and Apple says developers must implement the change or face expulsion from the App Store. Users will be opted out of ad tracking automatically then prompted with a dialog asking if they’d like to opt in.
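For developers, the change means tracking is gated behind a system prompt from Apple’s AppTrackingTransparency framework. A minimal sketch of the opt-in request might look like the following; the response handling is illustrative, and the app also needs an NSUserTrackingUsageDescription entry in its Info.plist:

```swift
import AppTrackingTransparency
import AdSupport

func requestTrackingPermission() {
    // Presents the system opt-in dialog; users are opted out until
    // they explicitly authorize tracking.
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // Only now is the advertising identifier (IDFA) available.
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("Tracking authorized, IDFA: \(idfa)")
        case .denied, .restricted, .notDetermined:
            // Without consent, the IDFA is zeroed out.
            print("Tracking unavailable")
        @unknown default:
            print("Unhandled authorization status")
        }
    }
}
```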

Switching to opt-in and placing a dialog in front of users could cause significant adoption of the ad tracking limiting feature. Facebook has aired concerns that this will directly impact its revenue going forward.

An example of a social media post provided by Facebook

Facebook’s small business blog suggests that it is up to small business owners to voice their discontent, and the impact on their revenue, on social media. The post tells them to use personal stories and spread the word to their partners in order to fight this together.

The initiative tells readers to directly call out Apple and “speak up for the millions of small businesses affected by Apple’s update.” This echoes the full-page newspaper ads Facebook took out against Apple.

Facebook tracks users across dozens of metrics on its website, in apps, and across the web. Facebook even partners with retail companies to track its users in the real world, all for the sake of “targeted advertising.” Rather than posting a generalized ad aimed at an audience, it seeks to use the mountains of data it has on users to show a very specific ad instead.

Apple wants to stop this level of sophisticated tracking. This kind of invasive business practice is exactly what Apple’s privacy campaigns and iOS features are meant to protect users against.

Facebook says that limiting data collection will limit its targeting abilities and thus its revenue. Facebook CEO Mark Zuckerberg has even said that Apple’s initiatives will “impair COVID-19 recovery.”

Apple’s ad tracking protection is expected to launch in early 2021 despite Facebook’s protests. Apple has also introduced a new privacy label for the App Store that shows exactly what data an app will ask for. Facebook-owned WhatsApp has spoken out against the privacy labels, calling them anticompetitive.



40% of COVID-19 contact tracing apps lack basic protections


Guardsquare announced the release of a report reassessing the security protections and privacy risks of COVID-19 contact tracing apps. The report found that of the 95 mobile apps analyzed, 60% use the official API for secure exposure notifications. For the remaining 40%, the majority of which gather GPS location data, security is paramount, yet it lags.

“It is always important to follow security best practices during the development of any application which handles sensitive user data, and that is even more true when that app is a vital tool in the worldwide fight against the pandemic. Contact tracing apps gathering user location data and personally identifiable information are especially attractive targets for exploitation, further reinforcing the need for developers to implement essential security protections,” said Grant Goodes, Chief Scientist at Guardsquare.

Majority of apps lacked basic security protections

Contact tracing apps have been commissioned and distributed by governments around the world to track and notify individuals of exposure to COVID-19 so they can take appropriate action in order to prevent the spread of the virus.

Government-sponsored COVID-19 contact tracing Android apps were first analyzed in June 2020, and the vast majority were found to lack even basic security protections. For this report, the original Android apps (except those no longer in use) were reanalyzed, new apps that have since emerged were added, and iOS apps were included to derive insights into the two market-leading mobile operating systems.

Prevalent use of Exposure Notification API

In the updated analysis, it was found that use of the Exposure Notification API developed by Apple and Google is much more prevalent than in the June report. Notably, 62% of the Android apps and 58% of the iOS apps are using the API.

However, the contact tracing apps that don’t use the Exposure Notification API apply either a minimal level of fundamental security protections or none at all.
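For reference, on iOS the Exposure Notification API is surfaced through the ExposureNotification framework’s ENManager class, and its use is entitlement-gated and restricted to public health authorities. A minimal, illustrative activation sketch might look like this:

```swift
import ExposureNotification

let manager = ENManager()

// Activate the framework, then switch exposure notifications on.
// A real health-authority app would also download diagnosis keys
// and run risk scoring; that is omitted here.
manager.activate { error in
    if let error = error {
        print("Activation failed: \(error)")
        return
    }
    manager.setExposureNotificationEnabled(true) { error in
        if let error = error {
            print("Could not enable exposure notifications: \(error)")
        } else {
            print("Exposure notifications enabled")
        }
    }
}
```

Because the framework handles proximity detection with rotating Bluetooth identifiers on the device, apps built on it never need raw GPS coordinates, which is why the report treats them as having minimal security concerns.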

The research reveals that although progress has been made, security and privacy issues persist among contact tracing apps. In particular, the analysis found that apps using GPS, Bluetooth, or a combination of the two to collect sensitive data operate in ways that endanger users’ security and privacy.

Key findings of COVID-19 contact tracing apps

  • 33% of iOS and 20% of Android apps had no protection
  • 61% of iOS and 75% of Android apps had one or two security protections
  • 6% of iOS and 5% of Android apps had three or four security protections
  • 0% of iOS and Android apps had five or more security protections

According to the assessment, the apps based on the Exposure Notification API have minimal security concerns. Alternate routes to detecting exposure via proximity to infected individuals (employing GPS, building custom Bluetooth proximity detection, or both) raise significant security and privacy concerns.

Unprotected mobile applications that gather GPS data and require sensitive identity credentials risk exploitation and potentially flagrant violations of user data privacy.

“Apps, especially applications downloaded by users on mobile devices requiring personal or location data, should always incorporate proper security protections and code hardening techniques to ensure that the privacy of the data they are collecting is sufficiently protected,” Goodes said.

“To successfully combat the spread of COVID-19, contact tracing app security should be at the forefront for developers, public health authorities, and governments.”



Improving Siri’s privacy protections


At Apple, we believe privacy is a fundamental human right. We design our products to protect users’ personal data, and we are constantly working to strengthen those protections. This is true for our services as well. Our goal with Siri, the pioneering intelligent assistant, is to provide the best experience for our customers while vigilantly protecting their privacy.

We know that customers have been concerned by recent reports of people listening to audio Siri recordings as part of our Siri quality evaluation process — which we call grading. We heard their concerns, immediately suspended human grading of Siri requests and began a thorough review of our practices and policies. We’ve decided to make some changes to Siri as a result.

How Siri Protects Your Privacy

Siri has been engineered to protect user privacy from the beginning. We focus on doing as much on device as possible, minimizing the amount of data we collect with Siri. When we store Siri data on our servers, we don’t use it to build a marketing profile and we never sell it to anyone. We use Siri data only to improve Siri, and we are constantly developing technologies to make Siri even more private. 

Siri uses as little data as possible to deliver an accurate result. When you ask a question about a sporting event, for example, Siri uses your general location to provide suitable results. But if you ask for the nearest grocery store, more specific location data is used.

If you ask Siri to read your unread messages, Siri simply instructs your device to read aloud your unread messages. The contents of your messages aren’t transmitted to Siri’s servers, because that isn’t necessary to fulfill your request.

Siri uses a random identifier — a long string of letters and numbers associated with a single device — to keep track of data while it’s being processed, rather than tying it to your identity through your Apple ID or phone number — a process that we believe is unique among the digital assistants in use today. For further protection, after six months, the device’s data is disassociated from the random identifier.
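Apple doesn’t describe this mechanism in code, but the idea can be illustrated with a short sketch: a random identifier stands in for any account identity, and rotating it on a schedule breaks the link between a device and its older data. Everything below is an assumption for illustration, not Apple’s implementation:

```swift
import Foundation

// Illustrative only: a random per-device identifier, never derived from
// an Apple ID or phone number, rotated so old data is disassociated.
struct RequestIdentifier {
    private(set) var value = UUID().uuidString
    private(set) var createdAt = Date()

    /// Replace the identifier once it is roughly six months old.
    /// Data stored under the old value can no longer be tied back
    /// to this device.
    mutating func rotateIfExpired(maxAge: TimeInterval = 60 * 60 * 24 * 182) {
        if Date().timeIntervalSince(createdAt) > maxAge {
            value = UUID().uuidString
            createdAt = Date()
        }
    }
}
```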

In iOS, we offer details on the data Siri accesses, and how we protect your information in the process, in Settings > Siri & Search > About Ask Siri & Privacy.

How Your Data Makes Siri Better

In order for Siri to more accurately complete personalized tasks, it collects and stores certain information from your device. For instance, when Siri encounters an uncommon name, it may use names from your Contacts to make sure it recognizes the name correctly.

Siri also relies on data from your interactions with it. This includes the audio of your request and a computer-generated transcription of it. Apple sometimes uses the audio recording of a request, as well as the transcript, in a machine learning process that “trains” Siri to improve.

Before we suspended grading, our process involved reviewing a small sample of audio from Siri requests — less than 0.2 percent — and their computer-generated transcripts, to measure how well Siri was responding and to improve its reliability. For example, did the user intend to wake Siri? Did Siri hear the request accurately? And did Siri respond appropriately to the request?

Changes We’re Making

As a result of our review, we realize we haven’t been fully living up to our high ideals, and for that we apologize. As we previously announced, we halted the Siri grading program. We plan to resume later this fall when software updates are released to our users — but only after making the following changes:

  • First, by default, we will no longer retain audio recordings of Siri interactions. We will continue to use computer-generated transcripts to help Siri improve. 
  • Second, users will be able to opt in to help Siri improve by learning from the audio samples of their requests. We hope that many people will choose to help Siri get better, knowing that Apple respects their data and has strong privacy controls in place. Those who choose to participate will be able to opt out at any time. 
  • Third, when customers opt in, only Apple employees will be allowed to listen to audio samples of the Siri interactions. Our team will work to delete any recording which is determined to be an inadvertent trigger of Siri.

Apple is committed to putting the customer at the center of everything we do, which includes protecting their privacy. We created Siri to help them get things done, faster and easier, without compromising their right to privacy. We are grateful to our users for their passion for Siri, and for pushing us to constantly improve.

For more information: Siri Privacy and Grading