Apple on Thursday suspended its Siri grading program, which seeks to make the virtual assistant more accurate by having workers review snippets of recorded audio, after a contractor raised privacy concerns about the quality control process.
Now, Apple’s competitors in the space, namely Google and Amazon, are making similar moves to address criticism of their own audio review policies.
Shortly after Apple’s announcement, Google said in a statement to Ars Technica on Friday that it, too, had halted a global initiative to review Google Assistant audio. Like Siri grading, Google’s process runs audio clips past human operators to improve the system’s accuracy.
Unlike in Apple’s case, however, a contractor at one of Google’s international review centers leaked 1,000 recordings to VRT NWS, a news organization in Belgium. In a subsequent report in July, the publication claimed it was able to identify people from the audio clips, adding that a number of snippets captured “conversations that should never have been recorded and during which the command ‘OK Google’ was clearly not given.”
The VRT leak prompted German authorities to open an investigation into Google’s review program and impose a three-month ban on transcribing voice recordings.
“Shortly after we learned about the leaking of confidential Dutch audio data, we paused language reviews of the Assistant to investigate. This paused reviews globally,” Google told Ars Technica.
Google did not disclose the global halt until Friday.
Amazon is also taking steps to temper negative press about its privacy practices and on Friday rolled out a new Alexa option that allows users to opt out of human reviews of audio recordings, Bloomberg reports. Enabling the feature in the Alexa app excludes recorded audio snippets from analysis.
“We take customer privacy seriously and continuously review our practices and procedures,” an Amazon spokeswoman said. “We’ll also be updating information we provide to customers to make our practices more clear.”
Amazon came under fire in April after a report revealed the company records, transcribes and annotates audio clips recorded by Echo devices in an effort to train its Alexa assistant.
While it may come as a surprise to some, human review of voice assistant audio is common practice in the industry; it falls to the tech companies to anonymize and protect that data to preserve customer privacy.
Apple’s method is outlined in a security white paper (PDF) that notes the company ingests voice recordings, strips them of identifiable information, assigns a random device identifier, and saves the data for six months, during which time the system can tap the information for learning purposes. After the six-month period, the identifier is erased and a copy of the clip is saved “for use by Apple in improving and developing Siri for up to two years.”
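For illustration only, the two-stage retention schedule the white paper describes could be modeled roughly as follows. This is a minimal sketch of the stated policy, not Apple’s actual implementation; all names (VoiceClip, apply_retention, the duration constants) are hypothetical.

```python
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical approximations of the retention windows in the white paper.
SIX_MONTHS = timedelta(days=182)
TWO_YEARS = timedelta(days=730)

@dataclass
class VoiceClip:
    audio: bytes
    ingested_at: datetime
    # A random identifier stands in for any user-identifying data,
    # per the white paper's description of stripped, anonymized clips.
    device_id: Optional[str] = field(default_factory=lambda: uuid.uuid4().hex)

def ingest(audio: bytes, now: datetime) -> VoiceClip:
    """Strip identifying info and tag the clip with a random device ID."""
    return VoiceClip(audio=audio, ingested_at=now)

def apply_retention(clip: VoiceClip, now: datetime) -> Optional[VoiceClip]:
    """Return the clip as it should exist at time `now`, or None if expired."""
    age = now - clip.ingested_at
    if age > TWO_YEARS:
        return None            # clip deleted entirely after two years
    if age > SIX_MONTHS:
        clip.device_id = None  # random identifier erased after six months
    return clip
```

In this reading, a clip spends its first six months linked only to a random identifier, then survives as a fully unlinked recording for up to two years before deletion.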
Apple does not explicitly mention the possibility of manual review by human contractors or employees, nor does it currently offer an option for Siri users to opt out of the program. The company will address the latter issue in a future software update.