A coalition of musicians and human rights groups urged music streaming company Spotify on Tuesday to rule out any use of a speech recognition tool it recently developed to suggest songs, describing the technology as “creepy” and “invasive”.
In January, Sweden-based Spotify patented a technology that analyses users’ speech and background noise to suggest tracks based on their mood, gender, age, accent or surroundings.
The company did not immediately reply to a request for comment. In a letter published in April, it said it had never implemented the tool in its products and did not plan to do so in the future.
But in an open letter, more than 180 artists and activists called on the firm to abandon the project altogether and make a public commitment to never use, license, sell, or monetise it.
“This recommendation technology is dangerous, a violation of privacy and other human rights, and should not be implemented by Spotify or any other company,” the letter said.
“Any use of this technology is unacceptable.”
Signatories included American guitarist Tom Morello of Rage Against the Machine, rapper Talib Kweli, Laura Jane Grace of rock band Against Me!, and advocacy groups Amnesty International and Access Now.
“You can’t rock out when you’re under constant corporate surveillance,” Morello said in a statement.
In the patent application first filed in 2018, Spotify, which has 356 million active users, said it was common for a media streaming application to include features that provide personalized recommendations to users.
But tailoring suggestions around someone’s taste usually requires them to “tediously input answers to multiple queries”, it said.
The technology aimed to streamline the process of suggesting songs that fit a person’s mood or setting, using background noise to infer whether someone is listening to music alone, in a car or in a group.
But the letter’s signatories said this raised privacy concerns, as devices could collect private information and draw inferences about other people in the room who might not be aware that they were being listened to.
Using artificial intelligence to recommend music could also exacerbate existing disparities in the music industry, they said.
“Claiming to be able to infer someone’s taste in music based on their accent or detect their gender based on the sound of their voice is racist, transphobic, and just plain creepy,” musician Evan Greer said in a statement.
Voice recognition software is increasingly used across a range of sectors, from customer service to automatic translation and digital assistants.
But the technology suffers from some of the same issues as facial recognition in terms of potential discrimination, inaccuracy, and surveillance, said Daniel Leufer, Europe policy analyst at Access Now.
“When designing voice recognition systems, certain languages, dialects, and even accents are prioritised over others,” Leufer told the Thomson Reuters Foundation.
“This ends up effectively either excluding people who don’t speak those languages or dialects, or who don’t have those accents, or forcing them to adapt their speech to what is hardcoded into these systems as ‘normal’,” he said in an emailed statement.