Shoshana Zuboff, the author of “The Age of Surveillance Capitalism,” has said that the mining of our personal data for profit is “changing the nature of society,” leading to the “wholesale destruction of privacy.”
Today companies know so much about us and hold the power to target and manipulate us not only with relevant digital ads, but also with “subliminal clues, real time rewards and punishments, algorithmic recommendation tools, psychological micro-targeting, and engineered social comparison dynamics,” she said.
In today’s data-driven economy, privacy is fast becoming a luxury. We no longer pay for our mail and news sources with money; instead, we pay for these services with our personal data.
Google, for instance, scans your Gmail account to allow advertisers a chance to promote items based on what you write in your personal communication with others. The personal data of individuals is also used to give them different search results based on their political leanings, and is used by governments to identify possible criminal and terrorist suspects—undermining both democracy and our autonomy.
Today, the only way to get privacy is to buy it, which is neither cheap nor convenient. For instance, Apple introduced a software update last year that lets its users decide whether an app should be allowed to track their movement across other apps and sites they use—but not everyone can buy expensive Apple products.
Is privacy soon going to be available to only those who can afford it?
Right to be Forgotten
Last year, Ashutosh Kaushik, former winner of the reality TV shows Bigg Boss (2008) and MTV Roadies 5.0, approached the Delhi High Court seeking to have all content about him removed from the internet. He cited his Right to be Forgotten, arguing that posts and videos of him on the internet have made it impossible for him to move on from his past mistakes (he was referring to an incident in 2009 in which he was held by the Mumbai police for driving under the influence of alcohol).
The Right to Privacy was declared a fundamental right by the Supreme Court in 2017, under Article 21 of the Constitution. Meanwhile, the Right to be Forgotten, which refers to an individual’s right to privacy, is yet to be tabled in Parliament. Chapter V of the Personal Data Protection Bill—introduced in the Lok Sabha in 2019—states that the “data principal (the person to whom the data is related) shall have the right to restrict or prevent the continuing disclosure of his personal data by a data fiduciary.” Therefore, under the Right to be Forgotten, users should be able to de-link, restrict, erase, or amend the disclosure of their personal data held by third parties.
While some people are petitioning the government to allow their data to be deleted from the internet to protect their privacy, companies like Twitter and Facebook are receiving demands from governments and law enforcement agencies to delete content from their websites.
Last year, nearly 200 verified accounts of journalists on Twitter faced 361 legal demands from governments and law enforcement agencies to have their content removed; India ranked first for these requests, followed by the United States.
Twitter complied with 29% of the 38,524 legal demands it received from governments to remove content from its website.
Is your data even being deleted?
While “right to be forgotten” laws are increasingly popular, a user’s data doesn’t simply exist in its raw form in a database; it can also be encoded in machine learning models trained on that data, from which it is far more difficult to expunge.
Since bits of data are intricately embedded in ML models, it can be difficult to guarantee that a user has been completely forgotten without significantly altering the model.
The closest solution to this dilemma so far is approximate deletion, which removes most of a user’s data from the model. The method works by removing specific, easily identifiable information about an individual and replacing it with synthetic data, allowing the model to continue working as intended.
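To make the idea concrete, here is a minimal toy sketch of the replace-with-synthetic-data approach. All names and the "model" itself (a simple average over user records) are hypothetical illustrations, not any real unlearning library or the specific method used in research; in practice, approximate deletion operates on trained ML model parameters, which is considerably more involved.

```python
import random
import statistics

def train(records):
    """Toy 'model': just the mean of every user's values."""
    return statistics.mean(v for _, v in records)

def approximate_delete(records, user_id, rng):
    """Replace the target user's values with synthetic draws matching the
    remaining population, so the 'model' can be retrained without ever
    seeing the user's real data again."""
    others = [v for uid, v in records if uid != user_id]
    mu, sigma = statistics.mean(others), statistics.stdev(others)
    return [
        (uid, v) if uid != user_id else (uid, rng.gauss(mu, sigma))
        for uid, v in records
    ]

rng = random.Random(0)  # fixed seed for reproducibility
records = [("alice", 10.0), ("bob", 12.0), ("carol", 11.0), ("alice", 9.0)]

model_before = train(records)
scrubbed = approximate_delete(records, "alice", rng)
model_after = train(scrubbed)
# alice's raw values (10.0 and 9.0) no longer appear in `scrubbed`,
# yet the retrained model remains close to the original.
```

This captures the trade-off the article describes: the individual's identifiable records are gone, but statistical traces of the population they belonged to remain, which is exactly why "deleted" becomes a matter of definition.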
The question remains: should data be considered “deleted” if it has not actually been completely removed, so long as it no longer contains any personally identifiable information?