How To Confuse The AI Algorithms Tech Firms Use To Spy On You
Every time you send an email, shop online or stream your favorite show, you leave a digital trail behind. That's how you contribute to the billions in revenues big tech firms make every year. These firms extract valuable data from your digital footprint and put it through a sophisticated machine-learning algorithm. The system creates your digital profile, enabling tech firms to understand your preferences and push targeted ads.
While it's tough to get off this train, it's not impossible. Researchers at Northwestern University have suggested new ways to undermine these algorithms by withholding or corrupting the data used to train them. In a recent paper, researchers including PhD students Nicholas Vincent and Hanlin Li propose three methods to disrupt the algorithms.
How To Play With The Algorithm?
Data strikes involve deleting your data to prevent tech firms from using it, which is feasible if you leave a platform altogether or use privacy tools while surfing the web. Data poisoning is another way to disrupt the algorithm: feeding it useless or misleading data. Several browser extensions do this by automatically clicking on every ad served to the user, which confuses ad-targeting algorithms.
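The label-flipping idea behind data poisoning can be sketched with a toy experiment. This is a hypothetical illustration, not the researchers' model: a simple nearest-neighbour classifier stands in for a profiling system, and the data, cluster positions, and the 40 percent poisoning rate are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a profiling model: two clusters of "user behaviour"
# points (hypothetical data), classified by 1-nearest-neighbour.
n = 300
train_X = np.vstack([rng.normal(-2, 1, (n, 2)), rng.normal(2, 1, (n, 2))])
train_y = np.array([0] * n + [1] * n)
test_X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
test_y = np.array([0] * 100 + [1] * 100)

def accuracy(X, y):
    """Label each test point with the label of its nearest training point."""
    preds = np.array([y[np.argmin(np.linalg.norm(X - t, axis=1))]
                      for t in test_X])
    return (preds == test_y).mean()

# Data poisoning: 40% of the training labels are deliberately wrong,
# as if those users fed the system junk signals.
poisoned_y = train_y.copy()
flip = rng.choice(2 * n, size=int(0.4 * 2 * n), replace=False)
poisoned_y[flip] = 1 - poisoned_y[flip]

print(f"clean accuracy:    {accuracy(train_X, train_y):.2f}")
print(f"poisoned accuracy: {accuracy(train_X, poisoned_y):.2f}")
```

The same model trained on poisoned labels loses a large chunk of its accuracy, which is the whole point of the tactic: the algorithm keeps running, but its output becomes unreliable.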
Finally, conscious data contribution involves giving meaningful data to a competitor of the platform you oppose. For instance, you could upload your pictures to Tumblr rather than a dominant platform such as Instagram.
Many of these methods are already common among users trying to keep their privacy intact. Using an ad-blocker extension also qualifies as a data strike, since it stops certain websites from tracking your digital footprint. But the harsh reality is that individual efforts like these will barely force big tech firms to change their behavior.
Individual Efforts Might Not Suffice
But the tables could turn if millions of users coordinated these tactics against a tech giant, forcing it to change its business model. Not long ago, WhatsApp had to delay its policy changes after millions of users deleted their accounts and moved to services like Telegram and Signal. Google, too, recently announced that it would stop tracking individuals to push targeted ads at them, though it remains unclear whether this is just a marketing tactic.
But there is still a lot of work to be done to reclaim some agency over Big Tech. These data poisoning campaigns should be more widespread, and there should be more tools that confuse the algorithms.
Data strikes would be more effective if backed by privacy laws giving users the right to delete their data whenever they want. Without such laws, there is no way to verify that a tech company has actually deleted your data, even after you close your account on the platform.
How Effective Are These Tactics?
The research also tackles questions such as how many people a successful data strike requires and what kind of data is most effective for poisoning. The researchers found that, for a movie recommendation algorithm, if 30 percent of users went on strike, the system's accuracy could drop by 50 percent.
But that would not be the case with other machine-learning systems, as big tech firms keep updating them regularly. To make this more effective, more people from the machine-learning community need to run simulations of different firms' systems and look for loopholes.
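The kind of simulation the researchers call for can be sketched in miniature. Everything below is hypothetical: a "recommender" that predicts each movie's rating by its observed average, with invented user counts, noise levels, and a 30 percent strike rate, run over many random trials to average out chance.

```python
import numpy as np

# Hypothetical Monte Carlo sketch: how much does a toy recommender's
# error grow when 30% of users withdraw (strike) their ratings?
n_users, n_items = 50, 20

def avg_rmse(strike_frac, trials=300):
    """Average prediction error when a fraction of users delete their data."""
    errs = []
    for t in range(trials):
        rng = np.random.default_rng(t)
        true_means = rng.uniform(1, 5, n_items)          # each movie's true appeal
        ratings = true_means + rng.normal(0, 1.0, (n_users, n_items))
        keep = rng.random(n_users) > strike_frac         # strikers' rows are gone
        pred = ratings[keep].mean(axis=0)                # train on remaining users
        errs.append(np.sqrt(((pred - true_means) ** 2).mean()))
    return float(np.mean(errs))

print(f"avg RMSE, no strike:  {avg_rmse(0.0):.3f}")
print(f"avg RMSE, 30% strike: {avg_rmse(0.3):.3f}")
```

Even in this crude setup, shrinking the training pool measurably degrades predictions; real recommenders are far more complex, which is exactly why the researchers want the community simulating production-scale systems rather than toys like this one.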
While these tactics could have some consequences that need to be addressed beforehand, the researchers are hopeful that these methods will change the tactics Big Tech uses to handle user data.