How To Confuse AI Algorithm Tech Firms Use To Spy On You
Every time you send an email, shop online or stream your favorite show, you leave a digital trail behind. That's how you contribute to the billions in revenues big tech firms make every year. These firms extract valuable data from your digital footprint and put it through a sophisticated machine-learning algorithm. The system creates your digital profile, enabling tech firms to understand your preferences and push targeted ads.
While it's tough to get off this train, it's not impossible. Researchers at Northwestern University have suggested new methods to make these sophisticated algorithms obsolete: deprive them of the very data used to train them. In a recent paper, researchers including PhD students Nicholas Vincent and Hanlin Li suggest three methods to disrupt the algorithms.
How To Play With The Algorithm?
Data strikes involve deleting your data to prevent tech firms from using it. This is feasible if you leave a platform altogether or use privacy tools while surfing the web. Data poisoning is another way to mess up the algorithm: it involves feeding the system useless or misleading data. Several browser extensions do this, confusing ad-targeting algorithms by clicking on every single ad served to the user.
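To see why indiscriminate clicks confuse an ad-targeting system, consider a toy sketch (this is an illustration of the idea, not how any real extension or ad platform works; the categories and click counts are made up). A profiler that infers interests from ad-click frequencies loses its signal once uniform random clicks swamp the genuine ones:

```python
import random
from collections import Counter

def build_profile(clicks):
    """Toy profiler: turn a list of clicked ad categories into share-of-clicks."""
    counts = Counter(clicks)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

CATEGORIES = ["sports", "travel", "fashion", "tech", "finance"]
random.seed(0)

# Genuine behavior: the user mostly clicks tech ads.
genuine = ["tech"] * 80 + random.choices(CATEGORIES, k=20)

# Data poisoning: an extension "clicks" every served ad,
# adding uniform noise that drowns out the real preference.
poison = random.choices(CATEGORIES, k=400)

clean_profile = build_profile(genuine)
poisoned_profile = build_profile(genuine + poison)

print(max(clean_profile, key=clean_profile.get))  # clear top interest
print(round(clean_profile["tech"], 2), round(poisoned_profile["tech"], 2))
```

In the clean profile, one category dominates; after poisoning, every category's share drifts toward 1/5, so the inferred "top interest" tells the advertiser much less.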
Finally, there is conscious data contribution, which involves providing meaningful data to a competitor of the platform you oppose. For instance, you could upload your pictures to Tumblr rather than a popular platform such as Instagram.
Many users already employ these methods in everyday life to keep their privacy intact. Using an adblocker extension also qualifies as data striking, since it stops certain websites from tracking your digital footprint. But the harsh reality is that individual efforts like these will barely force the big tech firms to change their behavior.
Individual Efforts Might Not Suffice
It could turn the tables, however, if millions of users coordinated these tactics against a tech giant, forcing the firm to change its business model. Not long ago, WhatsApp had to delay its policy changes after millions of users deleted their accounts and moved to other services like Telegram and Signal. Recently, Google also announced that it won't track individuals and push targeted ads at them, though it remains unclear whether this is just a marketing tactic.
But there is still a lot of work to be done to reclaim some agency over Big Tech. These data poisoning campaigns should be more widespread, and there should be more tools that confuse the algorithms.
Data strikes would be more effective if backed by privacy laws giving users the right to delete their data whenever they want. Without such laws, it is unclear whether a tech company has actually deleted your data, even after you no longer have an account on the platform.
How Effective Are These Tactics?
The research also answers questions like how many people are required for a successful data strike and what kind of data would prove most effective for data poisoning. The researchers found that, for a movie recommendation algorithm, if 30 percent of users went on strike, the system's accuracy could drop by 50 percent.
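A strike like that can be explored in simulation. The sketch below is a hypothetical, self-contained toy, not the researchers' actual experiment: it builds synthetic movie ratings, trains a deliberately simple item-mean recommender, then retrains it with 30 percent of users' data withheld and compares prediction error. Any numbers it produces are artifacts of the toy setup, not the paper's findings.

```python
import random

random.seed(1)
N_USERS, N_ITEMS = 200, 50

# Synthetic data: each item has a "true" quality; users rate it noisily.
quality = {i: random.uniform(1, 5) for i in range(N_ITEMS)}
ratings = {}  # (user, item) -> rating
for u in range(N_USERS):
    for i in random.sample(range(N_ITEMS), 10):
        ratings[(u, i)] = max(1.0, min(5.0, quality[i] + random.gauss(0, 1)))

def item_mean_model(train_users):
    """Toy recommender: predict each item's mean rating over training users."""
    sums, counts = {}, {}
    for (u, i), r in ratings.items():
        if u in train_users:
            sums[i] = sums.get(i, 0.0) + r
            counts[i] = counts.get(i, 0) + 1
    global_mean = sum(sums.values()) / max(1, sum(counts.values()))
    return {i: sums[i] / counts[i] for i in sums}, global_mean

def rmse(model, global_mean, test_users):
    """Root-mean-square error of predictions on held-out users' ratings."""
    errs = [(model.get(i, global_mean) - r) ** 2
            for (u, i), r in ratings.items() if u in test_users]
    return (sum(errs) / len(errs)) ** 0.5

test_users = set(range(0, 40))           # held-out users for evaluation
all_train = set(range(40, N_USERS))
# 30 percent of training users go on a data strike.
striking = set(random.sample(sorted(all_train), int(0.3 * len(all_train))))

full_err = rmse(*item_mean_model(all_train), test_users)
strike_err = rmse(*item_mean_model(all_train - striking), test_users)
print(full_err, strike_err)  # compare accuracy with and without strikers' data
```

This is the general shape of such a simulation: fix an evaluation set, withhold a fraction of the training data, and measure how much the model degrades. Real recommender systems are far more complex, which is why the researchers' 30-percent/50-percent result won't fall out of a toy like this one.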
But that would not be the case with other machine-learning systems, as big tech firms keep updating them regularly. To make this more effective, more people from the machine-learning community need to run simulations of different firms' systems and look for loopholes.
While these tactics could have some consequences that need to be addressed beforehand, the researchers are hopeful that these methods will change the tactics Big Tech uses to handle user data.