How To Confuse AI Algorithm Tech Firms Use To Spy On You
Every time you send an email, shop online or stream your favorite show, you leave a digital trail behind. That trail is how you contribute to the billions in revenue big tech firms make every year. These firms extract valuable data from your digital footprint and feed it into sophisticated machine-learning algorithms. The resulting digital profile lets tech firms understand your preferences and push targeted ads at you.
While it's tough to get off this train, it's not impossible. Researchers at Northwestern University have suggested new methods to undermine these algorithms by denying them the very data they are trained on. In a recent paper, researchers including PhD students Nicholas Vincent and Hanlin Li suggest three ways to disrupt the algorithms.
How To Play With The Algorithm?
Data strikes involve deleting your data so tech firms cannot use it. This is feasible if you leave a platform altogether or use privacy tools while surfing the web. Data poisoning is another way to mess up the algorithm: it means feeding the system useless or misleading data. Several browser extensions do this automatically, confusing ad-targeting algorithms by clicking on every single ad served to the user.
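The click-every-ad trick can be illustrated with a toy sketch (this is not any particular extension's code; the categories and click counts are invented for the example). A profile dominated by one interest is easy to target; flooding it with uniform automated clicks raises its entropy until the real preference is buried in noise:

```python
import math
from collections import Counter

def interest_entropy(clicks):
    """Shannon entropy (bits) of a click profile; higher = less informative."""
    total = sum(clicks.values())
    return -sum((c / total) * math.log2(c / total) for c in clicks.values() if c)

# Hypothetical genuine profile: the user mostly clicks travel ads.
genuine = Counter({"travel": 40, "tech": 5, "fashion": 3, "finance": 2})

# Poisoning: an extension "clicks" every ad category uniformly,
# drowning the real preference in automated noise.
poisoned = genuine.copy()
for category in ("travel", "tech", "fashion", "finance", "sports", "food"):
    poisoned[category] += 50

print(f"genuine profile entropy:  {interest_entropy(genuine):.2f} bits")
print(f"poisoned profile entropy: {interest_entropy(poisoned):.2f} bits")
```

In the genuine profile, "travel" accounts for 80 percent of clicks and is trivial to infer; after poisoning, no category clearly dominates, so the targeting signal is far weaker.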
Finally, there is conscious data contribution, which involves giving meaningful data to a competitor of the platform you oppose. For instance, you could upload your pictures to Tumblr rather than to a popular platform such as Instagram.
Many users already practice some of these methods to keep their privacy intact. Using an ad-blocker extension also qualifies as a data strike, since it stops certain websites from tracking your digital footprint. But the harsh reality is that individual efforts like these will barely force big tech firms to change their behavior.
Individual Efforts Might Not Suffice
It could turn the tables, however, if millions of users coordinated these tactics against a tech giant, forcing it to change its business model. Not long ago, WhatsApp had to delay its policy changes after millions of users deleted their accounts and moved to services like Telegram and Signal. Google, too, recently announced that it would stop tracking individuals to push targeted ads at them, though it remains unclear whether this is anything more than a marketing tactic.
But there is still a lot of work to be done to reclaim some agency over Big Tech. These data poisoning campaigns should be more widespread, and there should be more tools that confuse the algorithms.
Data strikes would be more useful if backed by privacy laws giving users the right to delete their data whenever they want. Without such laws, there is no way to know whether a tech company has actually deleted your data, even after you close your account on the platform.
How Effective Are These Tactics?
The research also addresses questions such as how many people a successful data strike requires and what kind of data is most effective for poisoning. For a movie-recommendation algorithm, the researchers found that the system's accuracy could drop by 50 percent if 30 percent of users went on strike.
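The intuition behind that result can be reproduced in a toy simulation (entirely illustrative, not the researchers' experiment; the noise model and numbers are assumptions). A recommender that predicts each movie's rating as the average of its users' ratings becomes measurably less accurate when a share of those users delete their data:

```python
import random
import statistics

random.seed(1)

N_MOVIES, N_USERS = 1000, 50

# Hypothetical ground truth: each movie has an underlying quality score.
true_quality = [random.uniform(1, 5) for _ in range(N_MOVIES)]

# Each simulated user rates every movie as its true quality plus personal noise.
all_ratings = [
    [q + random.gauss(0, 1.0) for q in true_quality] for _ in range(N_USERS)
]

def model_error(user_rows):
    """Mean absolute error of the per-movie average rating vs. true quality."""
    predictions = (statistics.mean(col) for col in zip(*user_rows))
    return statistics.mean(abs(p - q) for p, q in zip(predictions, true_quality))

err_full = model_error(all_ratings)
# Data strike: 30 percent of users delete their ratings.
err_struck = model_error(all_ratings[: int(N_USERS * 0.7)])

print(f"error with all users:   {err_full:.3f}")
print(f"error after 30% strike: {err_struck:.3f}")
```

The exact degradation depends on the model and the data; the point is only that even a simple estimator gets noisier once a third of its training data disappears, and real recommenders trained on behavioral data face the same pressure.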
But the same may not hold for other machine-learning systems, which big tech firms update regularly. To make these tactics more effective, more people from the machine-learning community would need to run simulations of different firms' systems and look for loopholes.
While these tactics could have some consequences that need to be addressed beforehand, the researchers are hopeful that these methods will change the tactics Big Tech uses to handle user data.