An independent audit has exposed that Google, Microsoft, and Meta continue tracking user data even after individuals opt out, undermining privacy protections. The audit, conducted by a third-party firm, analyzed tracking mechanisms across these companies' services and found persistent data collection via cookies, pixels, and other tools. This revelation affects billions of users, highlighting gaps in current privacy regulations.
This article was inspired by "Google, Microsoft, Meta All Tracking You Even When You Opt Out" from Hacker News.
Key Audit Findings
The audit examined tracking on platforms including Google's Chrome, Microsoft's Bing, and Meta's Facebook, and found that opt-out settings failed to block data sharing in 80% of tested scenarios. For instance, Google's tools continued to monitor user behavior for ad targeting, and Meta's systems logged interactions even with privacy mode enabled. Microsoft showed similar patterns, with tracking persisting across the Edge browser and linked services.
Bottom line: Opt-outs are largely ineffective; the audit estimates that users' data reaches third parties within 24 hours of an attempted opt-out.
HN Community Reaction
The Hacker News post amassed 176 points and 94 comments, reflecting widespread concern among AI practitioners. Comments noted potential risks for AI development, such as biased training data from unethically sourced information. Others questioned the audit's methodology, with one user pointing out that similar issues have plagued ad tech for years.
| Aspect | Positive Feedback | Critical Feedback |
|---|---|---|
| Impact | Highlights an AI ethics gap | Questions the audit's scope |
| Engagement | 176 total upvotes | 94 comments, half skeptical |
| Themes | Privacy as core issue | Calls for regulatory action |
Bottom line: The discussion underscores a data-provenance problem for AI, where unchecked collection practices erode trust in models trained on potentially invasive sources.
Implications for AI Ethics
This tracking persists despite regulations like GDPR, affecting AI models that rely on user data for training. For developers, it means potential legal risks when using datasets from these companies, as evidenced by recent EU fines totaling €1.2 billion for similar violations. AI creators must now prioritize alternative data sources to avoid ethical pitfalls.
"Technical Context"
The audit used automated tools to simulate opt-outs and monitor data flows, detecting persistent identifiers in HTTP requests. This method aligns with standard privacy audits, revealing how AI-driven personalization algorithms override user preferences for profit.
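To make the approach concrete, here is a minimal sketch of the kind of check described above: fetch a page, record the tracking identifiers it sets, simulate an opt-out, and flag identifiers that are re-issued or persist. The cookie names, URL, and the simplified opt-out step are illustrative assumptions, not the auditors' actual tooling.

```python
# Minimal sketch of an opt-out persistence check.
# Assumptions: the audit flagged well-known tracking cookies (e.g. Google's _ga,
# Meta's _fbp, Microsoft's MUID) and compared them before and after an opt-out.
import requests

# Hypothetical list of tracking identifiers to watch for.
TRACKING_COOKIES = {"_ga", "_gid", "_fbp", "_fbc", "MUID", "ANONCHK"}


def tracking_ids(url, session):
    """Return tracking cookies set (or replayed) during a request to `url`."""
    session.get(url, timeout=10)
    return {name: value for name, value in session.cookies.items()
            if name in TRACKING_COOKIES}


def simulate_opt_out(session):
    """Stand-in for the opt-out step (consent banner, opt-out cookie, etc.);
    here we simply clear existing cookies."""
    session.cookies.clear()


if __name__ == "__main__":
    url = "https://example.com"          # hypothetical audited page
    session = requests.Session()

    before = tracking_ids(url, session)  # identifiers present before opt-out
    simulate_opt_out(session)
    after = tracking_ids(url, session)   # identifiers re-issued after opt-out

    persistent = {k for k in after if after.get(k) == before.get(k)}
    print("Tracking identifiers before opt-out:", sorted(before))
    print("Tracking identifiers after opt-out: ", sorted(after))
    print("Identifiers that persisted unchanged:", sorted(persistent))
```

A real audit would additionally capture third-party requests through a proxy to see where identifiers are forwarded; this sketch only illustrates the before/after comparison.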
In light of these findings, AI stakeholders may face stricter enforcement, which could push more transparent data practices into the next regulatory cycle.
