Artificial intelligence has rapidly expanded the capacity of governments, corporations, and political actors to monitor, predict, and influence citizens at scale. While AI can drive social benefits, critics warn that the technology increasingly functions as a “force multiplier” for established power, deepening surveillance, automating discrimination, and reshaping political processes in ways that entrench existing hierarchies [1][2]. This report traces the main mechanisms through which AI serves establishment interests, examines emerging regulatory responses, and outlines practical safeguards.
AI-Enabled Surveillance
Biometric Identification and Continuous Tracking
- China operates the world’s densest network of AI-driven cameras, combining facial and gait recognition to track dissenters in real time [2][3].
- Clearview AI scraped more than 3 billion images from social media and sold access to police forces until Canadian and European regulators ruled the practice unlawful [4][5].
- In Hong Kong, authorities leveraged facial recognition during the 2019 protests to deter participation, prompting demonstrators to adopt laser pointers and masks as countermeasures [6].
Predictive Policing and Hot-Spot Mapping
- The LAPD’s LASER and PredPol systems analyze historical crime data to forecast where offenses will occur, steering patrols into lower-income neighborhoods; critics note feedback loops that amplify racial bias [7][8][9] (a toy simulation of this loop follows the list).
- A National Institute of Justice symposium concluded that predictive policing “moves law enforcement from focusing on what happened to focusing on what will happen,” effectively shifting strategy from reactive to pre-emptive control [7].
- The NAACP reports that Black communities face disproportionate surveillance when departments adopt AI crime forecasting, driving calls for moratoria [10].
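To make the feedback-loop critique concrete, here is a deliberately simplified sketch, not a model of any real deployment: it assumes two districts with identical true offense rates, that offenses are recorded only where officers patrol, and that the allocator concentrates patrols superlinearly on the higher-scoring district. Under those assumptions, an initial skew in the records grows year over year even though the districts are identical.

```python
# Toy model of the hot-spot feedback loop (illustrative assumptions only):
# identical true offense rates, recording proportional to patrol presence,
# and a superlinear "hot-spot" allocation rule.

TRUE_RATE = [100.0, 100.0]   # actual offenses per year, identical by design
history = [60.0, 40.0]       # recorded-crime history, initially skewed

for year in range(1, 6):
    weights = [h ** 2 for h in history]           # squaring exaggerates small leads
    shares = [w / sum(weights) for w in weights]  # patrol allocation
    recorded = [TRUE_RATE[i] * shares[i] for i in range(2)]  # only watched crime is logged
    history = [history[i] + recorded[i] for i in range(2)]
    print(f"year {year}: patrol shares = {[round(s, 2) for s in shares]}")
```

Running it, the patrol share of the initially over-policed district climbs from about 0.69 toward 0.90 in five years, despite equal underlying crime: the records confirm the allocation, and the allocation generates the records.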
**Selected AI Surveillance Deployments**

| Jurisdiction | Core Technology | Reported Purpose | Key Concerns |
|---|---|---|---|
| Xinjiang, China | Face and gait recognition | Tracking the Uyghur population | Ethnic profiling [2] |
| Los Angeles, USA | PredPol hot-spot model | Property-crime prediction | Racial-bias feedback [8] |
| Buenos Aires, Argentina | Citywide facial recognition | Warrant matching | Court-ordered suspension after rights abuses [11] |
| London, UK | Live facial recognition vans | Terrorism detection | Court ruling on privacy breach [11][12] |
| Toronto, Canada | Clearview AI pilot | Suspect identification | Federal privacy-violation finding [4] |
Political Influence, Microtargeting, and Disinformation
Cambridge Analytica and Psychographic Targeting
Cambridge Analytica harvested Facebook data on up to 87 million users to tailor political ads in the 2016 US election, pioneering large-scale voter profiling [13][14].
Generative Propaganda and Deepfakes
- A Russian-backed outlet doubled its output after integrating large language models, with AI-written articles proving as persuasive as human propaganda [15][16].
- GPT-3 can craft convincing political essays; editing prompts or curating outputs boosts persuasiveness to match original state-backed text [17].
- In 2024, robocall deepfakes mimicking President Biden urged New Hampshire Democrats to skip the primary, prompting a $6 million FCC fine [18].
Scalable Microtargeting
- Automated personality-tailored ads persuaded roughly 2.5% of a 100,000-person sample, enough to swing tight races when deployed at scale [19] (see the back-of-envelope arithmetic below).
- GPT-4 personalizations showed no statistically significant advantage over broad messages, suggesting diminishing returns yet continued risk [20].
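As a rough sanity check on why a 2.5% persuasion rate matters, the arithmetic below uses the rate from the cited study but an illustrative electorate size and assumption about vote switching of my own:

```python
# Back-of-envelope arithmetic: why a small persuasion rate matters at scale.
# Only the 2.5% rate comes from the study above; the rest is illustrative.
electorate = 100_000
persuasion_rate = 0.025
persuaded = int(electorate * persuasion_rate)
print(f"persuaded voters: {persuaded:,}")              # 2,500

# If each persuaded voter switches sides in a two-candidate race, the margin
# moves by twice that share: 2 * 2.5% = 5 percentage points.
margin_shift = 2 * persuasion_rate * 100
print(f"margin shift if voters switch sides: {margin_shift:.1f} points")
```

Many recent swing-state contests were decided by margins well under 5 points, which is why even single-digit persuasion rates draw scrutiny.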
Algorithmic Decision-Making and Structural Inequality
Credit and Insurance
- Minority borrowers receive less precise risk scores because historical datasets lack representative payment histories, leading to higher rejection rates; bias mitigation alone cannot fix the gap [21][22] (the simulation after this list illustrates the mechanism).
- EU regulators classify credit-scoring AI as “high-risk,” triggering mandatory transparency and human oversight under the AI Act [12].
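Here is a minimal simulation of that “noisy score” mechanism, under assumptions of my own: both groups share an identical distribution of true creditworthiness, but one group’s observed score carries more noise because its payment history is thinner. Creditworthy applicants in the noisier group are then wrongly rejected more often.

```python
# Minimal simulation: identical true creditworthiness, noisier observed
# scores for the thin-file group, and a fixed approval cutoff.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_score = rng.normal(700, 50, size=n)   # identical distribution for both groups
THRESHOLD = 720                            # lender's approval cutoff

for label, noise_sd in [("thick-file group", 5.0), ("thin-file group", 60.0)]:
    observed = true_score + rng.normal(0, noise_sd, size=n)
    creditworthy = true_score > THRESHOLD
    # Share of truly creditworthy applicants the noisy score rejects.
    fn_rate = np.mean((observed <= THRESHOLD) & creditworthy) / np.mean(creditworthy)
    print(f"{label}: false-negative rate among creditworthy = {fn_rate:.1%}")
```

The thin-file group’s higher false-negative rate emerges with no difference in underlying creditworthiness, which is why better data coverage, not just bias mitigation on the model, is needed to close the gap.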
Hiring and Workplace Screening
- Amazon’s résumé-ranking engine downgraded applicants who mentioned “women’s” clubs, reflecting historic staffing patterns; the project was abandoned after the bias surfaced [23][24][25].
- HireVue’s video analytics favored certain accents and facial expressions, drawing a U.S. privacy complaint alleging “deceptive hiring” practices [25].
| Sector | AI Tool | Documented Effect | Establishment Advantage |
|---|---|---|---|
| Lending | FICO-adjacent ML models | Higher false-negative rates for non-white applicants [21] | Risk offloading onto marginalized borrowers |
| Employment | Automated video-interview scoring | Penalizes female candidates and some dialects [26][25] | Efficient sorting of large pools without liability transparency |
| Social Services | Benefit-fraud prediction engines | Incorrect flagging of disabled claimants in the Netherlands [27] | Budget cuts framed as “objective” algorithmic necessity |
Algorithmic Management and Labor Control
Gig-Economy Platforms
- Uber’s app dictates ride assignments, pay rates, and even restroom breaks through opaque “algorithmic controls,” creating deep information asymmetries [28][29][30].
- Food-delivery drivers link algorithmic ratings to both job survival and family stress; low scores trigger pay cuts or deactivation without recourse [31].
Warehousing and Retail
- Amazon tracks warehouse pickers by the second, issuing automatic warnings for slow rates; sensor-driven systems can terminate employment with minimal human review [32][33].
- Nordic union surveys find algorithmic scheduling is now standard in logistics, eroding collective-bargaining leverage [34].
Governance, Regulation, and the Struggle for Accountability
European Union
- The 2024 AI Act bans live facial recognition in public spaces save for narrow law-enforcement exemptions, prohibits social scoring, and mandates risk audits for high-impact systems [12].
- Critics note loopholes and the absence of a full ban on biometric mass surveillance, warning that the compromise sets a “devastating global precedent” [35][5].
United States
- Executive Order 14110 (2023) directed federal agencies to address civil-rights risks in AI but was rescinded in 2025, signaling regulatory whiplash [36][37].
- The re-introduced Algorithmic Accountability Act would compel large firms to conduct bias impact assessments, yet it remains stalled in Congress [38][39].
AI Auditing and Corporate Self-Regulation
- NIST, OECD, and IBM frameworks promote fairness metrics, drift monitoring, and explainability dashboards [40][41] (a sketch of one such fairness check follows this list).
- Scholars argue for mandatory third-party audits covering the data, model, and deployment layers to close the “accountability gap” [42][27].
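As an illustration of the kind of fairness metric these frameworks describe, the sketch below computes a demographic-parity gap on toy data; the function name, the data, and the 0.1 threshold are illustrative assumptions, not taken from any NIST, OECD, or IBM tool.

```python
# Illustrative fairness check: compare positive-outcome rates across groups.
import numpy as np

def demographic_parity_gap(preds: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-outcome rates between groups 0 and 1."""
    return abs(preds[group == 0].mean() - preds[group == 1].mean())

# Toy audit data: 1 = approved, 0 = denied, with each applicant's group.
preds = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 1])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_gap(preds, group)
print(f"demographic parity gap: {gap:.2f}")   # 0.60 vs 0.40 -> 0.20
if gap > 0.1:  # illustrative policy threshold
    print("audit flag: approval rates differ materially across groups")
```

Real audits track several such metrics over time (equalized odds, calibration, drift), since no single statistic captures every form of disparity.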
Civil Society Pushback
- Advocacy coalitions such as Access Now and Reclaim Your Face campaign for bans on biometric mass surveillance across Europe [11].
- Stadium protests in New York targeted face-scan ticketing, raising public awareness of commercial biometric databases [43].
- Litigation against predictive policing in U.S. cities invokes disparate-impact claims under civil-rights law to halt deployments [10].
Future Trajectories and Safeguards
Technical
- Privacy-preserving ML (federated learning, differential privacy) can reduce centralized data hoarding, though deployment remains limited [40] (a minimal differential-privacy sketch follows this list).
- Explainable AI tools that expose feature weights help auditors identify proxy discrimination before rollout [44][45].
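To show what privacy-preserving aggregation looks like in practice, here is a minimal sketch of the Laplace mechanism from differential privacy; the function name and the epsilon value are illustrative assumptions, not drawn from any cited system.

```python
# Minimal differential-privacy sketch: the Laplace mechanism for a count
# query. Calibrated noise lets an aggregate be published without exposing
# any individual record. Epsilon here is an illustrative choice.
import numpy as np

def private_count(values: list[int], epsilon: float) -> float:
    """Release a noisy count; a counting query has sensitivity 1, so the
    noise is drawn from Laplace(0, 1/epsilon)."""
    rng = np.random.default_rng()
    return sum(values) + rng.laplace(0.0, 1.0 / epsilon)

# Toy usage: each entry is one user's binary attribute.
flags = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
print(f"true count: {sum(flags)}, private count: {private_count(flags, epsilon=0.5):.1f}")
```

Smaller epsilon means more noise and stronger privacy; the policy question is who sets epsilon and whether the trade-off is disclosed.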
Policy
- Expand statutory rights to meaningful human review of all high-stakes algorithmic decisions, mirroring GDPR Article 22 benchmarks [27].
- Condition public-procurement funds on vendors passing independent bias and security audits, promoting a market for trustworthy AI [42].
Democratic Oversight
- Establish multi-stakeholder audit boards with subpoena power to inspect the training data and model logic underlying government AI [44].
- Guarantee whistle-blower protections for engineers who expose unethical AI uses within corporations or agencies [46].
Conclusion
From intelligent camera networks to psychographic ad engines, AI increasingly extends the reach and subtlety of establishment power. Absent robust guardrails, the same attributes that make AI efficient (speed, scale, and opacity) allow elites to observe, categorize, and influence populations with unprecedented granularity [1][15]. Regulatory momentum is building, yet gaps between lofty principles and on-the-ground enforcement remain wide. Ensuring that AI serves the public rather than entrenched interests will require legally binding audits, transparent design norms, and vibrant civic engagement to keep the most powerful technology of our era accountable.
References
1. https://www.brookings.edu/articles/how-ai-can-enable-public-surveillance/
2. https://www.lawfaremedia.org/article/the-authoritarian-risks-of-ai-surveillance
3. https://thebulletin.org/2024/06/how-ai-surveillance-threatens-democracy-everywhere/
4. https://www.priv.gc.ca/en/opc-news/news-and-announcements/2021/nr-c_210203/
5. https://www.amnesty.org/en/latest/news/2023/12/eu-blocs-decision-to-not-ban-public-mass-surveillance-in-ai-act-sets-a-devastating-global-precedent/
6. https://sites.uab.edu/humanrights/2025/02/13/the-abuse-of-facial-recognition-technology-in-the-hong-kong-protests/
7. https://www.ojp.gov/pdffiles1/nij/230414.pdf
8. https://stpp.fordschool.umich.edu/sites/stpp/files/2024-06/stpp-predictive-policing-memo.pdf
9. https://algorithmwatch.org/en/algorithmic-policing-explained/
10. https://naacp.org/resources/artificial-intelligence-predictive-policing-issue-brief
11. https://lens.civicus.org/facial-recognition-the-latest-weapon-against-civil-society/
12. https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law
13. https://www.lathropgpm.com/insights/artificial-intelligence-and-algorithmic-disgorgement/
14. https://en.wikipedia.org/wiki/Facebook%E2%80%93Cambridge_Analytica_data_scandal
15. https://academic.oup.com/pnasnexus/article/4/4/pgaf083/8097936
16. https://pubmed.ncbi.nlm.nih.gov/40171239/
17. https://academic.oup.com/pnasnexus/article/3/2/pgae034/7610937
18. https://www.npr.org/2024/12/21/nx-s1-5220301/deepfakes-memes-artificial-intelligence-elections
19. https://academic.oup.com/pnasnexus/article/3/2/pgae035/7591134
20. https://www.pnas.org/doi/10.1073/pnas.2403116121
21. https://www.technologyreview.com/2021/06/17/1026519/racial-bias-noisy-data-credit-scores-mortgage-loans-fairness-machine-learning/
22. https://blogs.law.ox.ac.uk/oblb/blog-post/2023/01/economic-and-normative-implications-algorithmic-credit-scoring
23. https://www.cangrade.com/blog/hr-strategy/hiring-bias-gone-wrong-amazon-recruiting-case-study/
24. https://globalnews.ca/news/4532172/amazon-jobs-ai-bias/
25. https://theconversation.com/when-ai-plays-favourites-how-algorithmic-bias-shapes-the-hiring-process-239471
26. https://www.bbc.com/worklife/article/20240214-ai-recruiting-hiring-software-bias-discrimination
27. https://scholar.law.colorado.edu/faculty-articles/1265/
28. https://just-tech.ssrc.org/citation/algorithmic-labor-and-information-asymmetries-a-case-study-of-ubers-drivers/
29. https://ijoc.org/index.php/ijoc/article/view/4892
30. https://www.businessthink.unsw.edu.au/articles/uber-algorithmic-management
31. https://pmc.ncbi.nlm.nih.gov/articles/PMC10631696/
32. https://www.bbc.com/future/article/20150818-how-algorithms-run-amazons-warehouses
33. https://journals.sagepub.com/doi/10.1177/23780231251318389
34. https://www.socialeurope.eu/algorithmic-management-a-codetermination-challenge
35. https://www.elgaronline.com/edcollchap-oa/book/9781035323036/chapter6.xml
36. https://en.wikipedia.org/wiki/Executive_Order_14110
37. https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/
38. https://www.nightfall.ai/ai-security-101/algorithmic-accountability-act
39. https://www.mccarthy.ca/en/insights/blogs/techlex/us-house-and-senate-reintroduce-algorithmic-accountability-act-intended-regulate-ai
40. https://www.dotnitron.com/insights/auditing-ai-systems-best-practices
41. https://www.ibm.com/think/topics/ai-governance
42. https://jolt.law.harvard.edu/assets/digestImages/Farley-Lansang-AI-Auditing-publication-2.13.2025.pdf
43. https://www.nbcnews.com/tech/security/facial-recognition-technology-use-stadiums-us-sparks-protests-rcna167410
44. https://www.essendgroup.com/post/the-role-of-ai-auditing-in-ensuring-transparency-accountability
45. https://www.techtarget.com/searchenterpriseai/tip/How-to-audit-AI-systems-for-transparency-and-compliance
46. https://www.euronews.com/next/2025/05/05/we-are-less-protected-due-to-ai-says-cambridge-analytica-whistleblower-on-protecting-our-d
47. https://pmc.ncbi.nlm.nih.gov/articles/PMC8435155/
48. https://ignesa.com/predictive-policing-examples/
49. https://www.npr.org/2023/06/13/1181868277/how-ai-is-revolutionizing-how-governments-conduct-surveillance
50. https://news.mit.edu/2024/study-ai-inconsistent-outcomes-home-surveillance-0919
51. https://www.ibm.com/think/topics/algorithmic-bias
52. https://www.brookings.edu/articles/geopolitical-implications-of-ai-and-digital-surveillance-adoption/
53. https://www.timesofisrael.com/groups-using-facial-recognition-to-unmask-anti-israel-campus-protesters-for-deportation/
54. https://eucpn.org/sites/default/files/document/files/PP%20(2).pdf
55. https://sdgs.un.org/sites/default/files/2024-05/Francis_Navigating%20the%20Intersection%20of%20AI,%20Surveillance,%20and%20Privacy.pdf
56. https://gulfnews.com/special-reports/ais-quiet-impact-on-2024-campaign-micro-targetting-explained-1.1730823570363
57. https://www.heinz.cmu.edu/media/2024/October/voters-heres-how-to-spot-ai-deepfakes-that-spread-election-related-misinformation1
58. https://ivado.ca/wp-content/uploads/2025/01/IVADOCEIMIA_AIDemocracy_Final.pdf
59. https://www.mcgill.ca/newsroom/channels/news/aengus-bridgman-rise-deepfake-scams-and-political-disinformation-canadas-election-guardian-365131
60. https://home.barclays/content/dam/home-barclays/documents/citizenship/our-reporting-and-policy-positions/policy-positions/20190614-CDEI-CP-Bias-in-Algorithmic-Decision-making-Barclays-Response-FINAL.pdf
61. https://knightcolumbia.org/blog/we-looked-at-78-election-deepfakes-political-misinformation-is-not-an-ai-problem
62. https://www.npr.org/2024/10/18/nx-s1-5153741/ai-images-hurricanes-disasters-propaganda
63. https://www.accessiblelaw.untdallas.edu/post/when-algorithms-judge-your-credit-understanding-ai-bias-in-lending-decisions
64. https://bipartisanpolicy.org/blog/cambridge-analytica-controversy/
65. https://www.forbes.com/sites/cindygordon/2023/12/31/ai-recruiting-tools-are-rich-with-data-bias-and-chros-must-wake-up/
66. https://su.diva-portal.org/smash/get/diva2:1935963/FULLTEXT01.pdf
67. https://www.generixgroup.com/en/blog/warehouse-storage-when-algorithms-make-optimizing-easy
68. https://www.imd.org/research-knowledge/digital/articles/amazons-sexist-hiring-algorithm-could-still-be-better-than-a-human/
69. https://vidcruiter.com/interview/intelligence/ai-bias/
70. https://onlinelibrary.wiley.com/doi/full/10.1002/job.2831
71. https://uwaterloo.ca/school-of-accounting-and-finance/news/algorithm-control-double-edged-sword-uber-drivers-positive
72. https://genderpolicyreport.umn.edu/algorithmic-bias-in-job-hiring/
73. https://journals.sagepub.com/doi/10.1177/14748851221082078
74. https://www.biometricupdate.com/202502/eu-ban-on-unacceptable-ai-comes-into-force-with-crucial-details-unresolved
75. https://www.brookings.edu/articles/one-year-later-how-has-the-white-house-ai-executive-order-delivered-on-its-promises/
76. https://www.paloaltonetworks.ca/cyberpedia/ai-governance
77. https://www.dlapiper.com/en-ca/insights/publications/2025/01/white-house-ai-executive-order-sets-its-sights-on-free-market-innovation
78. https://en.wikipedia.org/wiki/Algorithmic_accountability
79. https://dualitytech.com/blog/ai-governance-framework/
80. https://www.bdo.ca/insights/responsible-ai-guide-a-comprehensive-road-map-to-an-ai-governance-framework
