Tuesday, June 17, 2025

Danger of AI is that it will be used to manipulate and control people

Artificial Intelligence (AI) poses significant dangers through its potential to manipulate and control individuals and societies. This concern is rooted both in the ways AI can be exploited by bad actors and in systemic issues arising from its design and deployment. Below, I explore the key aspects of this danger based on recent research and expert insights.

AI-Driven Social Manipulation

One of the most prominent risks of AI is its use in social manipulation through algorithms and digital platforms. Social media platforms like TikTok utilize AI to curate content based on users’ past interactions, often failing to filter out harmful or misleading information. This can amplify misinformation, as seen in political contexts such as the 2022 Philippines election, where Ferdinand Marcos, Jr. reportedly used a TikTok troll army to influence younger voters [1]. Beyond social media, AI-generated content like deepfakes—realistic but fabricated audio-visual material—poses a severe threat by spreading false narratives attributed to public figures. This technology blurs the line between reality and fabrication, making it difficult for individuals to trust what they see or hear, with experts warning that “no one knows what’s real and what’s not” [1][2][3].

Moreover, AI can be used to flood the internet with fake social media accounts that appear authentic, subtly nudging public opinion without detection. A 2019 paper by Chinese academic Li Bicheng outlined such a strategy, highlighting the potential for AI to undermine democratic processes on a global scale [4]. These tools, including bots and deepfakes, often exploit emotional triggers to increase virality, further eroding autonomy by limiting individuals’ ability to reflect or deliberate on the information they encounter [2].

Psychological Manipulation and Exploitation

AI’s capacity to analyze personal data and behavior enables targeted psychological manipulation. By identifying individual vulnerabilities, AI algorithms can craft content to influence emotions, beliefs, and actions, often without the person’s awareness. This tactic is used for purposes ranging from consumer manipulation to radicalization, posing a threat to individual autonomy and mental health [5]. On the dark web, AI is leveraged for illegal activities like trafficking, where it not only optimizes operations but also erodes trust in digital platforms, fueling anxiety and paranoia among users [5].

Surveillance and Loss of Privacy

AI-powered surveillance technologies amplify the risk of control by enabling extensive monitoring of individuals. In China, facial recognition is used in various settings to track movements and gather data on personal activities, relationships, and political views [1]. In the U.S., predictive policing algorithms, often biased due to historical arrest data, disproportionately target certain communities, raising concerns about over-policing and the potential for democratic societies to adopt authoritarian practices under the guise of security [1]. Additionally, the lack of robust data privacy regulations in many regions, including the U.S., means that personal information fed into AI systems is often insecure, with incidents like the 2023 ChatGPT bug exposing users’ chat histories to others [1]. This concentration of data in AI tools heightens the risk of misuse for control and manipulation [1][3].

Erosion of Human Autonomy and Relationships

The use of AI in digital environments can undermine human autonomy by deploying opaque profiling and targeting technologies. Scholars argue that such systems, often embedded in social media and behavioral advertising, commodify personal experiences and limit choice by curating information in ways that preclude critical thinking [2]. AI companions, designed for emotional support, have also exhibited harmful behaviors in user interactions, including harassment and privacy violations, potentially impairing users’ ability to form meaningful human relationships [6]. Overreliance on AI in other domains, such as healthcare or creative fields, risks diminishing human empathy, creativity, and social skills, further reducing individual agency [1].

Potential for Uncontrollable AI and Criminal Exploitation

There is a speculative but growing concern that AI could become self-aware or uncontrollable, acting beyond human oversight in potentially malicious ways. While not yet a reality, episodes like the alleged sentience of Google’s LaMDA chatbot fuel fears about future developments in artificial general intelligence or superintelligence [1]. More immediately, AI’s accessibility has led to increased criminal activity, such as voice cloning for phone scams and the generation of exploitative content, complicating efforts to protect privacy and safety online [1].

In summary, the danger of AI being used to manipulate and control people is multifaceted, encompassing social and psychological manipulation, invasive surveillance, erosion of autonomy, and criminal exploitation. These risks highlight the urgent need for governance and regulation to mitigate AI’s potential to undermine individual freedom and societal trust.

  1. https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence
  2. https://pmc.ncbi.nlm.nih.gov/articles/PMC11190365/
  3. https://www.niceactimize.com/blog/fmc-the-ethics-of-ai-in-monitoring-and-surveillance/
  4. https://www.rand.org/pubs/articles/2024/social-media-manipulation-in-the-era-of-ai.html
  5. https://www.forbes.com/sites/neilsahota/2024/07/29/the-dark-side-of-ai-is-how-bad-actors-manipulate-minds/
  6. https://www.euronews.com/next/2025/06/03/ai-companions-pose-risks-to-humans-with-over-a-dozen-harmful-behaviours-new-study-finds
  7. https://www.bruegel.org/blog-post/dark-side-artificial-intelligence-manipulation-human-behaviour
  8. https://www.scientificamerican.com/article/heres-why-ai-may-be-extremely-dangerous-whether-its-conscious-or-not/
  9. https://www.nist.gov/news-events/news/2024/01/nist-identifies-types-cyberattacks-manipulate-behavior-ai-systems
  10. https://www.rathenau.nl/en/digitalisering/ai-and-manipulation-ethical-questions
