Thursday, December 4, 2025

Evidence linking AI personalization to teen addiction rates

AI-driven personalization is strongly associated with higher risk of addictive use and mental health problems in teens, but the evidence is a mix of correlational studies, lab experiments, and emerging work on AI chatbots rather than a single causal smoking gun. The strongest links involve recommendation algorithms and personalized feeds that increase time-on-platform, expose vulnerable adolescents to more extreme or harmful content, and interact with developmental brain sensitivities. [pmc.ncbi.nlm.nih +2]

Personalized feeds and addiction-like use

A 2025 NIH-linked review on social media algorithms and teen addiction describes how AI systems continuously adjust content to each teen’s behavior, creating a loop in which personalized, highly engaging posts trigger dopamine-driven reward responses and make it harder to log off. The review connects this personalization–reward loop with increased symptoms of social media addiction in adolescents, especially given developmental changes in prefrontal control and heightened reward sensitivity. [pmc.ncbi.nlm.nih]
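The feedback loop the review describes can be sketched in a few lines of Python. This is a hypothetical toy, not any platform's actual system: the item names, the user's preferences, and the score-update rule are all invented for illustration. The point is the dynamic: items that earn engagement get scored higher and shown again, so the feed narrows onto whatever already captures attention.

```python
def recommend(scores):
    """Show the item with the highest learned engagement score."""
    return max(scores, key=scores.get)

def update(scores, item, engaged, lr=0.5):
    """Move the item's score toward the observed engagement (0.0 or 1.0)."""
    scores[item] += lr * (engaged - scores[item])

# Toy user who reliably engages with only one compulsive topic.
user_prefs = {"news": 0.1, "sports": 0.2, "extreme_diet": 0.9}
scores = {item: 0.5 for item in user_prefs}  # recommender starts neutral

feed = []
for _ in range(20):
    item = recommend(scores)
    # Deterministic toy response: engage only with strongly preferred content.
    engaged = 1.0 if user_prefs[item] > 0.5 else 0.0
    update(scores, item, engaged)
    feed.append(item)
```

After two exploratory picks that earn no engagement, the loop locks onto the high-engagement topic for the remaining 18 of 20 slots; nothing in the objective distinguishes "engaging" from "harmful", which is the core concern the review raises.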

Legal and medical analyses note that algorithmic “For You”-style feeds can rapidly flood teen accounts with narrow themes (e.g., extreme dieting, self-harm, or appearance-focused content), which is associated with poorer body image, eating disorders, and suicidality. A recent scoping review on youth and social media emphasizes that curated, highly personalized feeds designed to maximize engagement can amplify exposure to harmful content and increase mental health risks in young people aged 12–25. [cambridge +2]

Recommender systems and vulnerable groups

Work on recommender systems and children shows that algorithmic personalization accelerates exposure to harmful material and can overwhelm young users’ ability to curate their own feeds, though the authors stress that more longitudinal data are needed for precise causal estimates. Population studies find that adolescents with internalizing conditions (anxiety, depression) spend more time on social media, engage more in social comparison, and are more affected by online feedback than peers without such conditions, suggesting that personalized, feedback-heavy environments may hit them hardest. [papers.ssrn +1]

A broader law-and-medicine review argues that image‑based platforms with personalized recommendation loops intersect with adolescent neurodevelopment to increase the risk of addiction-like behaviors and mental health harm, and calls for regulation that explicitly targets these algorithmic practices. Guidance from pediatric and youth-mental-health organizations likewise flags personalized social feeds as a major factor in rising distress, while cautioning that responses must account for individual differences and context. [yalemedicine +2]

Emerging evidence on AI chatbots

Newer studies extend the concern from feeds to AI companions. Survey and clinical reports describe adolescents developing dependence on AI tools and chatbots, with one longitudinal study finding that about 17–24% of adolescents met criteria for AI dependence across two time points, and that mental health problems predicted stronger dependence over time. Qualitative work with teens using AI companion chatbots maps their experiences onto classic behavioral addiction components (salience, mood modification, tolerance, withdrawal, conflict, relapse), with reported consequences including sleep loss, academic decline, and strained offline relationships. [arxiv +1]

Policy and ethics papers warn that real‑time, hyper‑personalized generative AI and companion bots could become a next wave of addictive media because they adjust style, emotional tone, and content to each user moment‑to‑moment. Early clinical and prevention resources already list adolescents as a high‑risk group for “AI addiction,” particularly when chatbots are used to regulate mood or substitute for human relationships. [papers.ssrn +1]

Across this literature, the most solid findings show that personalized, engagement-optimizing systems are associated with more time online, more exposure to harmful content, and higher rates of addiction-like symptoms and mental health problems in adolescents. However, researchers repeatedly emphasize limitations: many studies are cross‑sectional, self‑report based, and confounded by pre‑existing vulnerabilities, so they cannot cleanly prove that AI personalization alone causes addiction rather than intensifying risk in already susceptible teens. [jmir +4]

The emerging consensus is that personalization acts as a risk amplifier: it interacts with adolescent brain development, existing mental health issues, and social context to deepen problematic use and make disengagement harder. That is why recent recommendations focus less on banning AI outright and more on constraining personalization objectives (away from pure engagement), adding friction and choice around recommender systems, and strengthening safeguards for youth in both social media and AI chatbot environments. [nature +5]
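What "constraining personalization objectives" could mean in practice can be illustrated with a minimal ranking sketch. Everything here is assumed for the example: the item attributes, the harm-risk estimates, and the weights are invented, and no regulator or platform prescribes this particular formula. The contrast is between ranking purely by predicted engagement and ranking by an objective that penalizes harm risk and adds friction against repeating a narrow topic.

```python
def pure_engagement_rank(items):
    """Baseline: rank solely by predicted engagement."""
    return sorted(items, key=lambda it: it["engagement"], reverse=True)

def constrained_rank(items, harm_weight=2.0, repeat_penalty=0.3, history=()):
    """Rank by engagement minus a harm penalty and a topic-repetition penalty."""
    def score(it):
        s = it["engagement"]
        s -= harm_weight * it["harm_risk"]                 # down-weight risky content
        s -= repeat_penalty * history.count(it["topic"])   # friction against narrow loops
        return s
    return sorted(items, key=score, reverse=True)

# Toy candidate pool with made-up engagement and harm-risk estimates.
items = [
    {"topic": "extreme_diet", "engagement": 0.9, "harm_risk": 0.8},
    {"topic": "sports",       "engagement": 0.6, "harm_risk": 0.1},
    {"topic": "news",         "engagement": 0.3, "harm_risk": 0.0},
]
```

Under the pure-engagement objective the riskiest item wins the top slot (0.9 beats everything); under the constrained objective it drops to the bottom (0.9 − 2.0 × 0.8 = −0.7) and a safer item surfaces instead. Passing a viewing history shifts the ranking further away from recently repeated topics, which is one concrete way to implement the "friction and choice" the recommendations call for.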

  1. https://pmc.ncbi.nlm.nih.gov/articles/PMC11804976/
  2. https://www.cambridge.org/core/journals/american-journal-of-law-and-medicine/article/algorithms-addiction-and-adolescent-mental-health-an-interdisciplinary-study-to-inform-statelevel-policy-action-to-protect-youth-from-the-dangers-of-social-media/EC9754B533553BDD56827CD9E34DFC25
  3. https://www.jmir.org/2025/1/e72061
  4. https://pmc.ncbi.nlm.nih.gov/articles/PMC10476631/
  5. https://papers.ssrn.com/sol3/Delivery.cfm/4978809.pdf?abstractid=4978809&mirid=1
  6. https://www.nature.com/articles/s41562-025-02134-4
  7. https://www.yalemedicine.org/news/social-media-teen-mental-health-a-parents-guide
  8. https://www.aap.org/en/patient-care/media-and-children/center-of-excellence-on-social-media-and-youth-mental-health/
  9. https://arxiv.org/html/2507.15783
  10. https://pmc.ncbi.nlm.nih.gov/articles/PMC10944174/
  11. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5666192
  12. https://www.addictioncenter.com/behavioral-addictions/ai-addiction/
  13. https://www.apa.org/monitor/2025/10/technology-youth-friendships
  14. https://www.sciencedirect.com/science/article/abs/pii/S1876201825001194
  15. https://www.edweek.org/technology/researchers-posed-as-a-teen-in-crisis-ai-gave-them-harmful-advice-half-the-time/2025/08
  16. https://www.mobicip.com/blog/teen-ai-chatbot-addiction
  17. https://www.icthealth.org/news/teens-increasingly-turn-to-ai-chatbots-for-mental-health-help
  18. https://www.tandfonline.com/doi/full/10.1080/1369118X.2025.2470227
  19. https://www.sciencedirect.com/science/article/pii/S026840122300124X
  20. https://scholar.dsu.edu/cgi/viewcontent.cgi?article=1222&context=ccspapers
