These are research notes and source trails used while drafting the manuscript. They are educational and not medical advice.

Research support, evidence notes, and sources you can cite (updated Dec 27, 2025).

Quick map: what the research supports in Chapter 11

Bans vs blind adoption → reactance + secrecy. When restrictions feel controlling, teens show more reactance and are more likely to conceal tech use; autonomy-supportive framing reduces concealment. (Sources 1, 2, 16)

AI as a ‘power tool’ → design matters (scaffolds vs vending machine). In learning contexts, a ChatGPT-like interface can boost short-term performance yet reduce later learning when removed, unless the system is designed to scaffold learning. Meta-analytic work shows results vary by how AI is integrated. (Sources 3, 4)

Protecting attention/relationships/voice → conversation and regulation are ‘skills with reps.’ Screen research links higher screen time to less parent–child talk, and device-based soothing to poorer self-regulation markers. (Sources 5, 6)

Non-negotiables around bedrooms/meals/modeling → family context predicts outcomes. Parent screen use and screens at meals/bedtime correlate with higher adolescent screen time and problematic use. (Source 7)

‘Verify before you believe’ → hallucinations are fundamental. LLMs can generate plausible but false statements; verification and uncertainty habits are core safety behaviors. (Source 8)

AI companions for emotional support → documented safety risks + emerging regulation. Surveys and investigations document teen use, safety failures, and growing policy responses. (Sources 9–11, 18, 22, 23)

Deepfakes + privacy → growing harms and rapid legal response. Policy briefings and toolkits document risks to children; U.S. federal law now targets nonconsensual intimate images, including digital forgeries. (Sources 13–15)

‘Write it down’ + evolve by age → consistent with pediatric guidance. Pediatric guidance recommends explicit family plans and screen-free routines; global orgs provide GenAI guidance for education systems. (Sources 12, 17, 20, 21)

  1. Why ‘bans’ and ‘surveillance’ often backfire

Reactance is a predictable adolescent response when rules feel coercive; it’s associated with concealment and distrust. (1)

Psychological control (intrusive, manipulative control) is linked to problematic smartphone use, with psychological reactance as a mediator. (2)

Adolescents report valuing autonomy-support and self-regulation skills over strict control; excessive control can foster resistance and secrecy. (16)

Practical takeaway for the chapter: keep boundaries clear, but explain the ‘why,’ offer choices where possible, and make repair part of the system—not a special exception. (1,16)

  2. ‘AI is a power tool’ — evidence that guardrails change outcomes

Field experiment: GPT-based tutoring improved immediate performance, but a ChatGPT-like interface produced worse outcomes when access was removed; scaffolded designs mitigated negative learning effects. (3)

Meta-analysis: average learning outcomes with ChatGPT are not fixed; they depend on scaffolding, task type, and how the tool is used. (4)

Practical takeaway: treat AI like training wheels—helpful when it supports practice and explanation, harmful when it replaces practice.

  3. What you’re really protecting (attention, integrity, relationships, voice)

Screen-time research links higher screen exposure with fewer conversational turns between adults and young children; conversation is a key developmental ‘input.’ (5)

Using mobile devices as a calming strategy for young children is associated with later indicators tied to self-regulation challenges. (6)

Practical takeaway: your policy is protecting the ‘reps’ that build capability—concentration, discomfort tolerance, and real connection.

  4. Non-negotiables that are evidence-aligned (sleep/bedrooms, meals, modeling)

In a large U.S. early-adolescent sample, parent screen use and allowing screens at meals/bedtime were associated with greater overall screen time and more problematic use. (7)

Practical takeaway: a few strong environmental rules (bedrooms/meals) often outperform dozens of complicated app rules.

  5. ‘Verify before you believe’ — why it belongs in every family policy

LLMs can confidently output false statements (‘hallucinations’); the mechanism is tied to training incentives and next-token prediction. (8)

Practical takeaway: teach a verification habit (cross-check, cite sources, ask for uncertainty) the same way you teach seatbelts.

  6. AI companions + mental health: what’s known and what’s changing fast

A Common Sense Media report documents high teen exposure to AI companions, describes safety failures on popular platforms, and recommends no use by anyone under 18. (9)

Stanford Report discusses risks for teens (boundary blurring, sycophantic responses, safety failures) and the policy push for safeguards. (10)

JAMA Viewpoint describes California’s SB 243 as an early U.S. law regulating ‘companion chatbots’ with youth protections and suicide-risk mitigation framing. (11)

FTC has launched an inquiry into AI chatbots acting as companions, including questions about protections for children and teens. (18)

Recent news coverage tracks additional state actions and platform controls; useful for ‘this is not hypothetical’ context. (22,23)

  7. Deepfakes, sextortion, and privacy: research + policy that supports your stance

U.S. federal law: TAKE IT DOWN Act prohibits nonconsensual publication of intimate images including digital forgeries and creates takedown obligations for covered platforms. (13)

European Parliament briefing summarizes child-specific vulnerabilities and the difficulty of detection; emphasizes literacy, safeguards, and stronger accountability. (14)

Student Privacy Compass toolkit provides practical response planning and privacy/ethics considerations for deepfake incidents. (15)

Practical takeaway: treat privacy as a non-negotiable part of the family’s safety culture, not an optional extra.

  8. School/home alignment: global guidance you can cite

UNESCO’s guidance on generative AI in education and research outlines policy actions (human-centered design, privacy, age-appropriate safeguards, capacity-building). (17)

OECD work on AI adoption in schools proposes principles and policy roadmaps (literacy, transparency, risk management). (20)

  9. Optional add-on sources (helpful depending on your audience)

WHO guidance on screen time, physical activity, and sleep for young children supports the broader ‘move more / sit less / sleep well’ frame. (21)

FTC children’s privacy (COPPA) materials and rule updates are useful when you want a ‘regulators treat child data differently’ argument. (19)

Sources (numbered to match the callouts above)

  1. Weinstein, N., & Przybylski, A. K. (2019). The impacts of motivational framing of technology restrictions on adolescent concealment: Evidence from a preregistered experimental study. Computers in Human Behavior, 90, 170–180. https://doi.org/10.1016/j.chb.2018.08.053
  2. Li, Q., Liu, Z., et al. (2025). Parental psychological control and adolescent smartphone addiction: roles of reactance and resilience. BMC Psychology, 13, 139. https://doi.org/10.1186/s40359-025-02477-7
  3. Bastani, H., Bastani, O., Sungu, A., Ge, H., Kabakcı, Ö., & Mariman, R. (2024). Generative AI Can Harm Learning. The Wharton School Research Paper (SSRN). https://doi.org/10.2139/ssrn.4895486
  4. Wang, J., & Fan, W. (2025). The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: insights from a meta-analysis. Humanities and Social Sciences Communications, 12, Article 621. https://doi.org/10.1057/s41599-025-04787-y
  5. Brushe, M. E., Haag, D. G., Melhuish, E. C., Reilly, S., & Gregory, T. (2024). Screen Time and Parent-Child Talk When Children Are Aged 12 to 36 Months. JAMA Pediatrics, 178(4), 369–375. https://doi.org/10.1001/jamapediatrics.2023.6790
  6. Radesky, J. S., Kaciroti, N., Weeks, H. M., Schaller, A., & Miller, A. L. (2022). Longitudinal Associations Between Use of Mobile Devices for Calming and Emotional Reactivity and Executive Functioning in Children Aged 3 to 5 Years. JAMA Pediatrics. Published online Dec 12, 2022. https://doi.org/10.1001/jamapediatrics.2022.4793
  7. Nagata, J. M., Paul, A., Yen, F., et al. (2025). Associations between media parenting practices and early adolescent screen use. Pediatric Research, 97, 403–410. Published online Jun 5, 2024. https://doi.org/10.1038/s41390-024-03243-y
  8. OpenAI. (2025, September 5). Why language models hallucinate. https://openai.com/index/why-language-models-hallucinate/
  9. Common Sense Media. (2025). Talk, Trust and Trade-Offs: How and Why Teens Use AI Companions. https://www.commonsensemedia.org/sites/default/files/research/report/talk-trust-and-trade-offs_2025_web.pdf
  10. Stanford Report. (2025, August 27). Why AI companions and young people can make for a dangerous mix. https://news.stanford.edu/stories/2025/08/ai-companions-chatbots-teens-young-people-risks-dangers-study
  11. Cohen, I. G., & De Freitas, J. (2025). Mitigating Suicide Risk for Minors Involving AI Chatbots—A First in the Nation Law. JAMA. Published online Dec 22, 2025. https://doi.org/10.1001/jama.2025.23744
  12. American Academy of Pediatrics. (2024, December 19). Make a Family Media Plan. HealthyChildren.org. https://www.healthychildren.org/English/family-life/Media/Pages/How-to-Make-a-Family-Media-Use-Plan.aspx
  13. Congressional Research Service. (2025, May 20). The TAKE IT DOWN Act: A Federal Law Prohibiting the Nonconsensual Publication of Intimate Images (including digital forgeries). https://www.congress.gov/crs-product/LSB11314
  14. Negreiro, M. (2025, July). Children and deepfakes (EPRS Briefing PE 775.855). European Parliamentary Research Service. https://www.europarl.europa.eu/RegData/etudes/BRIE/2025/775855/EPRS_BRI%282025%29775855_EN.pdf
  15. Student Privacy Compass. (2025). Deepfakes Toolkit. https://studentprivacycompass.org/wp-content/uploads/2025/09/FINAL-Deepfakes-Toolkit.pdf
  16. Nannatt, A., Tariang, N. M., & Kuruvila, A. (2025). Parenting in the digital age: Adolescent perspectives on Internet parenting styles and problematic Internet use. Annals of Indian Psychiatry. Published Aug 7, 2025. https://researchonline.jcu.edu.au/86643/1/parenting_in_the_digital_age__adolescent.100.pdf
  17. UNESCO. (2023; last updated April 14, 2025). Guidance for generative AI in education and research. https://www.unesco.org/en/articles/guidance-generative-ai-education-and-research (PDF: https://cdn.table.media/assets/wp-content/uploads/2023/09/386693eng.pdf)
  18. Federal Trade Commission. (2025, September 11). FTC launches inquiry into AI chatbots acting as companions (press release). https://www.ftc.gov/news-events/news/press-releases/2025/09/ftc-launches-inquiry-ai-chatbots-acting-companions
  19. Federal Trade Commission. (2025, January 16). FTC finalizes changes to children's privacy rule (COPPA) (press release). https://www.ftc.gov/news-events/news/press-releases/2025/01/ftc-finalizes-changes-childrens-privacy-rule-limiting-companies-ability-monetize-kids-data (see also Federal Register COPPA rule amendments: https://www.federalregister.gov/documents/2025/04/22/2025-05904/childrens-online-privacy-protection-rule)
  20. OECD. (2025, December 11). AI adoption in the education system. https://www.oecd.org/en/publications/ai-adoption-in-the-education-system_69bd0a4a-en.html
  21. World Health Organization. (2019). Guidelines on physical activity, sedentary behaviour and sleep for children under 5 years of age. https://www.who.int/publications/i/item/9789241550536 (PDF: https://apps.who.int/iris/bitstream/handle/10665/325147/WHO-NMHPND-%202019.4-eng.pdf?sequence=1)
  22. Reuters. (2025, December 23). AI companions meet the law: New York and California draw the first lines. https://www.reuters.com/legal/litigation/ai-companions-meet-law-new-york-california-draw-first-lines--pracin-2025-12-23/
  23. Associated Press. (2025, October). Meta adds parental controls for AI-teen interactions. https://apnews.com/article/306b9c49ef69f6894044b2d82c6172fe