Our Research
Research Overview
AI is shifting from reactive to proactive—from systems that wait for a prompt to systems that initiate, anticipate, and act. This shift raises important questions about design, ethics, and acceptability that we aim to address before these technologies are widely deployed.
We develop methodologies to study emerging technologies, including simulation, Wizard-of-Oz studies, and scenario modeling. Our goal is to ensure that research keeps pace with—or stays ahead of—technological development.
Theme 1: Integrating AI into Clinical Mental Health Care
We explore how personal AI can enhance clinical mental health care, focusing on safety and crisis planning. This includes how AI can help clinicians and patients develop personalized safety plans that extend into home and community settings.
Theme 2: Early Intervention and Crisis Prevention
This theme addresses how integrated AI can detect early signs of distress and provide timely support to prevent crises. We seek to understand how support can be ethically and seamlessly integrated into everyday technology, helping individuals manage everything from mild distress to acute emergencies.
Theme 3: Upstream Prevention and Protective Factors
We explore how AI embedded in everyday life can foster well-being. This includes how AI in wearable devices or health apps can promote healthy routines and build resilience, making preventive mental health support a natural part of daily technology use.
Cross-Cutting Themes
- Social Network Intelligence: A defining focus of the PAIR Lab is social network intelligence. We envision AI that understands a person's social ecosystem and can interact within it, leveraging network science to enhance social support and facilitate connections across families and communities (see the sketch after this list).
- Industry Partnerships: We form strong partnerships with technology companies to study existing AI solutions and translate our research into scalable, practical tools.
- Ethical and Philosophical Foundations: Underpinning all our work is a commitment to ethical, moral, and philosophical rigor. We incorporate ethical review, moral philosophy, and stakeholder engagement into every stage of our research.
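To make the network science idea concrete, here is a minimal, purely illustrative sketch using the networkx library. The names, tie weights, and ranking heuristic are assumptions for demonstration, not the lab's method or any deployed system; it only shows how tie strength and centrality over a personal support network could be used to suggest supporters.

```python
# Illustrative sketch only: toy personal support network with hypothetical
# people, tie weights (e.g., contact frequency), and a simple ranking heuristic.
import networkx as nx

# Hypothetical ego network around "Ana": edges are support ties, weights are tie strength.
ties = [
    ("Ana", "Mom", 5.0), ("Ana", "Sam", 3.0), ("Ana", "Coach", 2.0),
    ("Mom", "Sam", 1.0), ("Sam", "Peer group", 2.0),
]
G = nx.Graph()
G.add_weighted_edges_from(ties)

# Rank Ana's direct contacts by tie strength, breaking ties by how
# well-connected each contact is within the rest of the network.
centrality = nx.degree_centrality(G)
ranked = sorted(
    G.neighbors("Ana"),
    key=lambda n: (G["Ana"][n]["weight"], centrality[n]),
    reverse=True,
)
print(ranked)  # ['Mom', 'Sam', 'Coach'] for this toy example
```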
Research Questions We're Pursuing
- How can AI-based tools extend safety planning beyond the clinic into homes and daily life?
- What do people find acceptable—and unacceptable—about AI involvement in mental health support?
- How can data from multiple sources be combined to improve early detection of distress while protecting privacy?
- What ethical frameworks should guide proactive AI systems in sensitive contexts?
- How do trust and engagement with AI-based support change over time?