Active Projects
Designing the Future of AI-Infused Mental Healthcare:
Towards Designing AI Systems, Services, and Policies Simultaneously
From detecting and mitigating people's suicidal thoughts using social media, to tracking and alleviating depression using the Apple Watch, commercial health-sensing devices and AI software hold exciting promise for addressing the national mental health crisis in the U.S. However, such applications could not only fail to improve patient outcomes, but also harm patient experience, compromise patient data privacy, and exacerbate already alarming health inequalities. This project brings together experts in clinical machine learning/NLP, mental health, human-centered AI, law/policy, and business/economics to deliberate how these disciplines can join forces in realizing the transformative promise of everyday sensing and AI in mental healthcare while preventing unintended harms.
This project is supported by the NIH and Weill Cornell Medicine. Angel Hwang (HCI) and Tony Wong (HCI, NLP) are student leads. Example publications include "Societal-Scale Human-AI Interaction Design? How Hospitals and Companies are Integrating Pervasive Sensing into Mental Healthcare" (CHI'24), 
"Designing Technology and Policy Simultaneously: Towards A Research Agenda and New Practice" (CHI'23 workshop), and "The Future of HCI-Policy Collaboration" (CHI'24).
Supported by the Center for Data Science for Enterprise and Society, we will host a week-long, invite-only workshop on this topic in summer 2025! Contact us if you are interested in joining.
Social Media Co-Pilot:
Conversational Agents That Teach Teenagers Digital Literacy and Cybersafety In-Situ
This project addresses the urgent need for cybersafety education for teenagers. Specifically, it creates "Social Media Co-Pilot" conversational agents for Social Media TestDrive, a social media simulation platform. These Co-Pilots help teenagers understand cyberbullying situations, support victims in a considerate manner, and assist in composing positive messages to bullies. To achieve this, the project builds accessible tools that let teachers create custom chatbots for their educational needs by combining the fluency and adaptivity of pre-trained Large Language Models (LLMs) with the controllability of dialogue trees, as sketched below.
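To make the LLM/dialogue-tree combination concrete, here is a minimal, hypothetical sketch of that kind of hybrid: a teacher-authored tree fixes the pedagogical intent and constraints of each turn, while a language model supplies the fluent wording. The names (DialogueNode, generate_reply, the llm callable) and prompt wording are illustrative assumptions, not the project's actual implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a teacher-authored dialogue tree constrains what each
# chatbot turn should accomplish, while an LLM produces the fluent wording.

@dataclass
class DialogueNode:
    intent: str                                    # what this turn should accomplish (teacher-specified)
    guidance: str                                  # constraints the teacher wants the reply to respect
    children: dict = field(default_factory=dict)   # learner-response label -> next node

def generate_reply(node: DialogueNode, learner_message: str, llm) -> str:
    """Ask the LLM to realize the teacher's intent for this node in fluent text."""
    prompt = (
        "You are a supportive cyberbullying-education co-pilot.\n"
        f"Goal of this turn: {node.intent}\n"
        f"Constraints from the teacher: {node.guidance}\n"
        f"The learner just wrote: {learner_message}\n"
        "Reply in 1-2 sentences that satisfy the goal and constraints."
    )
    return llm(prompt)   # llm is any text-in/text-out completion function

# Example tree fragment: first recognize the situation, then coach a supportive reply.
root = DialogueNode(
    intent="Help the learner recognize that the post they saw is cyberbullying.",
    guidance="Be warm, never blame the learner, ask one reflective question.",
    children={
        "recognized": DialogueNode(
            intent="Coach the learner to draft a considerate message to the victim.",
            guidance="Encourage empathy; do not put words in the learner's mouth.",
        )
    },
)
```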
This project is supported by the NSF (#2302977 and #2313077). Nader Akoury (NLP) is the student lead; alumnus Michael Hedderich (NLP) was the previous student lead. Example publications include A Piece of Theatre (CHI'24).
Harnessing Large Language Models (LLMs) for Knowledge Dissemination
This project helps laypeople with varying levels of medical literacy comprehend the medical texts they need to read: a patient reading discharge notes from their doctor, a caregiver reading a clinical trial report to better understand a sick family member's prognosis, a biomedical engineer reading medical literature for their own engineering needs, or even a physician reading the latest discoveries from another medical specialty. To achieve this goal, the project creates a medical reading tool that interactively simplifies and elaborates the parts of a text that readers struggle to understand. Using purpose-built Augmented Language Models, the tool adapts its explanations to the user's medical literacy and reading needs and supports medical texts across disease areas.
Using this methodology and these LLM techniques, we are expanding the work to law and policy contexts, exploring LLMs' potential to enhance citizens' legal literacy and encourage public commenting on policy.
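As a rough illustration of the highlight-to-explain interaction described above (not the project's actual system), a reading tool of this kind might send the passage a reader selects to a language model together with the reader's self-reported literacy level and ask for either a simplification or an elaboration. The function name, literacy levels, and prompt wording below are assumptions.

```python
from typing import Callable

# Hypothetical sketch of the interactive simplify/elaborate reading tool.
# `complete` stands in for any text-in/text-out language-model call.

def explain_passage(
    passage: str,
    mode: str,                       # "simplify" or "elaborate"
    literacy: str,                   # e.g., "patient", "caregiver", "engineer", "physician"
    complete: Callable[[str], str],  # language-model completion function
) -> str:
    action = (
        "Rewrite the passage in plain language a non-expert can follow"
        if mode == "simplify"
        else "Expand the passage with the background a reader at this level is missing"
    )
    prompt = (
        f"The reader describes themselves as: {literacy}.\n"
        f"{action}.\n"
        "Keep all clinically important details accurate and unchanged.\n\n"
        f"Passage: {passage}"
    )
    return complete(prompt)

# Usage: a reader highlights a sentence in a discharge note and asks for a simpler version.
# simpler = explain_passage("Echocardiogram showed preserved ejection fraction.",
#                           mode="simplify", literacy="patient", complete=my_llm)
```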
This project is supported by Schmidt Futures and Cornell's Digital and AI Literacy Initiative. Michael Hedderich (NLP) and Chandrayee Basu (ML) are student leads on the medical front; example publications include Med-EASi (AAAI'23) and explaining AI predictions with medical literature (CHI'23). Yeonju Jang (HCI) is the student lead on the legal front.
Harnessing Large Language Models (LLMs) for Knowledge Dissemination in Classrooms
Generative Artificial Intelligence (genAI) models such as ChatGPT have started reshaping knowledge work, turbocharging many intellectual tasks while automating or eliminating others. How can we prepare students and citizens across disciplines for such a future? This project develops the first Experiential Learning module that enables students across majors to examine and harness genAI capabilities for their respective disciplines. Essential to this module is GenAI Explorer, a new GPT-based software tool that lets students experiment systematically with genAI capabilities without writing code. Teachers from different domains (e.g., journalism, law, literary justice) can adapt the software to their own domains and teaching needs, as illustrated below.
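One way to picture "systematic experimentation without writing code" is a teacher-editable configuration that enumerates the prompt and setting combinations students will compare. This is a purely illustrative sketch, not GenAI Explorer's actual design; the configuration fields and function names are assumptions.

```python
import itertools

# Illustrative, teacher-editable configuration for a journalism course
# (not GenAI Explorer's real schema): tasks, styles, and sampling settings
# that students will compare side by side.
course_config = {
    "domain": "journalism",
    "tasks": ["summarize this press release", "fact-check this claim"],
    "styles": ["neutral wire-service tone", "explanatory, for a teen audience"],
    "temperatures": [0.2, 0.8],
}

def build_experiments(config: dict) -> list[dict]:
    """Enumerate every task/style/temperature combination students will run and discuss."""
    return [
        {"prompt": f"{task}, written in a {style}.", "temperature": temp}
        for task, style, temp in itertools.product(
            config["tasks"], config["styles"], config["temperatures"]
        )
    ]

for experiment in build_experiments(course_config):
    print(experiment)   # each row is one condition a student can try against the model
```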
This project is supported by Schmidt Futures and the Cornell Center for Teaching Innovation. Talia Wise (HCI) and Khonzoda Umarova (NLP) are student leads. Example publications include CoAuthor (CHI'22).