My research builds on the fields of human-computer interaction, security and privacy, and computational social science to study and remedy the inequities in access to digital security, privacy, and safety.
Read about some of my recent projects below.
As conversational AI becomes increasingly prevalent, understanding its impact on young users is crucial—particularly how these systems influence teenagers' health and wellness decisions through both linguistic and paralinguistic cues. This study explores how AI chatbots use such cues to influence decision-making awareness among teens and investigates ways to design these cues to support informed decision-making rather than manipulate behavior.
Our mixed-methods approach began with 40-minute exploratory interviews to understand teenagers' current practices and attitudes regarding AI interactions. These interviews were followed by a 15-minute quantitative experiment to analyze how linguistic nudges and paralinguistic cues influence teens' decision-making processes.
Our research addresses several key questions: How do linguistic and paralinguistic nudges influence teenagers' decision-making awareness in everyday health and wellness contexts? How can these nudges be designed to support awareness and reduce manipulation in Conversational User Interfaces (CUIs)?
Threat modeling is a well-established practice in software security, where systems are assessed for potential vulnerabilities that could be exploited. However, in the field of usable security and privacy, a new form of threat modeling has emerged—one that focuses on people, particularly at-risk populations, rather than software systems. Despite its growing importance, this human-centered threat modeling is often conducted in an ad hoc manner, lacking a formalized structure.
To address this gap, we have conducted a Systematization of Knowledge (SoK), analyzing a corpus of papers that explore human-centered threat modeling. Our goal was to identify common characteristics and approaches across these studies and systematize them into a comprehensive framework. This framework is designed to guide researchers working with specific populations, helping them better understand the unique online threats those individuals face and the strategies they use to protect themselves.
Our work aims to formalize the process of human-centered threat modeling, offering a structured methodology that can be applied consistently across studies of online harms. We believe that this framework has the potential to become a valuable tool for researchers investigating how different groups experience and mitigate digital risks, enabling more effective, targeted security interventions in the future.
For more details, read our SoK here.
[pdf]
This study explored how first- and second-generation Pakistani immigrants navigated security and privacy challenges in their daily technology use, focusing on the dynamics between parents and children.
Through in-depth interviews, we examined how first-generation immigrants (parents) and second-generation immigrants (children) approached issues like online privacy, data security, and technology use, both individually and within their families. We aimed to understand how these practices evolved across generations, considering cultural, social, and technological factors that influenced behavior.
Our findings highlighted the unique security and privacy concerns faced by immigrant families and offered recommendations for designing technology platforms that better address their needs,
such as enhancing privacy controls and supporting communication across generational and cultural divides. In the paper, we advocate for considering diverse cultural contexts when designing secure, privacy-focused technology for immigrant communities.
For more details, read our paper here.
[pdf]
This ongoing study investigates the nuanced ways in which young adults in Pakistan define and experience harassment, both online and offline, with a focus on gender-specific differences. Through semi-structured interviews, we explore how individuals perceive and respond to various forms of harassment and how these perceptions shape their threat models and safety practices. The study includes participants from different socio-economic statuses, aiming to capture a broad spectrum of experiences and understand the social and cultural factors influencing these definitions.
Our research seeks to provide deeper insights into how harassment is experienced and managed across genders, offering practical recommendations for designing interventions, policies, and technologies that better support survivors.
If you are an undergrad at BYU and would like to join this study as a researcher, reach out to me at warda97@byu.edu.
We want to interview researchers who are actively conducting studies of technology-related threats to individuals,
focusing on threats as perceived by those individuals, and primarily using qualitative research methods. Sign up and read more here.
This study conducts in-depth interviews with leading researchers in the field of usable security and privacy to
explore their approaches to studying harms and mitigations. By examining their methodologies,
research challenges, and long-term goals, we aim to provide practical guidelines for conducting rigorous, human-centered threat modeling.
End-to-end encrypted secure email systems are the most effective means of ensuring privacy and security in email communication. Despite these clear advantages, adoption rates remain low, with prior research attributing this mainly to the complexity and inconvenience of secure email. However, the perspectives of those who have voluntarily adopted encrypted email systems are often overlooked.
To explore these perspectives, we conducted a semi-structured interview study aimed at understanding the mindsets driving the adoption and use of secure email services. Our participants were diverse, coming from various countries, with differences in how long they had used secure email, how frequently they used it, and whether they used it as their primary account.
Our findings reveal that a key motivator for adopting secure email is avoiding surveillance by large tech companies.
Yet, despite varying levels of mental model complexity, participants rarely send or receive encrypted emails,
limiting their potential privacy benefits. This suggests that while privacy features may drive greater adoption,
the full privacy potential will remain untapped until a critical mass of users can easily exchange encrypted emails.
For more details, see our SOUPS 2023 paper, Distrust of Big Tech and a Desire for Privacy: Understanding the Motivations of People Who Have Voluntarily Adopted Secure Email.
[pdf]