About me
I am a PhD student at NYU, advised by Prof. Danny Huang. I am also fortunate to receive guidance from Prof. Jessica Staddon and Prof. Rahul Chatterjee in various capacities.
My research focuses on responsible and human-centered AI, where I work to reduce end-user security, privacy, and safety risks in LLMs, with particular attention to vulnerable populations such as individuals experiencing technology-facilitated intimate partner abuse (TFIPA). I use empirical methods to uncover risks in generative AI applications and design interventions that address these failures. My work contributes to safer AI systems and offers evidence-based insights for policy and practice.
Research interests
- Empirical assessment of human-centered applications of generative AI in the security, privacy, and safety domains for diverse groups of end-users, such as consumers, technology professionals, and at-risk populations.
- Empirical measurement of network security, privacy, and safety.
- Algorithmic auditing of “social AI”: examining safety, security, and moderation challenges as generative AI is used to create content on social networks. I am interested in measuring whether platform protections are effective and equitably applied across users of different social backgrounds and age groups.
Recent Updates
- February 2026, presented a poster on our ongoing work Responsibly Assisting Technology-Facilitated Abuse Survivor Support Ecosystem Stakeholders with Generative AI at ABSURD.
- February 2026, gave a talk to Prof. Elissa Redmiles’ group at Georgetown about our evaluation of LLM responses to security questions from consumers and technology-facilitated abuse survivors.
- January 2026, our paper Assessing LLM Response Quality in the Context of Technology-Facilitated Abuse was accepted to USENIX 2026.
- December 2025, presented my work Learned, Lagged, LLM-splained: LLM Responses to End User Security Questions at ACSAC 2025.
- December 2025, presented my work Experimental Aspects of Evaluation of LLM Responses to End-user Security Questions at the LASER workshop co-hosted with ACSAC.
- November 2025, gave a talk at the Cornell Tech security seminar about my work evaluating LLMs’ ability to help TFA survivors with their technology questions and end-users with security questions.
2023
- Presented our work In the Room Where It Happens: Characterizing Local Communication and Threats in Smart Homes at IMC '23.
2022
- Presented our work Inferring Software Update Practices on Smart Home IoT Devices Through User Agent Analysis at the SCORED workshop co-located with ACM CCS '22.
2021
- Sept 2021 - I am excited to join NYU and be a part of CCS as a PhD fellow under the supervision of Prof. Danny Huang.
- Sept 2021 - My bug report on the Android Bluetooth stack was assigned CVE-2021-0968 by Google.
