About me
I am a PhD student at NYU, advised by Prof. Danny Huang. I am also fortunate to receive guidance from Prof. Jessica Staddon and Prof. Rahul Chatterjee in various capacities.
Research interests
My research focuses on responsible and human-centered AI, where I reduce end-user security, privacy, and safety risks in LLMs, with particular attention to vulnerable users such as individuals experiencing technology-facilitated intimate partner abuse (TFIPA). I use empirical methods to uncover risks in generative AI applications, and design interventions that address these failures. My work contributes to safer AI systems and offers evidence-based insights for policy and practice.
Future research directions
- Algorithmic auditing of “social AI” (the use of generative AI in social networks) from a safety and security perspective to measure its societal impact.
- Empirical measurements in security, privacy, and safety, as well as human-centered studies.
- Empirical evaluation of LLMs for diverse groups of end users, such as consumers, developers, and at-risk populations.
Recent Updates
- December 2025, presented my work Learned, Lagged, LLM-splained: LLM Responses to End User Security Questions at ACSAC 2025.
- December 2025, presented my work Experimental Aspects of Evaluation of LLM Responses to End-user Security Questions at the LASER workshop co-hosted with ACSAC.
- November 2025, gave a talk at the Cornell Tech security seminar about my work on evaluating LLMs' ability to help TFA survivors with their technology questions and end users with their security questions.
2023
- Presented our work In the Room Where It Happens: Characterizing Local Communication and Threats in Smart Homes at IMC '23.
2022
- Presented our work Inferring Software Update Practices on Smart Home IoT Devices Through User Agent Analysis at the SCORED workshop co-located with ACM CCS '22.
2021
- Sept 2021 - I am excited to join NYU and be a part of CCS as a PhD fellow under the supervision of Prof. Danny Huang.
- Sept 2021 - My bug report in the Android Bluetooth stack was assigned CVE-2021-0968 by Google.
