About me

I am a PhD student at NYU, advised by Prof. Danny Huang. I am also fortunate to receive guidance from Prof. Jessica Staddon and Prof. Rahul Chatterjee in various capacities.

Research interests

My research focuses on responsible and human-centered AI, where I work to reduce end-user security, privacy, and safety risks in LLMs, with particular attention to vulnerable users such as individuals experiencing technology-facilitated intimate partner abuse (TFIPA). I use empirical methods to uncover risks in generative AI applications and design interventions that address these failures. My work contributes to safer AI systems and offers evidence-based insights for policy and practice.

Future research directions
 - Algorithmic auditing of "social AI" (the use of generative AI in social networks) from a safety and security standpoint to measure its societal impact.
 - Empirical measurements in security, privacy, and safety, as well as human-centered studies.
 - Empirical evaluation of LLMs for diverse groups of end-users, such as consumers, developers, and at-risk populations.

Recent Updates

2023
  • Presented our work In the Room Where It Happens: Characterizing Local Communication and Threats in Smart Homes at IMC '23.
2022
  • Presented our work Inferring Software Update Practices on Smart Home IoT Devices Through User Agent Analysis at the SCORED workshop, co-located with ACM CCS '22.
2021
  • Sept 2021 - I am excited to join NYU and be part of CCS as a PhD fellow under the supervision of Prof. Danny Huang.
  • Sept 2021 - My bug report in the Android Bluetooth stack was assigned CVE-2021-0968 by Google.