About Me
Hello! I am a BS/MS student at Georgia Tech, where I work on Natural Language Processing (NLP) and am advised by Prof. Alan Ritter. This summer, I was also an intern at the Center for Human-Compatible Artificial Intelligence (CHAI) at UC Berkeley, working in Prof. Stuart Russell's group.
Current Research Interests: I am interested in building and evaluating efficient human-in-the-loop (HITL) systems and investigating applications of HITL learning that can be deployed in critical domains (e.g., misinformation). Additionally, I am interested in large language model (LLM) robustness, specifically as it relates to privacy preservation and model hijacking.
Areas of Future Research Interest: There are a few areas that I hope to break into in the future: building LLM agents, model interpretability (especially for adversarial detection), and neural program synthesis (especially with HITL learning).
I am applying to ML/NLP Ph.D. programs in Fall 2023.
Feel free to reach out at emendes3[at]gatech[dot]edu.
Publications
Human-in-the-loop Evaluation for Early Misinformation Detection: A Case Study of COVID-19 Treatments
Ethan Mendes, Yang Chen, Wei Xu, Alan Ritter
ACL 2023
[paper] [data]
Tensor Trust: Interpretable Prompt Injection Attacks from an Online Game
Sam Toyer, Olivia Watkins, Ethan Mendes, Justin Svegliato, Luke Bailey, Tiffany Wang, Isaac Ong, Karim Elmaaroufi, Pieter Abbeel, Trevor Darrell, Alan Ritter, Stuart Russell
R0-FoMo: Robustness of Few-shot and Zero-shot Learning in Foundation Models at NeurIPS 2023 (Spotlight) and Workshop on Instruction Tuning and Instruction Following at NeurIPS 2023
[paper] [data] [code] [project page]
Under Submission
Can Language Models be Instructed to Protect Personal Information?
Yang Chen*, Ethan Mendes*, Sauvik Das, Wei Xu, Alan Ritter
[paper] [data] [project page]
Preprints
Defending Against Imperceptible Audio Adversarial Examples Using Proportional Additive Gaussian Noise
Ethan Mendes, Kyle Hogan
[paper]