My Research
The abilities of ChatGPT, released in late 2022, were a shock for many in the machine learning community (including me). This breakthrough ignited an arms race to train large generative models with ever-increasing capabilities. In particular, we can foresee a near future in which ML systems are endowed with interaction and agency capabilities.
In light of these emerging phenomena, my research explores the challenges that arise as sophisticated ML systems transition into agents:
- Adversarial Robustness of LLMs' Safety Alignment: How can we build safer LLMs and improve the evaluation of their robustness?
- Cooperation and Negotiation in Multi-Agent Contexts: How can we design algorithms that learn the long-term benefits of cooperation while still achieving high value on their own objectives? In a world where rational agents serve their own interests first, negotiation protocols and robustness are essential to reaching high levels of cooperation.
- Principled Learning Methods for (Multi-Agent) RL: Can we design training methods that tackle the challenges of learning in non-stationary environments?
- Risks and Benefits of Interaction with Synthetic Data: When, and to what extent, can synthetic data improve (or degrade) the performance of models trained on it? How do generative models affect each other's learning when deployed in shared environments?
I identify with the fields of ML (JMLR, NeurIPS, ICML, AISTATS, COLT, and ICLR) and optimization (SIAM OP).