Securing models against adversarial manipulation is table stakes today for real-world GenAI/LLM deployments. In our new position paper with @BanghuaZ, @JiantaoJ, and David Wagner, we outline current challenges and promising directions for future work in GenAI security.