Assistant Professor @UZH_en and group leader of the ALPI Lab 🌼
Working on RL, multi-agent, imitation learning, and other sequential decision-making problems
🔥 I'm thrilled to announce that I'll be joining the University of Zurich @UZH_ch as an Assistant Professor in Machine Learning next year!
I'll be supervising PhD students, so stay tuned for updates on my website, or feel free to reach out via email.
Exciting opportunity! Join my new research group at the University of Zurich as a PhD candidate in Reinforcement Learning.
Contribute to cutting-edge AI research and shape the future of the field 🔥
To apply, follow the instructions on my website: #UZH #RL
🎉 Exciting news! I am currently seeking candidates for PhD/Post-Doc positions at the @ETH_AI_Center.
Ready to dive into the exciting world of RL and its interdisciplinary applications? You can apply until Nov 22, 2023. Don't miss this opportunity! 🤖🌟
⏰ Deadline Approaching!
🚀 Apply by Nov 22 to become an ETH AI Center #PhD or #Postdoc!
We're hosting an EXTRA Online Q&A Session
📅Monday, Nov 20, 16:15 - 17:00 CET.
Zoom link on our website 🔗
Don't miss this chance to get the info you need!
Interested in learning how Reinforcement Learning can be applied in the real world? Come and join us at the #LaunchAIXSummit2023 tomorrow (14:30)!
At our panel, we will explore the transformative impact of RL on real-world problems.
@ETH_AI_Center #RL #AI
How to measure the closeness between policies in #RL?
In our #neurips paper, we use Optimal Transport to define Trust Regions for efficient and stable Policy Optimization... and it works!
With @antonio_terpin, Nicolas Lanzetti and @florian_dorfler
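The paper itself defines Optimal-Transport-based trust regions over full policies; as a toy illustration only (all function names, radii, and numbers below are mine, not the paper's), here is a 1-Wasserstein distance between two discrete action distributions, used as a trust-region test on a policy update:

```python
def w1_distance(p, q):
    """1-Wasserstein distance between two distributions defined on the
    same ordered, unit-spaced discrete support: the sum of absolute
    differences between their cumulative distribution functions."""
    gap, cum_p, cum_q = 0.0, 0.0, 0.0
    for pi, qi in zip(p, q):
        cum_p += pi
        cum_q += qi
        gap += abs(cum_p - cum_q)
    return gap

def in_trust_region(pi_old, pi_new, eps):
    """Accept a candidate policy only if it stays within an
    OT-style trust region of radius eps around the current one."""
    return w1_distance(pi_old, pi_new) <= eps

old = [0.7, 0.2, 0.1]   # current policy over 3 actions
new = [0.5, 0.3, 0.2]   # proposed update; W1(old, new) = 0.3
print(in_trust_region(old, new, eps=0.5))  # prints True
```

This is only a sketch of the idea on 1-D discrete distributions; the actual method works with general transport costs between policies.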
Our workshop on 'Aligning Reinforcement Learning Experimentalists and Theorists' will take place at #ICML2024!
We have one call for papers and one for ideas/problems/positions... Looking forward to reading your thoughts on bridging the gap between experimental and theoretical RL!
Hi everyone!! I just arrived in Honolulu for #ICML2023.
Looking forward to chatting about RL, Multi-agent RL, RLHF, Imitation Learning and so on over the next few days!
Have you submitted an #RL paper to #NeurIPS2024? Then consider also submitting to our ICML workshop on "Aligning Reinforcement Learning Experimentalists and Theorists".
Deadline on May 29th! @arlet_workshop
Almost two weeks ago I defended my PhD and I still cannot believe it.
I would like to thank the members of the tribunal (@KP_twitt_llo, @Joffreypelleti1, @narita_lab) for the amazing discussion and the great suggestions.
Thanks to everyone who has been part of this journey.
I am not at #ICLR2024 but @mirco_mutti and @desariky will present our work (w/ @dr_amarx) tomorrow at 16:30! We show how to do posterior sampling in RL starting from a prior specified through a partial causal graph ➡️ more edges mean better statistical efficiency 🔥
🆕 Pointer[203]: Reinforcement Learning - with Giorgia Ramponi (@gio_ramponi)
🤖 Reinforcement Learning is, together with Supervised and Unsupervised Learning, one of the three basic paradigms of Machine Learning.
Interested in learning about Imitation Learning in Mean-field games? Unfortunately, I am unable to attend #NeurIPS2023, but my co-author Pavel Kolev will be there to present our poster at 18:00 tomorrow!
Paper link:
Excited about this new preprint with Daniil and @Kristo
We show an Ω(log T) lower bound for Differentially Private Online Learning, even for finite Littlestone classes. This shows a separation between DP and non-DP online learning in the mistake-bound model.
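Schematically, the separation the tweet describes can be written as follows (my paraphrase of the announcement, not the paper's exact theorem statement):

```latex
% Non-private mistake-bound learning: for a class C with finite
% Littlestone dimension d, the Standard Optimal Algorithm makes
%     M_{\text{non-DP}}(T) \le d = O(1)
% mistakes over any horizon T.
%
% The claimed lower bound: any differentially private online learner
% for some such class C must incur
%     M_{\text{DP}}(T) = \Omega(\log T)
% mistakes, so the gap between the two settings grows with T.
```

Since a finite Littlestone class is learnable with O(1) mistakes without privacy, a mistake bound that necessarily grows like log T under DP establishes the separation.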
@AmartyaSanyal
From my point of view, you need a "clear and strong interest in research", but the problem can change (and will change) many times (from personal experience!)
I am excited to present "Provably Learning Nash Policies in Constrained Markov Potential Games" on Thursday at 11:00 at @AAMASconf, on behalf of my colleagues from @ETH, who were unable to attend: @pragnya_a, @gio_ramponi, Niao He and @arkrause.