Today, we presented our
#MathVista
() at
#ICLR2024
in Vienna! 🌟
We are thrilled by the tremendous progress in math reasoning in the era of LLMs and VLMs. MathVista has become one of the most reliable benchmarks for probing their abilities in visual math…
🚀Excited to release our 112-page study on math reasoning in visual contexts via
#MathVista
. For the first time, we provide both quantitative and qualitative evaluations of
#GPT4V
,
#Bard
, & 10 other models.
📄✨Full paper:
🔗Proj: …
🔥Excited to release LLaMA-Adapter! With only 1.2M learnable parameters and 52K instruction data, LLaMA-Adapter turns a
#LLaMA
into an instruction-following model within ONE hour, delivering high-quality responses!
🚀Paper:
🚀Code:
🔥Thrilled to release LLaMa-Adapter Multimodal!
🎯Now supporting text, image, audio, and video inputs powered by
#ImageBind
. 🧵6
💻Code for inference, pretraining, and finetuning ➕ checkpoints:
demo:
abs:
🎉Exciting news: LLaMA-Adapter is now fully unlocked! 🧵6
1⃣ As a general-purpose
#multimodal
foundation model, it integrates various inputs like images, audio, text, video, and 3D point clouds, while providing image, text-based, and detection outputs. It uniquely accepts the…
🚀Introducing
#LLaMA2
-Accessory - an advanced open-source toolkit for large language models.
Evolved from LLaMA-Adapter, we now support more datasets, tasks, visual encoders, and efficient optimization methods.🧠
🔗Code:
💡Key Features:
🎯 Pre-training…
🎉New paper! The survey of deep learning for mathematical reasoning (
#DL4MATH
) is now available. We've seen tremendous growth in this community since 2018, and this review covers the tasks, datasets, and methods from the past decade.
Check it out now:
LLaMA-Adapter V2, the next-gen multi-modal instruction model, boasts model sizes several times larger than 7B! 🌟🔥
Chatbot systems, get ready for a major upgrade! 🤖💬
Stay tuned! Technical report & models coming soon. 📄🔜Keep up to date!
🔗
🚀Meet Chameleon! An innovative plug-and-play framework enhancing
#GPT4
and
#ChatGPT
like
#AutoGPT
for compositional reasoning, blending off-the-shelf tools with tailored LLM models 🔧✨🧠. New SOTA on
#ScienceQA
and TabMWP! 📈
🔗
📜
🚀 Introducing the LLaMA-Adapter, now available on
@huggingface
!
🔗
🎉 Feel free to explore and experiment with our LLaMA-Adapter. We're eager to hear your feedback!
💥 Stay tuned for the upcoming second version - even more powerful and feature-packed!
🎉 Thrilled to have our MathVista work accepted at
#ICLR2024
as an Oral presentation!
Explore our work:
🔍 Project:
🤗
@huggingface
Dataset
@_akhaliq
:
💻 Code:
Deepest gratitude to our shining team: 👏🌟…
I am thrilled to have defended my PhD and finally earned the title of Doctor🧑🎓. It's been a truly rewarding journey at
@UCLAComSci
. I'm so fortunate and grateful for the invaluable mentorship from Prof.
@kaiwei_chang
@uclanlp
. He has always been incredibly encouraging, helpful, and…
Congrats 🎉 to the newly titled Dr. Lu
@lupantech
on defending his thesis on "Mathematical Reasoning with Language Models"! 🧮 Pan has published a series of works on quantifying and improving math and scientific reasoning ability in LLMs. Some highlights:
🔥Boost your GPT-3 with our ICLR-23 paper on PromptPG! The first of its kind, PromptPG uses RL to select optimal examples for GPT-3, leading to a 5.31% gain on the TabMWP dataset of math word problems. Don't miss out on this game-changing solution!
👉 🧵1/7
🔍 Do Multi-modal LLMs Truly Understand Diagrams in Visual Math Problems?
🧐 Interest in visual math reasoning has surged in the era of Multi-modal LLMs (
#MLLMs
). Despite their promising potential, it remains uncertain whether MLLMs utilize visual or textual shortcuts to…
MathVerse
Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?
The remarkable progress of Multi-modal Large Language Models (MLLMs) has garnered unparalleled attention, due to their superior performance in visual contexts. However, their capabilities in
🔥 Introducing
#SPHINX
🦁: an all-in-one multimodal LLM with a unified interface that seamlessly integrates domains, tasks, & embeddings. 🧵N
👋 Explore the
@Gradio
demo
@_akhaliq
:
Dive into the open resources!
🤗 Model
@huggingface
:…
🎉 Just reached 1000 citations on Google Scholar! Grateful to be part of a community that values and engages with my research. Here's to continued curiosity and exploration! 🔍
🤔 Ever wondered why foundation models like LLMs & LMMs are only tested on textual math reasoning benchmarks?
🔍 Dive into our
#MathVista
for a fresh perspective: !
🌟 Introducing
#MathVista
: A groundbreaking benchmark for visual mathematical reasoning –…
🌟Last week, I was honored to present our latest work
#Chameleon
to the Reasoning Team at Google Brain
@DeepMind
. It's encouraging to witness tool-augmented LLMs like Transformer Agents
@huggingface
and Chameleon garnering significant attention. 🧵6
Slides:
Model editing has emerged as an effective way to reduce hallucinations in LLMs without resource-intensive retraining.
🤯However, our study, led by
@JasonForJoy
,
@kaiwei_chang
, &
@VioletNPeng
, reveals that current methods inadvertently impair the general skills of LLMs.…
🚨Struggling to select examples for GPT-3? Try our PromptPG, the first work that applies RL to select in-context examples for GPT-3! PromptPG achieves a gain of 5.31% on TabMWP, a new dataset of tabular math word problems! Check out the data and code:👇
🧵1/7
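For readers curious how RL-based example selection works in principle, here is a toy sketch of the PromptPG idea. Everything here is a stand-in: features are one-hot instead of learned embeddings, and the reward function is hard-coded, whereas the real system rewards the policy when GPT-3's answer on a training problem is correct.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: the policy must learn which of 6 candidate in-context
# examples to prepend to the prompt. Features are one-hot here for
# clarity; the real method scores candidates with learned embeddings.
n_candidates = 6
feats = np.eye(n_candidates)
theta = np.zeros(n_candidates)   # linear policy parameters

def policy_probs(theta):
    logits = feats @ theta
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Stand-in reward: 1 if the chosen example leads to a correct answer.
# We pretend candidate 3 is the helpful one; in PromptPG the reward
# comes from checking GPT-3's actual prediction on a training problem.
def reward(i):
    return 1.0 if i == 3 else 0.0

lr = 0.5
for _ in range(2000):
    p = policy_probs(theta)
    i = rng.choice(n_candidates, p=p)
    # REINFORCE: gradient of log pi(i) under a softmax policy
    theta += lr * reward(i) * (feats[i] - p @ feats)

print(int(np.argmax(policy_probs(theta))))  # 3 — the policy concentrates on the helpful example
```

The gradient `feats[i] - p @ feats` is the exact score function of a softmax policy, so rewarded picks raise their own logit while suppressing the rest.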
🚨Thrilled to have one paper accepted to
#NeurIPS2022
! We construct a new benchmark, ScienceQA, and design language models to learn to generate lectures and explanations as the chain of thought to mimic the multi-hop reasoning process. Data and code will be coming soon!
📢📢Excited to have one paper accepted to
#NeurIPS2022
! We present a new dataset, ScienceQA, and develop large language models to learn to generate lectures and explanations as the chain of thought (CoT). Data and code are public now! Please check👇👇
🔥 Exciting Update! We've manually evaluated
#GPT4V
using the playground chatbot on
#MathVista
, our newest benchmark for visual mathematical reasoning.
🚀
#GPT4V
soared with a 15.1%⬆️ improvement over
#Bard
, setting a new record at 49.9%! 🎉
🌐
Yet,…
Our
#Chameleon
ranked
#1
among 1682 AI papers last week by
@alphasignalai
, emphasizing the significant impact our work has made.
#Chameleon
is a plug-and-play reasoning framework, enabling LLMs to utilize diverse tools.
🔗
🎉 More:
🤖 Could
#LLMs
develop emotional intelligence to understand human social interactions?
Introducing KokoMind 🦍: a benchmark to evaluate how
#gpt4
,
#chatgpt
, &
#claude
interpret conversations and relations, and offer insightful advice.
💥 Demo:
Put ChatGPT at a cocktail party🥂.
Can it
- understand people's conversations, gestures
- figure out their relations,
- and even chime in with social advice?
🦍Announce KokoMind.
🌟Check out this demo! More at
#AI
#GPT4
#ChatGPT
#OpenAI
#Shrinking
🧵
🚀🎉 Introducing X-Accessory's new member: Large Diffusion Transformer (Large-DiT)! 🎆✨
🔗
💪 We're pushing boundaries by expanding diffusion transformers to 7B parameters. Here are our features: 🧵6
1⃣ Model Scaling-up 📈: Scale to 3B and 7B by merging…
Can machines answer multi-modal math word problems? We proposed a new task, Icon Question Answering
#IconQA
, to deal with it!
Details are available below:
Paper:
Project:
Code:
Excited to announce the AI for Math Workshop at
#ICML2024
@icmlconf
! Join us for groundbreaking discussions on the intersection of AI and mathematics. 🤖🧮
📅 Workshop details:
📜 Submit your pioneering work:
🏆 Take on our…
🤖In sciences and finance, we often engage in statistical and causal reasoning with structured data. Ever dreamed of
#LLMs
doing the heavy lifting, clearing the path from the maze of complex and error-prone tasks? 🤯
Hold that thought! 🛑 Our findings reveal that even GPT-4…
Are LLMs Capable of Data-based Statistical and Causal Reasoning?
In this work, we propose a benchmark QRData (Quantitative Reasoning with Data) to evaluate models' capability in statistical and causal reasoning with real-world data.
🌐:
I am honored to win the
@Qualcomm
Innovation Fellowship! A heartfelt thank you to
@kaiwei_chang
for your kind words and encouragement. I am grateful to our team, including
@liujc1998
and Professor
@HannaHajishirzi
. This achievement wouldn't have been possible without you all! ❤️
🔥Thrilled to announce that our LLaMA-Adapter has been featured in Lit-LLaMA by
@LightningAI
🦙🦙
🚀 Check out our LLaMA-Adapter here:
⚡️ Explore Lit-LLaMA on GitHub:
Progress update!🦙🔥🤓
Lit-LLaMA now implements the LLaMA-Adapter method for efficient fine-tuning 🔧⚡️
The core idea can be implemented in about 11 lines of code🤯 (see screenshot)
Link to repo👉
Link to Adapter paper👉
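For a sense of why the core idea fits in so few lines, here is a rough NumPy sketch of LLaMA-Adapter's zero-init gated attention (variable names are mine; the real implementation works on PyTorch tensors inside LLaMA's frozen attention blocks).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def zero_init_attention(q, k, v, prompt_k, prompt_v, gate):
    """Attention over the original tokens plus learnable adaption
    prompts, with the prompt contribution scaled by a gate that is
    initialized to zero, so training starts from the frozen model."""
    d = q.shape[-1]
    s_tok = q @ k.T / np.sqrt(d)            # (T, T) token scores
    s_prm = q @ prompt_k.T / np.sqrt(d)     # (T, P) prompt scores
    a_tok = softmax(s_tok)                  # vanilla attention weights
    a_prm = np.tanh(gate) * softmax(s_prm)  # gated prompt weights
    return a_tok @ v + a_prm @ prompt_v

T, P, d = 4, 2, 8
rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(T, d)) for _ in range(3))
prompt_k, prompt_v = rng.normal(size=(P, d)), rng.normal(size=(P, d))

# With gate = 0, tanh(0) = 0 kills the prompt term: the adapter is a
# no-op at initialization, which is what stabilizes early training.
out = zero_init_attention(q, k, v, prompt_k, prompt_v, gate=0.0)
base = softmax(q @ k.T / np.sqrt(d)) @ v
print(np.allclose(out, base))  # True
```

Only the adaption prompts and the gate are trained, which is how the learnable-parameter count stays around a million rather than billions.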
💥💥Update Alert! Radar graphs & leaderboard on
#MathVista
now feature detailed scores for the
#Gemini
family models. 🚀
🔍 Insight: Gemini Ultra leads the pack, outperforming GPT-4V by 3.1%! Yet, each model shines uniquely in various math reasoning & visual contexts.
🙏 Big…
Privileged to have the opportunity to give a guest lecture in the
#NLP
course
@CS_UCLA
, instructed by Prof.
@kaiwei_chang
. I really enjoyed it and am so glad to share recent advancements in mathematical reasoning and commonsense reasoning.🧵3
🔗Check out the slides:
Hey Friends! 🎉 Excited to be at
#NeurIPS2023
! 🚀 I’ll be presenting a paper 📄, co-organizing the MATH-AI workshop 🧮, and sharing three collaborative projects. Can't wait to meet you in New Orleans 🎭 and explore the AI advancements in math, science, and more! 🤖🧪
👇1⃣2⃣3⃣4⃣…
Excited to see the release of Gemini!
It is even more exciting to see that Gemini
@google
features MathVista for evaluating math reasoning in visual contexts and Geometry3K for evaluating geometry reasoning!!
Congratulations and thanks
@GoogleDeepMind
,
@GoogleResearch
, and
@Google
!…
I’m very excited to share our work on Gemini today! Gemini is a family of multimodal models that demonstrate really strong capabilities across the image, audio, video, and text domains. Our most-capable model, Gemini Ultra, advances the state of the art in 30 of 32 benchmarks,…
Spent a fantastic weekend at Lake Arrowhead with the
@uclanlp
group! ❄️🏔️⬆️ Enjoyed scenic drives, delicious meals, engaging conversations, and brainstorming sessions. Truly inspiring! 🚗🥘😋💬 🖼️🧠💡
🌟 Excited about the releases of the
#ChatGPT
App and
#Zelda
game?
🚀 Check out the power of our multimodal LLaMA-
#Adapter
, with a performance that echoes the potential of the visual
#GPT4
.
💥 Stay tuned for the upcoming V2 demo, multimodal Arena, checkpoints, and much more!
🤯So thrilled to have
@AnthropicAI
benchmark their latest, powerful Claude 3 models on our
#MathVista
for visual math reasoning!
It's encouraging to see the rapid progress in (multimodal) LLMs, especially in the math and science fields! 💥
🤗 Our
@huggingface
Data:…
Today, we're announcing Claude 3, our next generation of AI models.
The three state-of-the-art models—Claude 3 Opus, Claude 3 Sonnet, and Claude 3 Haiku—set new industry benchmarks across reasoning, math, coding, multilingual understanding, and vision.
🔥Thrilled to see our
#LLaMA
-Adapter featured in
@HuggingFace
's "Spaces of the Week"! 🎉
Introducing LLaMA-Adapter V2, our cutting-edge multi-modal instruction model! Explore demo examples here: 💡
🚀Stay tuned for the technical report and model release!
It has been a wonderful day at Open House
@allen_ai
🍺🍖🌊. I met a lot of great people and got inspiring advice. Many thanks to the great efforts of the operations team for preparing all of it!
🌟Powered by
#DALLE2
,
#LLM
unveils the potential for Multimodal Procedural Planning (MPP): generating coherent and authentic multimodal plans with multiple steps to reach high-level goals.
Explore our latest work:
abs:
data & code:
🎉 Exciting news! Our
#MathVista
is excelling with the latest advances in vision-language models (VLMs). Grok-1.5V by
@xai
achieves a 52.8% score, surpassing leading models such as GPT-4V, Claude 3 Opus, and Gemini Pro 1.5!
🔗 Visit our project page:
👀…
Congratulations and thanks to
@MistralAI
for releasing the
#MoE
model to the community.
Our LLaMA2-Accessory now features Mixtral-8x7b with a chatbot demo, available on
@Gradio
!
Try the Chatbot:
http://106.14.127.192/
For more implementation details:
📖 Documentation:…
📢 Attention
#NLProc
community!
Submit and showcase your research at the 4th Southern California Natural Language Symposium (SoCal NLP) 📜
🗓️ Submission Deadline: Oct. 21, 2023, 11:59 PM PT
🔗 More info:
#SoCalNLP
#CallForPapers
LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model
Compared to the original LLaMA-Adapter, LLaMA-Adapter V2 can follow open-ended multi-modal instructions by merely introducing 14M parameters over LLaMA
abs:
github:
🚀We've just launched
#SciBench
, a sophisticated, college-level benchmark. It uniquely evaluates the capabilities of LLMs in tackling scientific problem-solving.
SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models
paper page:
Recent advances in large language models (LLMs) have demonstrated notable progress on many mathematical benchmarks. However, most of these…
In 2021, we explored early research in geometry: our Inter-GPS, a neuro-symbolic solver, reached the average human-level score for the first time.🎉
Now,
@GoogleDeepMind
's AlphaGeometry marks a historic breakthrough: Olympiad-level skill!🚀
🔎For more:
🔗…
Introducing AlphaGeometry: an AI system that solves Olympiad geometry problems at a level approaching a human gold-medalist. 📐
It was trained solely on synthetic data and marks a breakthrough for AI in mathematical reasoning. 🧵
Happy to receive the NeurIPS 2022 Scholar Award! I really appreciate all the support I get from the community, and I will devote myself to making contributions to the community!
@NeurIPSConf
🍻See you in New Orleans!
🚀
@google
is introducing new updates to aid in learning math and science, especially in visual contexts: .
💥 We're proud to spotlight our commitment to math and science over the past years, with projects like
#MathVista
,
#Chameleon
, and
#ScienceQA
.
1️⃣…
🚨 Attention! I'm presenting the 🦎
#Chameleon
paper at Booth 320 from 10:45 to 12:45 at
#NeurIPS23
. You're welcome to stop by for a chat! ☕️😉🤖🧲💡
For more details, check out our project at .
Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models
Chameleon with GPT-4 achieves an 86.54% accuracy on ScienceQA, significantly improving upon the best published few-shot model by 11.37%; using GPT-4 as the underlying LLM, Chameleon achieves a 17.8%…
It is remarkable that Gemini achieves a new SOTA of 53.0% on MathVista (), a challenging benchmark for math reasoning in visual contexts. We are honored that our proposed
#MathVista
is advancing the development of the newest and most capable AI models.
In image understanding, Gemini performs well across all the benchmarks we examined, with the Ultra model setting new state-of-the-art results in every benchmark.
🚀OpenAI is releasing the latest function and tool-calling update for
#GPT4
!
Just two months back, we introduced
#Chameleon
🦎, an innovative compositional reasoning framework. It uses LLMs as a planner to generate diverse programs, integrating various tools including LLMs,…
It was great to attend the
#NeurIPS2022
poster session and present our work
@UCLA
@ASU
@allen_ai
in person🎉. I’m excited that I met many great people and received plenty of insightful advice and comments. Thanks to everyone for your interest in our work!🍻
🎯It is time to submit your work on mathematical reasoning to the 2nd MATH-AI workshop!
As the workshop is non-archival, recently published papers and papers under review are welcome. ⏰The submission deadline is Sep 29⏰.
✅✅More information:
🎉🎉I am really happy that the 2nd MATH-AI workshop was such a great success. Very encouraged that so many people are interested in the domain and that the community is growing rapidly. Huge thanks to the speakers, panelists, and organizers! See you all at future events!!🍻
🎉 Exciting News! X-Accessory now welcomes a new addition - Mistral-MoE! 🌟
Discover it here:
🚀 Tap into the power of Mistral-MoE with X-Accessory's robust framework, now featuring inference and LoRA fine-tuning via model parallelism.
🌐…
We will be organizing the 1st Tool-Augmented VIsion (TAVI) Workshop at
#CVPR2024
. We are looking forward to having an exciting list of keynote speakers covering various topics about tool-use and retrieval augmented models.
More details at:
We're dedicated to
#OpenSource
, confident that it will profoundly enrich the community.🌟
Thrilled to see our recent work, LLaMA-Adapter, and its subsequent developments positively impacting the community.🚀
Stay updated with continuous improvements: 📌
It was a great month for open source: So many LLMs came out that it's become quite overwhelming to keep track of it all.
So, in this month's Ahead of AI issue, I am sharing resources and research insights on the latest open-source LLMs & datasets!
🚨Call for Papers🚨 Submission to the
#NeurIPS2022
MATH-AI Workshop will be due on Sep 30, 11:59pm PT (2 days after ICLR😆). The page limit is 4 pages (not much workload🤩). Work both in progress and recently published is allowed. Act NOW and see you in
#NewOrleans
!🥳🥳🍻
OneLLM: One Framework to Align All Modalities with Language
paper page:
Multimodal large language models (MLLMs) have gained significant attention due to their strong multimodal understanding capability. However, existing works rely heavily on…
An excellent blog on Controllable Neural Text Generation from
@lilianweng
! It's important to consider ways to reduce the hallucinations of LLMs and better reflect human intentions, especially given their current success and limitations.
👉
#ChatGPT
#LLM
Thrilled to join the live event, thanks to
@LightningAI
's kind invitation! 🌟 Peng and I will share the insights behind the LLaMA-Adapter series.
📅 event:
📚 abs-1:
📚 abs-2:
💻 code:
Excited to be at
#AAAI23
on-site! Can't wait to catch up with old friends and make new ones.
📢I'll give an oral presentation on
#ScienceQA
() at
@knowledgenlp
Workshop on Monday, Feb 13, 2:15-3:15 pm in Room 144B.
If you're around, let's grab a coffee!
📢📢Welcome to the 2nd
#MATH
-AI workshop tomorrow (Sunday, Dec 03) in Rooms 293-294 at
#NeurIPS2022
if you are interested in math reasoning and AI! There are 6 invited talks, 3 contributed talks, 1 poster session, and 1 panel discussion.
🪜Full program:
Excited to see the breakthrough achieved by
@Apple
's MM1 model, as evidenced by our
#MathVista
(), the comprehensive benchmark for math reasoning in visual contexts!
Few-shot mixed-resolution CoT: we can keep the strong few-shot capabilities learned from multimodal pre-training even after instruction-tuning: MM1-30B-Chat achieves 39.4 zero-shot on MathVista, but with eight-shot CoT mixed-resolution prompting we can achieve 44.4.
🧵1/6 Experience the magic of LLaMA-Adapter! Transforming real-world inputs like text, images, videos, audio, and 3D point clouds into engaging text. The reality you know, reimagined through AI.
🖼️📽️🔉🌐➕📝 ➡️➡️🦙➡️➡️ 📝
Can a language model help you with your math homework? Not on its own, but maybe with the help of a Python interpreter!
In our EMNLP paper we present 📜 Līla and 🤖 Bhāskara, a math reasoning benchmark and model.
📄:
🔗:
1/🧵
Absolutely thrilled to share that Tony Xia
@CS_UCLA
has been accepted into
@Stanford
's Computer Science MS program! It was an honor to write his recommendation and have mentored such a talented undergraduate since 2020. Wishing him all the best as he pursues his academic dreams.
Excited to organize the 2nd MATHAI workshop
@NeurIPSConf
with our great team❤️! The workshop will be in New Orleans🏙️ in person, on December 03, 2022. The submission is open now🧲!
#NeurIPS2022
🚨We are organizing the 2nd MATHAI workshop at NeurIPS!
Check it out if you're interested in AI for math, and machine reasoning in general🤯!
We have a great lineup of speakers & panelists!
See more in call for papers: 👇
🥳Thrilled to be in New Orleans for
#NeurIPS
! This year, I will present one paper (ScienceQA) + 2 WS papers (PromptPG, Lila). And I am co-organizing the 2nd MATH-AI workshop!
☕️Excited to meet you! DM me if you want to grab a coffee and chat about MathAI, LLMs, and trustworthy NLP!!👇
An insightful fireside chat by Sam Altman! Looking forward to the potential of generative AI models that facilitate solving the common challenges that all human beings face!
#OpenAI
#GenAI
🎉YES! It is exciting to see the growing community on Math&AI!
Thanks to the organizing team
@Swarooprm7
@wellecks
@Yuhu_ai_
@HannaHajishirzi
@percyliang
for their great efforts to make this happen! 👏👏
The acceptance notification will be announced on October 20. Stay tuned! 😆
Compared to the 1st MATHAI workshop 1 year ago, the number of submissions this time almost doubled! Glad to see the field is growing rapidly 🙌
Also there are many mind-blowing works 🤯🤯 Stay tuned!
The data visualization page is now here at . You can play with it now to see what ScienceQA looks like🧐. Data and code will also be ready in the next couple of weeks.🥳
🗳️🗳️If you've attended
#EMNLP
in the past 3 years, please check your email to vote for the SIGDAT VP-elect by 3/24. Your vote is important for helping the
#NLP
community thrive!
I am honored to be nominated by SIGDAT (the org that oversees EMNLP) to run for VP-elect with other awesome candidates who share the goal of improving our community. Please check your email to vote by 3/24.🗳️ See details:
With a mere 1.2 million learnable parameters, LLaMA-Adapter demonstrates superior reasoning capacity on
#ScienceQA
, surpassing a diverse range of multimodal models and LLMs, such as the fully finetuned MM-CoT and few-shot GPT-3.
Thrilled to see OmniQuant – a crucial development for compressing large language models! It's astounding that it can quantize 7B-70B LLaMa-2 models in just 1 to 16 hours using 128 samples, and it even supports mobile phones.
🔗 Code:
Thanks to AK
@_akhaliq
for the post.
🔥 Excited to introduce OmniQuant - An advanced open-source algorithm for compressing large language models!
📜 Paper:
🔗 Code:
💡 Key Features:
🚀Omnidirectional Calibration: Enables easier weight…
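As background for what methods like OmniQuant improve upon, here is a minimal weight-quantization sketch: plain round-to-nearest, per-channel int4. OmniQuant goes further by learning clipping and scaling parameters from ~128 calibration samples; this shows only the baseline quantize/dequantize step such methods build on (function names are my own).

```python
import numpy as np

def quantize_per_channel(w, bits=4):
    """Round-to-nearest symmetric quantization, one scale per output row."""
    qmax = 2 ** (bits - 1) - 1                        # 7 for int4
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Reconstruct approximate float weights from integer codes.
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(8, 16)).astype(np.float32)
q, scale = quantize_per_channel(w)
w_hat = dequantize(q, scale)
print(float(np.abs(w - w_hat).max()))  # reconstruction error is bounded by scale/2 per row
```

The error bound follows from round-to-nearest: each weight lands within half a quantization step of its code, so learning better per-channel scales (as OmniQuant does) directly tightens that bound.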
📽️LLaMa-Adapter Multimodal supports [Video] input.
👀From cinematic masterpieces to topical news footage, it's designed to perceive and appreciate the diverse content in videos.
Stay tuned for our live demo:
🧵2/6
Super excited to have
@xinyun_chen_
present the great work on Analogical Reasoning with
@denny_zhou
.
Don't miss the insightful talk this afternoon at the MathAI Workshop at
#NeurIPS2023
.
⏰ 4:00pm - 4:30pm
📍 Room 217-219
A simple yet effective approach to fill the performance gap between zero-shot and few-shot prompting
Xinyun Chen
@xinyun_chen_
is going to present our recent work LLM analogical reasoning () this afternoon in the exciting
#MathAI
workshop of
#NeurIPS2023
.…
Congrats to
@UCLA
Asst. Prof.
@adityagrover_
and incoming Asst. Prof. Saadia Gabriel
@GabrielSaadia
of
@CS_UCLA
on being named to
@Forbes
' 30 Under 30 list in science. Grover and Gabriel were each recognized for their work using artificial intelligence.
💥Congrats to Sean on launching the L3 Lab at CMU! I am honored to have collaborated with him on two papers and co-organized three MathAI workshops. He is definitely the rising star 🚀 in the field, and I have learnt a lot from his great vision and excellent leadership!
Announcing the L3 Lab at CMU!
We focus on Learning, Language, and Logic, including:
- Principles of ML for language
- ML in high-trust areas, such as verifying math and programs
- ML systems that improve over time
Recruiting PhD students for fall 2024!