I graduated and was honored to receive the Outstanding Doctoral Student Award! 🎓 A huge thank you to my amazing advisor @yangfeng_ji! 🧡💙 Next, I'm excited to join @RiceCompSci as a tenure-track Assistant Professor in 2024, after a postdoc at @jhuclsp working with @mdredze! 😊
📣Excited to announce that I am on the academic job market this year. My research lies in the intersection of natural language processing (NLP) and machine learning (ML), with a focus on developing interpretation techniques to make AI systems trustworthy and reliable.
How do LLMs perform in realistic clinical cases?
🔥Introducing two challenging medical QA datasets with high-quality explanations
🤖GPT-4 falls short on these challenging tasks
🤔Model explanations are promising but hard to evaluate
📄
A warm welcome to the five new faculty members who recently joined Rice CS! These new hires bring expertise in theoretical computer science, trustworthy AI, and quantum computing — plus decades of teaching experience.
📢Postdoc Position📢
Dr. Xia Hu @huxia and I are looking for a Chairman's Postdoctoral Fellow in Efficient and Trustworthy LLMs @RiceCompSci. If you are interested, please do not hesitate to apply:
My first in-person NLP conference at #NAACL2022! I can't say how wonderful it is 🤩 I just realized how much I missed during the past pandemic years... I'm so glad I made it to this one and had a great time seeing many old and new friends and chatting with amazing people 😄
✈️ 12/10-12/16, I'll be at #NeurIPS2023 and look forward to connecting. Let's meet up and chat about research and more! 🙂☕️
📢 I am also looking for PhD students to join my group at Rice CS @RiceCompSci in Fall 2024. Please DM if you want to chat at NeurIPS!
Our paper “Adversarial Training for Improving Model Robustness? Look at Both Prediction and Interpretation” was accepted at #AAAI2022. Many thanks to my advisor @yangfeng_ji ❤️ More details are coming soon!
The most exciting thing I did this semester was co-designing and teaching the Interpretable Machine Learning course with my advisor @yangfeng_ji. It was so rewarding! ❤️ So glad I had this incredible experience during my PhD studies. 😄
A super fun project investigating the prediction behavior of LLMs from a psychological perspective🧠
🚨GPT-4 can fall into a cognitive trap and make mistakes even when it has the correct knowledge, like us humans🤯
✅A simple hint helps
Check out our paper for more details👇
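For illustration, here is a hedged sketch of what that "simple hint" pattern can look like in a prompt. The bat-and-ball question below is a classic cognitive-reflection item used purely as a hypothetical example; it is not necessarily one of the paper's actual test cases or prompts.

```python
# Hypothetical illustration (not the paper's actual prompts): a classic
# cognitive-reflection question where the intuitive answer is wrong.
question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

# The plain prompt often elicits the intuitive-but-wrong "$0.10".
plain_prompt = question

# A brief warning about the trap can recover the correct "$0.05",
# even though the model already has the arithmetic knowledge.
hinted_prompt = (
    "Note: this question may be trickier than it appears; "
    "think carefully before answering.\n\n" + question
)
```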
Very excited to share that our paper got accepted at #NAACL2021 🎉 Many thanks to my advisor @yangfeng_ji and co-authors (my internship mentors at IBM last summer) @chulaka_g, @JatinGanhotra, Song Feng, Hui Wan, and Sachindra Joshi 🥰
It is natural for rationales to inadvertently expose the label. What we care about is the new information in rationales that justifies the label beyond mere leakage.🧐
How do we measure the new information when label leakage exists? Check out our robust evaluation metric, RORA!📢
📢 New Preprint 📢 Do you realize your rationale evaluation model might favor "cheating" rationales that merely parrot the answer? Check out RORA, our new information-theoretic metric for rationale evaluation that is robust against label leakage.
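As a rough illustration of the idea (my own sketch, not the RORA implementation): compare how much a rationale raises the label's likelihood relative to a leakage-only baseline that simply restates the answer. The evaluator `p_label` below is a hypothetical stand-in for a trained model.

```python
import math

def p_label(label: str, context: str) -> float:
    """Hypothetical evaluator: probability of `label` given `context`.
    In practice this would be a trained model; it is a stub here."""
    raise NotImplementedError("plug in an evaluator model")

def leakage_adjusted_gain(question: str, rationale: str, label: str) -> float:
    """Score the *new* information a rationale carries about the label,
    beyond what mere label leakage provides. A rationale that only
    parrots the answer scores near zero."""
    leakage_only = f"The answer is {label}."  # baseline that leaks the label
    with_rationale = math.log(p_label(label, f"{question} {rationale}"))
    with_leakage = math.log(p_label(label, f"{question} {leakage_only}"))
    return with_rationale - with_leakage
```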
🥳 Excited to share our work at #EMNLP2022 Findings! We propose a two-phase self-training framework for few-shot MR-to-Text generation.
Fig. 1 shows examples of our pseudo-labeled data, Fig. 2 illustrates our framework, and Fig. 3 compares our model's outputs with those of other models.
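As a very rough sketch of what a two-phase self-training loop can look like (generic illustration only; `train`, `generate`, and `confidence` are hypothetical stand-ins, and the paper's actual phases may differ):

```python
from typing import Callable, List, Tuple

Pair = Tuple[str, str]  # (meaning representation, text)

def self_train(
    labeled: List[Pair],
    unlabeled_mrs: List[str],
    train: Callable[[List[Pair]], object],           # fine-tune a seq2seq model
    generate: Callable[[object, str], str],          # decode text for an MR
    confidence: Callable[[object, str, str], float], # score a pseudo-pair
    rounds: int = 2,
    threshold: float = 0.9,
) -> object:
    # Phase 1: fit an initial model on the few gold (MR, text) pairs.
    model = train(labeled)
    # Phase 2: iteratively pseudo-label unlabeled MRs and retrain.
    for _ in range(rounds):
        pseudo = []
        for mr in unlabeled_mrs:
            text = generate(model, mr)
            if confidence(model, mr, text) >= threshold:
                pseudo.append((mr, text))  # keep high-confidence pairs only
        model = train(labeled + pseudo)
    return model
```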
UVa ILP group dinner, the first one since 2020.
One group member is leaving us for a bright future, and another will rejoin us for her PhD journey! @UVA_ILP
Our proposed metric REV penalizes vacuous rationales, provides finer-grained evaluations of the new, label-relevant information in rationales, and offers deeper insight into models' reasoning and prediction processes, including chain-of-thought.
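In equation form, the intuition behind such an information-theoretic score can be sketched roughly as follows (my notation, assumed rather than taken from the paper): compare the label's log-likelihood given the rationale against that given a vacuous baseline rationale that merely restates the question and answer.

```latex
% Hedged sketch, assumed notation: x = input, y = label, r = rationale,
% b = vacuous baseline rationale; g, g' are evaluator models.
\mathrm{REV}(x, y, r) \approx \log g(y \mid x, r) - \log g'(y \mid x, b)
```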
Hanjie has done impressive work on explainable NLP and its applications to model robustness and uncertainty. She is also passionate about both teaching and research. Check out her webpage if you think she may be a good fit for your department!
If you are applying to a CS PhD program, check out our new -- we hope it can be a helpful resource for you! If you have gone through this process, we invite you to share your statement too!