Julie Kallini ✨

@JulieKallini

600 Followers · 337 Following · 7 Media · 152 Statuses

CS PhD @StanfordNLP 🌲 Previously: SWE @Meta, Class of '21 @PrincetonCS

Joined January 2018
Pinned Tweet
Julie Kallini ✨ @JulieKallini · 3 months
Do LLMs learn impossible languages (that humans wouldn’t be able to acquire) just as well as they learn possible human languages? We find evidence that they don’t! Check out our new paper… 💥 Mission: Impossible Language Models 💥 arXiv: 🧵
[image]
12 replies · 114 retweets · 479 likes
Julie Kallini ✨ @JulieKallini · 10 months
Excited to finally announce that I am joining Stanford as a CS PhD student! I feel deeply honored to be joining such an amazing group of researchers @StanfordNLP. 🌲
14 replies · 6 retweets · 150 likes
Julie Kallini ✨ @JulieKallini · 9 months
The best part about this famous Barbie meme is that it presents organic evidence of the constituent structure within coordination phrases
[image]
2 replies · 12 retweets · 119 likes
Julie Kallini ✨ @JulieKallini · 5 months
ChatGPT: Sorry, I can't draw copyrighted characters like Sonic the Hedgehog. Also ChatGPT: Wow, Sonic the Hedgehog sounds like a fun and original character!
[image] [image]
3 replies · 22 retweets · 105 likes
Julie Kallini ✨ @JulieKallini · 3 months
You may remember Noam Chomsky’s NYT article on ChatGPT from last year. He and others have claimed that LLMs are equally capable of learning possible and impossible languages. We set out to empirically test this claim.
Quoted: The New York Times @nytimes · 1 year
In Opinion “The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching,” Noam Chomsky, Ian Roberts and Jeffrey Watumull write in a guest essay.
(34 replies · 145 retweets · 409 likes)
1 reply · 1 retweet · 39 likes
Julie Kallini ✨ @JulieKallini · 3 months
We create synthetic impossible languages of differing complexity by modifying English with word orders and grammar rules not seen in natural languages. We assess the capacity of GPT-2 models to learn each language by conducting experiments at various stages throughout training.
[image]
3 replies · 2 retweets · 33 likes
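To make the idea of a synthetic perturbation concrete, here is a minimal Python sketch of one unattested word-order rule, full sentence reversal. This is only an illustration in the spirit of the perturbations described (which also include shuffles and count-based rules), not the paper's actual code.

```python
# Illustrative sketch: turn English token sequences into an "impossible"
# language via a deterministic, unattested word-order rule.

def full_reverse(tokens: list[str]) -> list[str]:
    """Reverse the entire sentence; no natural language does this."""
    return tokens[::-1]

def partial_reverse(tokens: list[str]) -> list[str]:
    """Reverse only the second half of the sentence."""
    mid = len(tokens) // 2
    return tokens[:mid] + tokens[mid:][::-1]

sentence = "the students read the paper carefully".split()
print(full_reverse(sentence))     # ['carefully', 'paper', 'the', ...]
print(partial_reverse(sentence))  # ['the', 'students', 'read', 'carefully', ...]
```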
Julie Kallini ✨ @JulieKallini · 2 months
I’m so excited to be speaking at this very cool NLP-GenAI seminar! My talk is tomorrow at 9am PST.
Quoted: Onur Keleş @Onr_Kls · 2 months
📣 The program for the first two days of @bouncompec NLP-GenAI series is out! Join us today and tomorrow (EST 10-1PM). Register to receive a link for the talks:
[image]
(1 reply · 2 retweets · 9 likes)
0 replies · 2 retweets · 29 likes
Julie Kallini ✨ @JulieKallini · 3 months
We find that GPT-2 struggles to learn impossible languages. In Experiment 1, we find that models trained on exceedingly complex languages learn the least efficiently, while possible languages are learned the most efficiently, as measured by perplexity over training steps.
[image]
3 replies · 0 retweets · 29 likes
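For reference, a minimal sketch of the perplexity measurement described here, using the standard Hugging Face GPT-2 API. The model name and text are placeholders (the public "gpt2" weights stand in for an intermediate training checkpoint), and the paper's actual evaluation pipeline will differ.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Load a GPT-2 checkpoint and score a sample of text.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "the students read the paper carefully"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels=input_ids makes the model return the mean
    # next-token cross-entropy loss over the sequence.
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print(f"perplexity: {math.exp(loss.item()):.2f}")
```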
Julie Kallini ✨ @JulieKallini · 4 months
My biggest 2023 PhD achievement was making so many new friends ❤️ thank you for a wonderful year!
0 replies · 1 retweet · 25 likes
Julie Kallini ✨ @JulieKallini · 3 months
A big thank you to my wonderful co-authors: @isabelpapad, @rljfutrell, @kmahowald, @ChrisGPotts. This tweet will self-destruct in 5 seconds. 🔥
0 replies · 1 retweet · 23 likes
Julie Kallini ✨ @JulieKallini · 5 months
If you want to hear me crack silly jokes like this in real time (or just talk about research), come see me at #EMNLP2023!
Quoted: the Sonic the Hedgehog joke above.
2 replies · 4 retweets · 22 likes
Julie Kallini ✨ @JulieKallini · 3 months
For more experiments targeting specific patterns, take a look at the paper! We believe that our results challenge Chomsky’s claims, and we hope to open more discussions of LLMs as models of language learning and the possible/impossible distinction for human languages.
1 reply · 0 retweets · 22 likes
Julie Kallini ✨ @JulieKallini · 3 months
@OwainEvans_UK Thanks for sharing our work! For a quick overview of the paper, check out my thread:
Quoted: the pinned “Mission: Impossible Language Models” announcement above.
0 replies · 0 retweets · 13 likes
Julie Kallini ✨ @JulieKallini · 9 months
Just to make sure it’s clear: the conjunction and the last conjunct of a coordination phrase form a constituent. So we have [Barbie [and Ken]] rather than [[Barbie and] Ken].
1 reply · 0 retweets · 12 likes
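The bracketing in question, written as an explicit tree; a small sketch using NLTK's Tree class (the node labels are illustrative, not a claim about any particular treebank's conventions):

```python
from nltk import Tree

# [Barbie [and Ken]]: the conjunction forms a constituent with the
# last conjunct, not with the first one.
coord = Tree("NP", [
    Tree("NP", ["Barbie"]),
    Tree("ConjP", [
        Tree("CC", ["and"]),
        Tree("NP", ["Ken"]),
    ]),
])
coord.pretty_print()
```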
Julie Kallini ✨ @JulieKallini · 3 months
@ylecun @nerissimo If that's the case, then you might be interested in our new paper questioning Chomsky's more recent claims about LLMs!
Quoted: the pinned “Mission: Impossible Language Models” announcement above.
1 reply · 0 retweets · 9 likes
Julie Kallini ✨ @JulieKallini · 3 months
@letiepi @benno_krojer Thanks for your comment! Yeah, I think it’s clear that totally random sequences would be hard to learn (though there is information to be learned from a bag of words). That’s at the far end of the scale in the figure, and we test a wider variety of languages in the paper.
0 replies · 0 retweets · 8 likes
Julie Kallini ✨ @JulieKallini · 7 months
@ChrisGPotts @life_of_ccb I feel like we could make a linguistics version of the "math lady" meme using the thumbnails
[image]
2 replies · 1 retweet · 8 likes
Julie Kallini ✨ @JulieKallini · 10 months
@stanfordnlp I’ve also made the difficult decision to leave my job at Meta, after an incredible journey of nearly two years. I am deeply grateful for all I’ve learned about the ML privacy space during my time at the company and all of the great engineers I’ve met along the way.
0 replies · 0 retweets · 7 likes
Julie Kallini ✨ @JulieKallini · 2 months
Ok, who is the Kingdom Hearts fan at OpenAI that decided the model should be called Sora?
Quoted: OpenAI @OpenAI · 2 months
Introducing Sora, our text-to-video model. Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions. Prompt: “Beautiful, snowy…
(10K replies · 33K retweets · 141K likes)
0 replies · 0 retweets · 7 likes
Julie Kallini ✨ @JulieKallini · 2 months
Very cool work from @aryaman2020!
Quoted: Aryaman Arora @aryaman2020 · 2 months
New paper! 🫡 LM interpretability has made progress in finding feature representations using many methods, but we don’t know which ones are generally performant or reliable. We (@jurafsky @ChrisGPotts) introduce CausalGym, a benchmark of 29 linguistic tasks for interp! (1/n)
[image]
(6 replies · 45 retweets · 284 likes)
0 replies · 0 retweets · 5 likes
Julie Kallini ✨ @JulieKallini · 2 years
I’ll be co-instructing @AndrewLeeMaas's Applied Machine Learning course on @corise_ this September! Thanks to our student @emekdahl for sharing how the ML foundations track can transform your ML career.
Quoted: Uplimit @uplimit_ · 2 years
Why Emily Ekdahl chose co:rise to level up her job performance as a machine learning engineer
(0 replies · 0 retweets · 4 likes)
0 replies · 1 retweet · 5 likes
Julie Kallini ✨ @JulieKallini · 3 months
@SOPHONTSIMP @ChrisGPotts @IntuitMachine Locality is the property that parts of a text that are close together are better at predicting each other than parts that are far apart
1 reply · 0 retweets · 4 likes
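One way to make "better at predicting each other" concrete is the mutual information between tokens at a given distance, which typically decays as the distance grows. A rough sketch (a toy corpus like this is far too small for reliable estimates; it only illustrates the computation):

```python
import math
from collections import Counter

def mutual_information(tokens, d):
    """Plug-in estimate (in nats) of MI between tokens d positions apart."""
    pairs = list(zip(tokens, tokens[d:]))
    n = len(pairs)
    joint = Counter(pairs)
    left = Counter(a for a, _ in pairs)
    right = Counter(b for _, b in pairs)
    mi = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n
        mi += p_ab * math.log(p_ab / ((left[a] / n) * (right[b] / n)))
    return mi

tokens = "the cat sat on the mat and the dog sat on the rug".split()
for d in (1, 2, 5):
    print(f"distance {d}: MI ≈ {mutual_information(tokens, d):.3f}")
```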
Julie Kallini ✨ @JulieKallini · 3 months
@AndrewLampinen Great points! We know that the impossible vs. possible distinction is elusive—our methods aim to empirically explore this for LMs. Fully agree with the idea that some 'impossible' languages might not arise simply because they present tougher learning challenges.
1 reply · 0 retweets · 3 likes
Julie Kallini ✨ @JulieKallini · 3 months
@TristanThrush @ChrisGPotts @SOPHONTSIMP @IntuitMachine My intuition is that locality bias would be less clear in BERT than GPT. The incremental nature of GPT’s causal language modeling induces an implicit notion of word order and locality bias; we even explored GPT-2s without positional encodings and found that our results hold.
1 reply · 0 retweets · 3 likes
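One common way to realize "GPT-2 without positional encodings" is to zero out and freeze the learned position embeddings before training, so any order sensitivity must come from the causal attention mask itself; a sketch of that idea (the paper's actual configuration may differ):

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Fresh, randomly initialized GPT-2 with its learned position embeddings
# (wpe) zeroed and frozen: the model receives no explicit position signal.
config = GPT2Config()
model = GPT2LMHeadModel(config)

model.transformer.wpe.weight.data.zero_()
model.transformer.wpe.weight.requires_grad = False

# Training then proceeds as usual; with a causal mask, order information
# can still leak in implicitly through the left-to-right factorization.
```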
Julie Kallini ✨ @JulieKallini · 2 years
So happy you enjoyed the course @laura_uzcategui! It has been an absolute pleasure to teach Applied ML!
Quoted: Laura Uzcátegui @laura_uzcategui · 2 years
I had a really good time attending @corise_ applied ML course taught by @AndrewLeeMaas & @JulieKallini, the experience is totally different from any other course in the sense that you learn a lot from classes but also there is a direct interaction with your professors and 1/2
(1 reply · 2 retweets · 9 likes)
0 replies · 0 retweets · 2 likes
Julie Kallini ✨ @JulieKallini · 1 month
@ElliotMurphy91 @ChrisGPotts Wouldn’t it be more productive to consider both the merits and shortcomings of a new set of tools? Also, the new tools don’t have to invalidate the old ones—tons of work show that LLMs learn syntax from data, confirming what we already know about the structure of language.
0 replies · 0 retweets · 2 likes
Julie Kallini ✨ @JulieKallini · 4 months
@TristanThrush I feel you 😅
0 replies · 0 retweets · 2 likes
Julie Kallini ✨ @JulieKallini · 4 months
@ddemszky Yikes… that has a high edit distance from your actual name!
0 replies · 0 retweets · 2 likes
Julie Kallini ✨ @JulieKallini · 4 months
@ericmitchellai 😢 Feel better!
0 replies · 0 retweets · 1 like
Julie Kallini ✨ @JulieKallini · 3 months
@letiepi @benno_krojer I would say that Chomsky’s idea of an impossible language is closer to what we call languages with “count-based grammar rules”. We run specific experiments for these unnatural but predictable patterns, showing that GPT-2 struggles to learn these rules.
0 replies · 0 retweets · 1 like
Julie Kallini ✨ @JulieKallini · 2 years
Last month, I attended my first in-person international conference to present my paper at TSD2022! This is the second published compling paper based on my thesis work at @PrincetonCS, advised by Christiane Fellbaum.
0 replies · 0 retweets · 1 like
Julie Kallini ✨ @JulieKallini · 8 months
@aryaman2020 Yes! That’s the part I initially didn’t want to spoil that disproves conjunction reduction. Now that Barbie has been out for a while, we can enjoy all its funny language jokes.
Quoted: Christopher Potts @ChrisGPotts · 9 months
Julie also told me that the entire movie culminates in a joke pointing out the falsity of the conjunction reduction transformation, but the joke itself is a spoiler.
(1 reply · 2 retweets · 10 likes)
1 reply · 0 retweets · 1 like
Julie Kallini ✨ @JulieKallini · 3 months
@TristanThrush @ChrisGPotts @SOPHONTSIMP @IntuitMachine Since BERT models use masked language modeling, it’s unclear whether they would induce notions of word order without positional encodings (and locality bias presupposes some notion of word order).
0 replies · 0 retweets · 1 like
Julie Kallini ✨ @JulieKallini · 7 months
@ElisaKreiss @CoalasLab Congrats, Elisa! 🎉
0 replies · 0 retweets · 1 like
Julie Kallini ✨ @JulieKallini · 3 months
@alexisgallagher After you read the grammar, you should tell us whether you think Ithkuil is an impossible language!
0 replies · 0 retweets · 1 like