EigenGender Profile
EigenGender

@EigenGender

5,849
Followers
659
Following
61
Media
2,686
Statuses

all my posts are shitposts that simultaneously reveal the true nature of reality. large language models; kinda EA; 🏳️‍⚧️

Seattle, WA
Joined March 2021
Pinned Tweet
@EigenGender
EigenGender
1 year
There are now a lot of people using a breakthrough AI technology daily. This technology is created by very smart companies who repeatedly and explicitly say that their goal is to work towards AGI. But the people using the fruits of these companies' labors insist that AGI is sci-fi
Tweet media one
5
6
77
@EigenGender
EigenGender
1 year
Universities are really not prepared for the flood of LLM-based honor code violations that are coming, and I'm really not sure what it's possible to do about it. I'd estimate ~5% of university students will submit an assignment largely written by a LLM by end of 2023.
190
449
5K
@EigenGender
EigenGender
1 year
Twitter is filled with a bunch of guys who say "ChatGPT is so much better than Google, I'm using it as my search engine now". And like... ChatGPT will hallucinate facts between 1% and 5% of the time. That's good for an LLM! And really, really bad for a search engine.
135
182
3K
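A back-of-the-envelope illustration of why that error rate is fine for a chatbot and fatal for search (the 1%-5% range is the tweet's own estimate; the 1,000-query volume is an assumed round number): $1000 \times 0.01 = 10$ and $1000 \times 0.05 = 50$, i.e. somewhere between 10 and 50 confidently wrong answers per thousand searches, each delivered with no "no results found" signal to warn the user.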
@EigenGender
EigenGender
1 year
I’ve had a Bayes theorem tattoo on my arm for a while, and I’m looking to supplement it with a lot more equation tattoos. I’ve got a list of candidates, but I figured I should take suggestions from Twitter. Give me your favorite cool-looking equations!
Tweet media one
358
105
2K
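For reference, Bayes' theorem in its usual form:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

(prior times likelihood, renormalized by the probability of the evidence).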
@EigenGender
EigenGender
1 year
If you don’t have low-level technical experience in machine learning but you’ve thought really hard about the nature of intelligence and you know how to fix the problems that machine learning algorithms run into, then your idea has been tried dozens of times and it doesn’t work
41
90
2K
@EigenGender
EigenGender
1 year
as someone who has bad reactions to blood draws but has to do them a lot, it would be so great if there was a tech startup that could run all these tests with a single drop of blood
36
41
1K
@EigenGender
EigenGender
9 months
Honestly shocked that there isn’t an airline that appeals to business travelers by being 40% more expensive, putting a lot more money into avoiding delays or cancellations, and providing a modestly nicer experience in “economy”
96
29
1K
@EigenGender
EigenGender
1 year
What’s the “hardest” sci-fi book you can think of that’s set in the very far future? I.e. the most technologically, socially, and economically plausible description of society in humanity’s far future. Kind of weird that there aren’t a lot of choices.
302
75
972
@EigenGender
EigenGender
1 year
Aligned AGI who makes a utopia for humans but has a warehouse that’s just for his paperclip collection and excitedly shows them off to any human who visits
31
69
817
@EigenGender
EigenGender
1 year
Doctors don’t really understand anything; they’re just doing next word prediction
@emollick
Ethan Mollick
1 year
Extraordinary new paper from Google on medicine & AI: When Google tuned an AI chatbot to answer common medical questions, doctors judged 92.6% of its answers right … compared to 92.9% of answers given by other doctors. And look at the pace of improvement!
Tweet media one
Tweet media two
157
2K
9K
19
40
828
@EigenGender
EigenGender
1 year
Big problem with "any task an LLM can do is not worth teaching" is that LLMs are not currently capable of producing great writing on meaningful topics. But to develop those writing skills, students have to write a bunch of bad freshman-composition essays on derivative subjects
8
27
805
@EigenGender
EigenGender
1 year
If you really focus on and grade based on original analysis and ideas, you can probably distinguish an A-student from an LLM pretty reliably. But it's going to be very hard to grade in a way that distinguishes B-students or C-students from LLMs.
11
28
724
@EigenGender
EigenGender
1 year
I don’t have any hot takes but anyone interested in the social impacts of AI should spend some time checking out r/replika today.
25
55
629
@EigenGender
EigenGender
1 month
Probably dating apps just need to be operated as a public service. Absolutely massive amounts of consumer surplus and any attempt to capture nontrivial amounts of that surplus is insanely destructive
24
24
621
@EigenGender
EigenGender
1 year
The amazing thing about utilitarianism is that it’s a bunch of people saying “we should make all of our moral philosophy decisions based on this magical mathematical utility function” and like <10% of the discourse around utilitarianism is “okay what’s that function then?”
50
27
600
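One textbook formalization of the function being asked about (total utilitarianism; nothing specific to this thread): pick the action that maximizes summed utility over everyone affected,

$$a^{*} = \arg\max_{a} \sum_{i} u_i(a),$$

and essentially every hard question (what $u_i$ measures, how it is elicited, whether it can be compared across people) is hiding inside that innocent-looking $u_i$.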
@EigenGender
EigenGender
1 year
Since this tweet has now spread way way beyond my usual corner of Twitter, LLM=Large Language Model, a recent type of massive AI model that (among other cool things) can generate well-written and coherent text on any subject. Follow me for more cold takes on LLMs I guess?
7
26
581
@EigenGender
EigenGender
1 year
For decades, natural language processing has been “seems like it should be easy for a computer to do, but it’s actually surprisingly hard”. That makes it really hard to convey how impressive modern LLMs are to laypeople who haven’t been paying attention
23
42
571
@EigenGender
EigenGender
1 year
But since this tweet apparently has the pleasure of introducing a whole bunch of people to LLMs: welcome. This is the coolest new technology we’ve had in a long time. You can play with the models I work on at
10
40
554
@EigenGender
EigenGender
1 year
Very imperfect evidence, but ChatGPT's "memory" seems to be based on GPT-3's 4096-token context window. A >4096-token intervening prompt removes its ability to remember things from earlier in the conversation.
Tweet media one
Tweet media two
Tweet media three
15
48
548
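A minimal sketch of the mechanism being described: if the model only ever sees the most recent ~4096 tokens, anything pushed out of that window is simply gone. The token counter below is a crude whitespace approximation rather than a real subword tokenizer, and the 4096 limit is the figure from the tweet.

# Illustration only: chat "memory" as nothing more than a sliding context window.
def count_tokens(text: str) -> int:
    return len(text.split())  # crude stand-in for a real tokenizer

def build_context(messages: list[str], max_tokens: int = 4096) -> list[str]:
    """Keep only the most recent messages that still fit in the window."""
    kept, total = [], 0
    for msg in reversed(messages):       # walk from newest to oldest
        total += count_tokens(msg)
        if total > max_tokens:
            break                        # everything older falls out of "memory"
        kept.append(msg)
    return list(reversed(kept))

history = ["My name is Alice."] + ["filler " * 500] * 10 + ["What is my name?"]
print("My name is Alice." in build_context(history))  # False: the introduction fell out of the window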
@EigenGender
EigenGender
1 year
It's early days, but OpenAI Chat seems like the thing that is going to make language models go from "weird corner of the internet is talking about this" to mainstream. Easy to access and produces good results without prompt-engineering. I'd predict a front-page NYT article soon.
16
29
537
@EigenGender
EigenGender
1 year
(High schools are gonna be even more screwed)
7
6
523
@EigenGender
EigenGender
11 months
While I was working for Facebook, I started to make friends with a guy whose job was conducting environmental reviews. At the time, I felt a bit self-conscious because I had a job that was harming society and he had a pro-social job, but it turns out that was backwards
3
16
512
@EigenGender
EigenGender
2 years
In a few years, copilot will become such an essential part of development that newer languages will be at a massive disadvantage because they won’t have enough training data
46
17
483
@EigenGender
EigenGender
1 year
The amazing thing about the GPT-3 chat is that this *isn’t* GPT-4; this isn’t significantly more intelligent than the GPT-3 of the past 6 months. It’s just easier to access its intelligence to the fullest extent. LLMs have so much hard-to-access power
5
19
453
@EigenGender
EigenGender
1 year
I’ve read a lot about how the airline industry is able to achieve such high rates of safety, but is there anything about why such a safety-conscious culture developed? Seems easy for airplanes to be 10x as deadly as they are today, still safer than cars, and everyone accepts it
112
15
425
@EigenGender
EigenGender
1 year
I think a large part of the success of ChatGPT is that 95% of users always wanted to talk to LLMs like they were conversational agents. ChatGPT just aligned the model with the user’s expectations, and suddenly an average user was almost as good as the best prompt engineers
12
28
421
@EigenGender
EigenGender
1 year
a lot of Yudkowsky-style alignment is predicated on the idea that a super-intelligence can persuade anyone of anything. but doesn’t Yudkowsky failing to convince the big labs to take this hypothesis seriously kinda disprove this?
45
14
416
@EigenGender
EigenGender
11 months
It’s pretty amazing that community notes were the one universally liked twitter feature until a few weeks ago, and then One Poster decided to go to war with community notes and quickly wiped the floor with them
@growing_daniel
Daniel
11 months
It’s amazing that arresting everyone with a gang tattoo in El Salvador literally cut their murder rate in half
200
437
10K
12
18
404
@EigenGender
EigenGender
10 months
seems pretty noteworthy that the first nuclear weapons were made under conditions where they couldn’t do any experiments and they involved a lot of math but still worked on the first try.
19
18
396
@EigenGender
EigenGender
1 year
Prediction: A year from now, there will be a large number of products that chain >10 LLM API calls to produce a useful and consistent program
18
13
371
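A rough sketch of what such a chain might look like; call_llm is a hypothetical placeholder for whatever completion API a product would use, and the specific pipeline steps are invented for illustration.

# Hypothetical chain: each call's output feeds the next prompt.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real LLM API call")

def document_to_report(document: str) -> str:
    summary = call_llm(f"Summarize the key points of:\n{document}")
    outline = call_llm(f"Turn these key points into a report outline:\n{summary}")
    sections = [call_llm(f"Write the section for this outline item:\n{item}")
                for item in outline.splitlines() if item.strip()]
    draft = "\n\n".join(sections)
    return call_llm(f"Edit this draft for consistent tone and terminology:\n{draft}")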
@EigenGender
EigenGender
11 months
Imagine being a guy who wrote some slightly-less-than-perfectly optimized sorting code a few decades ago, and suddenly it’s pointed out in a well-publicized Nature paper
2
10
358
@EigenGender
EigenGender
1 year
If you’ve just started to pay attention to AI progress when ChatGPT came out then you’re perceiving an artificially fast sense of progress. Remember that GPT3->4 is 2-3 years of progress, not six months
15
13
346
@EigenGender
EigenGender
2 years
@RottenInDenmark Everyone knows the standard of care for when a suicidal teen calls a suicide hotline is to express doubts about any desires that teen has
1
5
320
@EigenGender
EigenGender
1 year
Now is the time to pre-register your takes about GPT-4 and decide how to update your AI timelines based on different results
28
15
332
@EigenGender
EigenGender
1 year
@d_feldman We run elections by taking the mode of all the votes
1
5
325
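Taken literally (it is a joke, but the statistics hold for a single-choice plurality election), the winner really is the mode of the ballot list; a minimal sketch with made-up ballots:

from collections import Counter

ballots = ["A", "B", "A", "C", "A", "B"]
winner, count = Counter(ballots).most_common(1)[0]  # plurality winner = mode of the votes
print(winner, count)  # A 3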
@EigenGender
EigenGender
1 year
Gonna pre-register this take: There’s a good chance that the explosive rate of LLM progress over the past few years is about to hit a ceiling and move into a more gradual slope of progress. I still expect LLMs to keep progressing/potentially reach AGI over the long term.
29
15
321
@EigenGender
EigenGender
1 year
I interned at Facebook in 2018, peak Alexa hype. I was talking to a manager high in the Assistant org who said that she knew that Alexa was the future when she had her hands busy in the kitchen and set a timer. That was an example of things to come. Turns out it was the thing
@br___ian
brian
1 year
man Siri and ok Google really revolutionized setting a timer for your pasta
13
24
398
6
3
311
@EigenGender
EigenGender
1 year
A Deepness in the Sky by Vernor Vinge is probably the closest I can think of (and it’s pretty amazing)
13
3
301
@EigenGender
EigenGender
1 year
This is the discussion that we should have been having after Lambda. If a very smart engineer with 99th percentile knowledge of LLMs could be convinced that it was conscious then, when a lot of people are exposed to similar technology, this is going to happen on a large scale
@jd_pressman
John David Pressman
1 year
The biggest update of the past 2 days should be that a substantial fraction, if not most people, are going to try to 'side with the AI', to the extent that is a coherent concept.
6
9
119
6
28
289
@EigenGender
EigenGender
1 year
I want to see an interview where the reporter asks the celebrity “You’ve aged so well! What’s your secret?” And the celebrity answers “well I’ve been using this non-FDA-approved nootropic I read about on LessWrong a decade ago and n=1 but it seems to have worked pretty well”
4
3
273
@EigenGender
EigenGender
1 year
IMO the longtermism/x-risk-focused EA is only good when it's rooted in a place of deep compassion for the global poor. This describes the old-school EAs, who started in global health. But some newer EAs took too direct of a path to x-risk and just like talking philosophy.
18
13
275
@EigenGender
EigenGender
1 year
"Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar" end of an era
8
13
269
@EigenGender
EigenGender
1 year
If you include second-order effects, the death toll of three-mile island is probably in the millions
15
14
262
@EigenGender
EigenGender
1 year
tfw you make an AI to maximize the number of paperclips but luckily you accidentally make a sign error so now you’re in the weirdest police state ever
8
17
265
@EigenGender
EigenGender
1 year
It’s always amazing when people make confident next-decade predictions about the future of large language models based on an eight-month-old scaling law
8
14
261
@EigenGender
EigenGender
1 year
Every once in a while I read a bit about fusion and I’m struck by how technically complex it is compared to any (non-theoretical) work going on in AI right now
17
10
254
@EigenGender
EigenGender
1 year
llm progress is slowing down. Gpt-3 was a 50% improvement from GPT-2, but GPT3->GPT4 was only a 33.33% improvement. and GPT-5 is only predicted to be a 20% improvement!
59
23
249
@EigenGender
EigenGender
11 months
Constantly thinking about the girl in my 2020 AI ethics class who, when asked when we’d have human-level artificial intelligence, said 5 years, because things like the iPhone have gotten better fast. hope she’s doing well
15
7
250
@EigenGender
EigenGender
1 year
Everyone wants to put their ML training cluster in Narnia but no one wants to write the communication protocols to transmit 175B parameters through severe and variable time dilation
15
11
245
@EigenGender
EigenGender
4 months
almost no layperson has interacted with an LLM trained to produce the next token, but that’s the sophisticated-layperson understanding of LLMs. As RLHF models get more dominant, “they just predict the next token” crosses over to misinformation
24
16
242
@EigenGender
EigenGender
1 year
seems noteworthy that NASA maintained an absurdly safety conscious culture during some pretty intense race dynamics
13
3
241
@EigenGender
EigenGender
1 year
LessWrong is the place I go to find some very bad AI takes and some heavily-researched posts on medical topics that I don’t have the expertise to evaluate but that I’m sure are trustworthy.
8
3
232
@EigenGender
EigenGender
1 year
Holy shit I re-read Omelas after reading this thread. I had read it ~9 months ago; when rereading it, I legit thought that someone had swapped out Omelas for another story. This isn't a novel contrarian interpretation; it's very explicit in the text and we all fell for it.
@HTHRFLWRS
cohost.org/hthrflwrs
1 year
it fucking kills me how Ursula Le Guin, in writing a story about how people refuse to engage with a narrative unless it contains suffering, inadvertently created one of the most long-lasting shorthands for dystopian society in the modern narrative
45
359
4K
11
17
238
@EigenGender
EigenGender
4 months
Between this and PEPFAR, I’ve unfortunately come to the conclusion that George W Bush is the most EA president
@ettingermentum
ettingermentum🥥🌴
4 months
Fun story: during his presidency, George W Bush read a book about the Spanish flu, freaked out, held a dozen briefings on pandemic outbreaks and demanded billions of dollars be spent on pandemic mitigation.
243
3K
78K
10
14
230
@EigenGender
EigenGender
1 year
“AGI kills everyone” is like the least-weird outcome of inventing AGI. Not “weird” as in “unlikely”; “weird” as in “hard to imagine”. Every other outcome is stranger
20
7
226
@EigenGender
EigenGender
1 year
It’s unfortunate that AGI-risk is both plausible enough to be worth taking seriously and simultaneously deeply similar to the most unhealthy parts of religious beliefs
12
12
220
@EigenGender
EigenGender
1 year
whether or not he's correct, Yudkowsky's writings are expertly and perfectly and magnificently crafted to lure smart people into wanting to believe him completely. That's why Yudkowsky should argue that no one should read his own work.
19
4
214
@EigenGender
EigenGender
2 years
No matter how good of an argument you give, I'm always going to have a deep suspicion of EA causes that are primarily donating money to pay upper-class people to think about problems. Not sure if that's a good thing or a bad thing.
17
9
213
@EigenGender
EigenGender
1 year
I’m finally reading Greg Egan and I can’t begin to express what an alpha move it is to make up a system of physics for your book
15
9
215
@EigenGender
EigenGender
1 year
I can’t justify it, but the biggest red flag that someone is bullshitting about AI is the phrase “an AI” when talking about current systems
22
14
207
@EigenGender
EigenGender
1 year
"AI Safety is an apocalyptic cult" dude I'd probably put a lower probability on human extinction from AI than most of the left-wing people my age do on extinction from climate change.
15
7
203
@EigenGender
EigenGender
1 year
I’ve never been able to stomach the taste of coffee, but I took a 100mg caffeine pill today to possibly help treat a health condition and holy shit I’m productive I see why everyone likes this stuff
14
4
201
@EigenGender
EigenGender
8 months
How is China so much better than us at holding community meetings
@mnolangray
M. Nolan Gray
8 months
In the time it's going to take LA Metro to do "community engagement" for a single rail subway line, China built a nationwide high-speed rail network.
Tweet media one
92
390
3K
7
14
202
@EigenGender
EigenGender
1 year
Okay, I was waiting for the EA defense of this to come out, but it's disappointing. This is bad. Not just because of the optics or the visuals. This is a bad use of funds and, as an EA-adjacent person, this significantly lowers my opinion of EA.
@xriskology
Dr. Émile P. Torres
1 year
Absolutely stunning that the Centre for Effective Altruism *bought* this place. #effectivealtruism
Tweet media one
101
151
2K
12
7
199
@EigenGender
EigenGender
1 year
Everyone saying "LLMs are just doing next word prediction" needs to take a closer look at the cases where LLMs show genuine pseudo-intelligence, but ChatGPT has created a large group of people who need to remember that most generated text is interpolation on the training data
21
11
192
@EigenGender
EigenGender
2 years
@RottenInDenmark Peanut allergies would be a great example for “something that might look like a social contagion but is so obviously not a social contagion that it’s a great example of the problems with social contagion theories”
3
5
191
@EigenGender
EigenGender
1 year
Downside to being an EA is that I find myself suddenly emotionally invested in crypto drama against my will
4
8
189
@EigenGender
EigenGender
9 months
@mealreplacer did you know that the release of ChatGPT is closer to the birth of Cleopatra than it is to the present day
0
6
186
@EigenGender
EigenGender
1 year
Very funny how, when the Blake Lemoine stuff came out, the normie reaction was mostly “Haha even google engineers are dumb”. Now as Lambda-level systems are available to the public we’re seeing more and more “please stop being mean to the AI” takes.
@EigenGender
EigenGender
1 year
This is the discussion that we should have been having after Lambda. If a very smart engineer with 99th percentile knowledge of LLMs could be convinced that it was conscious then, when a lot of people are exposed to similar technology, this is going to happen on a large scale
6
28
289
9
13
182
@EigenGender
EigenGender
1 year
I’m expecting all the academics who said that LLMs could never understand anything as long as they weren’t grounded to conclude that, since GPT-4 can look at images, it’s grounded.
12
10
170
@EigenGender
EigenGender
2 years
@BadMedicalTakes This is like two paragraphs away from reinventing the concept of original sin
0
0
160
@EigenGender
EigenGender
6 months
we are going to learn about AGI when the AGI books itself a slot on the Dwarkesh podcast
2
9
167
@EigenGender
EigenGender
1 year
Getting so much secondhand embarrassment when I’m listening to a podcast and the podcast host is just clearly thinking better about the issue than the expert guest they have on
6
1
161
@EigenGender
EigenGender
1 year
EA go like one week without making me embarrassed to be associated with y’all challenge
5
2
165
@EigenGender
EigenGender
7 months
Currently being forced to listen to a presentation from an EA guy who wants to optimize African farms with RL and LLMs. Most of what he’s built is a receipt reimbursement app that stops fraud by checking the normality assumption. The simulation hypothesis is real and this is hell
10
3
161
@EigenGender
EigenGender
1 year
basically all the useful capabilities of a LLM come from inner alignment failures
13
10
161
@EigenGender
EigenGender
1 year
LLM inference is absurdly expensive compared to other models and absurdly cheap compared to other sources of intelligence
6
12
160
@EigenGender
EigenGender
1 year
EA is weird because there’s a philosophy that could be used to justify horrible things but then everyone in the movement is like “yeah we know that’s not real right? Like we’re all adults who are just gonna do basically good things right?”. I like EA but this really doesn’t scale
22
2
163
@EigenGender
EigenGender
1 year
Since this is getting far beyond my usual circle let me just be clear that: 1) this is a shitpost 2) like all of my shitposts it is deep and profound and contains all the answers to life’s mysteries
4
4
157
@EigenGender
EigenGender
1 year
I know nothing about the fundamental tech behind Theranos, but now I’m concerned that someone is going to actually get this working but will never be able to get VC funding.
11
1
156
@EigenGender
EigenGender
3 months
honestly this is a pretty shockingly high number
Tweet media one
5
4
156
@EigenGender
EigenGender
8 months
basically every pre-industrial agricultural society spent most of its time in a Malthusian trap, and was mostly miserable. This doesn’t seem to apply to hunter-gatherer societies, which seem happy and at a stable population. Why is this?
64
5
156
@EigenGender
EigenGender
1 year
guy who thinks that Large Language Models will never “really understand” anything but that they will turn us all into paperclips anyway.
17
4
155
@EigenGender
EigenGender
1 year
I gotta say I'm not loving the fact that AI's best subject is quickly turning out to be Biology.
@csvoss
Chelsea Sierra Voss
1 year
@HiFromMichaelV Ah, the key is that only 400 students in the US qualified to take the Semifinal Exam in 2020. Only one student that year achieved a score higher than GPT-4’s.
4
6
65
12
2
153
@EigenGender
EigenGender
2 years
@goodside A bit clunky and it took a few tries, but I was able to get GPT3 to un-escape-string a sentence. Hypothetically possible to do an injection attack even if the programmer escapes the actual user input but not the LLM output.
Tweet media one
1
19
154
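A sketch of the failure mode being described, with invented table and function names: the developer escapes the raw user input before it reaches the model, but splices the model's output straight into SQL, so a model that "helpfully" un-escapes the string reintroduces the injection. The safe variant parameterizes the query instead.

import sqlite3

def llm(prompt: str) -> str:
    raise NotImplementedError("placeholder for a real model call")

def lookup_user_unsafe(conn: sqlite3.Connection, raw_input: str) -> None:
    escaped = raw_input.replace("'", "''")                        # the *input* is escaped...
    name = llm(f"Extract just the person's name from: {escaped}")
    conn.execute(f"SELECT * FROM users WHERE name = '{name}'")    # ...but the *model output* is not

def lookup_user_safe(conn: sqlite3.Connection, raw_input: str) -> None:
    name = llm(f"Extract just the person's name from: {raw_input}")
    conn.execute("SELECT * FROM users WHERE name = ?", (name,))   # parameterized: output never becomes SQL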
@EigenGender
EigenGender
1 year
timeline where we align an AGI, everything is great, and then a few years later a cosmic ray corrupts its parameters enough that it becomes unaligned
7
3
150
@EigenGender
EigenGender
2 years
@goodside Wow I'm honestly curious when the first LLM-mediated SQL injection attack will take place now.
3
5
148
@EigenGender
EigenGender
1 year
@ESYudkowsky if you’re so smart then why do I disagree with you?
1
3
150
@EigenGender
EigenGender
11 months
The OpenAI employee who published the “<1% TAI by 2043” estimate is clearly new to this; it would have gotten ten times the attention if he published it internally, leaked it, and then referred to it as a “leaked OAI memo”
2
2
145
@EigenGender
EigenGender
1 year
constantly thinking about the guy who saw my Twitter account and decided to slide into my DMs with “Have you heard about LessWrong”
6
0
147
@EigenGender
EigenGender
1 year
Doctors are a cool demo but they frequently make mistakes and can’t be relied upon in real-world applications
3
5
149
@EigenGender
EigenGender
6 months
that’s it the “EA-adjacent” meme has gone too far
Tweet media one
8
9
146
@EigenGender
EigenGender
2 years
@tszzl idk I think everyone reads too much into Ender's Game. The true message doesn't have anything to do with children fighting or ethics or genocide. The real metaphorical meaning is all about how it's possible to pick any coordinate system as your frame of reference in 0g.
5
4
143
@EigenGender
EigenGender
1 year
Starting a campaign to cancel Yudkowsky for writing a fanfic of a work by a transphobic writer. Yeah, HPMOR was written long before JKR's tweets, but he should have been able to Functional Decision Theory his way out of this one.
7
9
138
@EigenGender
EigenGender
1 year
Let’s play a game of “is this talking about LLMs or humans”
@ylecun
Yann LeCun
1 year
5- they have limited working memory 6- they execute a fixed number of computational steps per generated token 7- hence they are very far from Turing complete 8- Auto-regressive generation is a exponentially-divergent diffusion process, hence not controllable. 3/
22
23
313
6
6
141
@EigenGender
EigenGender
1 year
preventing an AGI from self-modifying is pretty easy: `chmod 400 agi.exe`
6
10
140
@EigenGender
EigenGender
8 months
deeply funny to be the CEO of a $42B company and have to spend your day coming up with a project to retroactively justify your CTO’s unthinking 6am tweets. It’s like the Trump administration except if Trump was technically vice president.
@lindayaX
Linda Yaccarino
8 months
Our users’ safety on X is our number one priority. And we’re building something better than the current state of block and mute. Please keep the feedback coming.
5K
783
6K
6
5
141
@EigenGender
EigenGender
1 year
e/acc is so weird because it runs the gamut from “basically the same policies as AI safety people but a more optimistic vibe” to “actually if humans get killed by a paperclip optimizer it’s good”
14
4
138
@EigenGender
EigenGender
1 year
the lack of gender diversity among rationalists has really caused them to mostly miss the one category of true info hazards that are common and show up in everyday life
10
3
131
@EigenGender
EigenGender
1 year
I think most people freaking out over direct xrisk implications of Bing are being silly but if Bing=GPT-4 and some of the weird behaviors are the result of scale I’ll get a bit worried.
19
2
131
@EigenGender
EigenGender
3 months
@revhowardarson no you don’t understand. I want a dictatorship where me and my friends are the dictators
1
1
130