Rob Lynch Profile
Rob Lynch

@RobLynch99

774
Followers
33
Following
14
Media
113
Statuses

Head of Product at UniCourt | Structured Litigation Data for law, insurance and more. Interested in all things data, LLM and AI

Joined October 2022
@RobLynch99
Rob Lynch
5 months
@ChatGPTapp @OpenAI @tszzl @emollick @voooooogel Wild result. gpt-4-turbo over the API produces (statistically significant) shorter completions when it "thinks" it's December vs. when it thinks it's May (as determined by the date in the system prompt). I took the same exact prompt
Tweet media one
Tweet media two
Tweet media three
138
354
2K
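A minimal sketch of the kind of May-vs-December comparison described above; the prompt, dates, and N here are placeholders, not the actual repro code linked later in the thread:

```python
# Sketch of the May-vs-December length comparison (not the exact repro code).
# Assumes the `openai` Python client (v1.x) and scipy; prompt and N are placeholders.
from openai import OpenAI
from scipy import stats

client = OpenAI()
PROMPT = "Write a function that parses a CSV file and returns the rows as dicts."
SYSTEM = "You are ChatGPT, a large language model. Current date: {date}"

def completion_lengths(date: str, n: int = 30) -> list[int]:
    """Return character lengths of n completions with the given date in the system prompt."""
    lengths = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4-turbo",
            messages=[
                {"role": "system", "content": SYSTEM.format(date=date)},
                {"role": "user", "content": PROMPT},
            ],
        )
        lengths.append(len(resp.choices[0].message.content))
    return lengths

may = completion_lengths("2023-05-15")
december = completion_lengths("2023-12-15")
# Two-sample t-test on character counts (the thread's analysis used characters, not tokens).
t, p = stats.ttest_ind(may, december)
print(f"mean May={sum(may)/len(may):.0f}  mean Dec={sum(december)/len(december):.0f}  p={p:.4f}")
```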
@RobLynch99
Rob Lynch
5 months
@ChatGPTapp @OpenAI @tszzl @emollick @voooooogel Small but important clarification. The distribution is labeled tokens but the measure, and analysis, is actually done on character length *not* tokens. Though for an effect this size I think it's a good proxy.
4
3
114
@RobLynch99
Rob Lynch
5 months
@NickADobos @ChatGPTapp @OpenAI @tszzl @emollick @voooooogel I wanted to do a month by month comparison but the effect is such that you need a fairly high N (because the standard deviation in completion lengths is already pretty high in the first place), and it gets pricey fast, haha. I published my code though, so others can try!
2
1
33
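A rough illustration of why a fairly high N is needed when the per-completion variance is large; this uses statsmodels' power calculation with a made-up effect size, not the measured one:

```python
# Rough power calculation: samples per group needed to detect a small difference
# in mean completion length when completion lengths already vary a lot.
# The effect size below is a placeholder, not the value measured in the thread.
from statsmodels.stats.power import TTestIndPower

effect_size = 0.2   # hypothetical Cohen's d (mean difference / pooled std dev)
n_required = TTestIndPower().solve_power(effect_size=effect_size, alpha=0.05, power=0.8)
print(f"~{n_required:.0f} completions per condition needed at d={effect_size}")
```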
@RobLynch99
Rob Lynch
5 months
@ChatGPTapp @OpenAI @tszzl @emollick @voooooogel Posted code here. Please note the analysis was done at N=477 and based on character count, not tokens as in the label: And also please note it's parallelized, so it'll run fast but not cheap 😅 Around $28 per run to exactly repro!
0
0
22
@RobLynch99
Rob Lynch
5 months
Wow, so cool. I'm glad someone else has seen it in the wild! Thanks for checking it out @voooooogel
@voooooogel
thebes
5 months
Reproduced! There were some small bugs in the original test code (lack of zero padding for May (h/t @gwern ) and one of those pervasive """-string indentation issues), but still reproduces without them, to the best of my stats knowledge 🎉
Tweet media one
Tweet media two
7
7
96
0
1
14
@RobLynch99
Rob Lynch
5 months
@0xPrash @ChatGPTapp @OpenAI @tszzl @emollick @voooooogel @0xiAkhil Yes, needs to be reproduced for sure to make sure I'm not missing something or going crazy!
1
0
14
@RobLynch99
Rob Lynch
5 months
@voooooogel DMs opened, give it another try!
1
0
11
@RobLynch99
Rob Lynch
5 months
@IanArawjo @ChatGPTapp @OpenAI @tszzl @emollick @voooooogel Interesting, I ran at N = 80 again just now and got a p-value of 0.089 (two-tailed) but I did it on character count, not tokens (see my clarification to the original post). I ran it a few times over the weekend (making sure *not* to p-hack), and the effect definitely grew as
4
0
12
@RobLynch99
Rob Lynch
5 months
I deny any influence from the May December PR team on my experiment 🤣
@nathanwchan
Nate Chan
5 months
We live in a simulation
Tweet media one
Tweet media two
1
1
9
0
1
10
@RobLynch99
Rob Lynch
5 months
Are @yoheinakajima ’s BabyAGI and other paired loop LLM experiments like @SigGravitas Auto-GPT a precursor to truly agentic “conscious” AI? Julian Jaynes might think so. In 1976, Jaynes wrote a book called “The Origin of Consciousness in the Breakdown of the Bicameral Mind".
2
3
10
@RobLynch99
Rob Lynch
4 months
Adding a link to the GitHub repo with code I'm using to (hopefully) train an LLM to play text-adventure games (). Starting with the core of a PPO loop (no policy model yet) to interact with Zork in terminal and train a reward model. And also a logit
2
1
7
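A sketch of one convenient way to drive Zork from Python for a loop like that, using the Jericho library; the game file path is an assumption and the linked repo may wire up the environment differently:

```python
# Sketch: interact with Zork from Python via the Jericho library (pip install jericho).
# Assumes a local zork1.z5 game file; the linked repo may do this differently.
from jericho import FrotzEnv

env = FrotzEnv("zork1.z5")
obs, info = env.reset()
print(obs)  # opening room description

# Each step returns the new observation, the game's score delta, and a done flag,
# which is the shape of signal a reward model / PPO loop would consume.
obs, reward, done, info = env.step("open mailbox")
print(obs, reward, done)
```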
@RobLynch99
Rob Lynch
5 months
@IanArawjo @ChatGPTapp @OpenAI @tszzl @emollick @voooooogel @voooooogel was able to repro on character counts, I retweeted. Looks like a higher N is needed.
1
0
5
@RobLynch99
Rob Lynch
4 months
Looking at the logit outputs of Mistral 7B (not instruct) when "placed" into a text-adventure environment is really encouraging. Verbs like look, touch, take, go, open and get were in the top ten, and petting the dog was in position 24. Lots of potential here to be tuned.
Tweet media one
Tweet media two
0
0
3
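A sketch of how those next-token logits can be inspected with Hugging Face transformers; the prompt below is an invented stand-in for the actual game text:

```python
# Sketch: inspect the next-token logits of a base model dropped into a
# text-adventure style prompt. The prompt is an invented stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "mistralai/Mistral-7B-v0.1"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float16, device_map="auto")

prompt = (
    "You are standing in an open field west of a white house. "
    "There is a small mailbox here.\n> "
)
inputs = tok(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # logits for the next token only

# Look at the top candidate continuations (command verbs, hopefully).
top = torch.topk(logits, 25)
for score, tok_id in zip(top.values, top.indices):
    print(f"{score.item():8.2f}  {tok.decode(int(tok_id))!r}")
```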
@RobLynch99
Rob Lynch
5 months
Answering the question objectively and accurately of whether or not an AI model (or system of AI models) is independently and successfully goal-seeking seems of pretty key concern to ⏸️/⏹️-ers, ⏩-ers, people who believe we're close, people who believe we're far off and everyone
0
0
5
@RobLynch99
Rob Lynch
5 months
@mbaker000 @ChatGPTapp @OpenAI @tszzl @emollick @voooooogel If others can reproduce, I'd definitely be encouraged to start looking for experiments that would disprove that hypothesis. It's a big claim!!
1
0
5
@RobLynch99
Rob Lynch
4 months
Text adventure games are said to have “sparse rewards”, which is one of the things that make them hard for RL algorithms to solve. However, they’re very rewarding to play. Where is the reward coming from? It seems to me like discovering new states (rooms you can visit, things you can
0
0
4
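A sketch of one simple way to turn that intuition into a dense training signal: an exploration bonus paid the first time a state (room description, item, response) is seen. This is a hypothetical helper, not taken from any repo mentioned here:

```python
# Sketch: a count-based novelty bonus that rewards the agent for reaching states
# it hasn't seen before. Hypothetical helper, not from any repo in the thread.
class NoveltyBonus:
    def __init__(self, bonus: float = 1.0):
        self.seen: set[str] = set()
        self.bonus = bonus

    def __call__(self, observation: str) -> float:
        """Return a bonus the first time this observation text is encountered."""
        key = observation.strip().lower()
        if key in self.seen:
            return 0.0
        self.seen.add(key)
        return self.bonus

novelty = NoveltyBonus()
# Combined reward for an RL step: sparse game score plus dense exploration bonus.
# reward = score_delta + novelty(obs)
```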
@RobLynch99
Rob Lynch
5 months
@Mihoda @0xPrash @ChatGPTapp @OpenAI @tszzl @emollick @voooooogel @0xiAkhil Exactly, and I am not a statistician by far!! It's a big claim with many possible explanations!
1
0
4
@RobLynch99
Rob Lynch
5 months
Super interesting development from @a_karvonen . A 50M parameter model trained on chess move sequences not only learns how to play chess (making sequences of moves not in the training set) but can also be shown with probing to have developed a world model of the board. So any
@a_karvonen
Adam Karvonen
5 months
I trained Chess-GPT, a 50M parameter LLM, to play at 1500 ELO. We can visualize its internal state of the board. In addition, to better predict the next character it estimates the ELO of the players involved. 🧵
Tweet media one
41
123
917
0
0
3
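A sketch of the general probing idea referenced above: a linear classifier trained on frozen hidden states to read out a board square. Shapes and data here are hypothetical stand-ins for the actual Chess-GPT setup:

```python
# Sketch of linear probing: train a simple classifier on frozen hidden states to
# read out board state. Shapes and data are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

n_positions, d_model = 5000, 512
hidden_states = np.random.randn(n_positions, d_model)         # activations at some layer
square_contents = np.random.randint(0, 13, size=n_positions)  # e.g. piece on e4 (13 classes)

probe = LogisticRegression(max_iter=1000)
probe.fit(hidden_states[:4000], square_contents[:4000])
acc = probe.score(hidden_states[4000:], square_contents[4000:])
print(f"probe accuracy on held-out positions: {acc:.2f}")
# On real activations, well-above-chance accuracy is the evidence that a board
# "world model" is linearly decodable from the model's internal state.
```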
@RobLynch99
Rob Lynch
5 months
@janekm @insom_ai333 @ChatGPTapp @OpenAI @tszzl @emollick @voooooogel I reproduced the system prompt from ChatGPT in the system message over the API. That was the only difference. One said the current date was in May, one in December.
0
0
3
@RobLynch99
Rob Lynch
5 months
Wondering what other context window enhancers and diminishers might be out there and how to quantify them (without breaking the OpenAI budget!) h/t @voooooogel
@voooooogel
thebes
6 months
so a couple days ago i made a shitpost about tipping chatgpt, and someone replied "huh would this actually help performance" so i decided to test it and IT ACTUALLY WORKS WTF
Tweet media one
268
1K
8K
0
0
3
@RobLynch99
Rob Lynch
5 months
Takeaways:
-- OpenAI's Ada embeddings can underperform open-source embeddings and are way more expensive
-- Embeddings capture so much information that even the simplest of models are able to do solid text/sentiment classification
-- Here's some code to compare embedding models
@RobLynch99
Rob Lynch
5 months
Embedding models are so good at capturing content and semantics of text that even a basic logistic regression model trained on them can get surprisingly good results on text classification and sentiment analysis tasks (saving the need for heavy model training and loading). Even
Tweet media one
1
0
3
0
1
3
@RobLynch99
Rob Lynch
5 months
Wow. This blew up. Thanks @arstechnica and @benjedwards . Thanks too to @IanArawjo for the efforts in trying (and failing) to replicate! Super interested to see what (if anything) others find on this and on similar experiments like @voooooogel ’s tipping finding.
@arstechnica
Ars Technica
5 months
Is ChatGPT becoming lazier because it’s December? People run tests to find out
6
8
16
0
0
3
@RobLynch99
Rob Lynch
5 months
Embedding models are so good at capturing content and semantics of text that even a basic logistic regression model trained on them can get surprisingly good results on text classification and sentiment analysis tasks (saving the need for heavy model training and loading). Even
Tweet media one
1
0
3
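A sketch of the kind of pipeline described above, pairing an open-source sentence-transformers model with plain scikit-learn logistic regression; the embedding model and tiny example corpus are illustrative choices, not necessarily what the comparison code uses:

```python
# Sketch: embeddings + plain logistic regression for sentiment classification.
# Embedding model and corpus are illustrative; swap in a real labeled dataset.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

texts = [
    "great movie, loved it", "an absolute joy to watch", "best film of the year",
    "terrible, a waste of time", "boring and far too long", "i want my money back",
]
labels = [1, 1, 1, 0, 0, 0]

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # open-source embedding model
X = encoder.encode(texts)                           # (n_samples, embedding_dim)

X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2, stratify=labels)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```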
@RobLynch99
Rob Lynch
5 months
@ai_for_humans @IanArawjo @ChatGPTapp @OpenAI @tszzl @emollick @voooooogel I didn’t and @gwern caught this too, but @voooooogel fixed and was able to repro on character count (higher N than 80 was needed), I retweeted that. Looks like it really is there.
0
0
3
@RobLynch99
Rob Lynch
5 months
@RandomSprint @aspergtame @ChatGPTapp @OpenAI @tszzl @emollick @voooooogel I am very much not a statistician. 😅 That's why I threw out the distribution and repro steps too.
0
0
2
@RobLynch99
Rob Lynch
5 months
Spent some time playing Zork on my phone (see prior very long tweet), shout out to Frotz on App Store for making classic text adventure games accessible on mobile. First takeaway, it’s not easy out of the gate at all. Spent time stuck in a maze and building a picture of the map
Tweet media one
Tweet media two
0
0
1
@RobLynch99
Rob Lynch
1 year
9/ Want to learn more about attention mechanisms in the brain? Check out these resources: "Brain mechanisms associated with internally directed attention and self-generated thought" and "The Dorsal Attention Network"
0
1
2
@RobLynch99
Rob Lynch
4 months
Robot soma 🤣 Using PPO reinforcement learning I fine-tuned Llama2 13B (with only 20GB of VRAM!) to produce hugely more positive responses using a BERT sentiment measure as a reward function. Blue is distribution of sentiment before training, red after. Next will put the same
Tweet media one
0
0
1
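A sketch of the reward signal described above: score generated responses with a BERT-style sentiment classifier and hand the positive-class score to a PPO trainer (e.g. trl's PPOTrainer). The model name here is a common default, not necessarily the one used:

```python
# Sketch: sentiment score as a PPO reward. The classifier is a common default
# checkpoint, not necessarily the one used in the experiment described above.
import torch
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    top_k=None,   # return scores for both classes
)

def reward_fn(responses: list[str]) -> list[torch.Tensor]:
    """Positive-sentiment score for each generated response, as PPO reward tensors."""
    rewards = []
    for scores in sentiment(responses):
        pos = next(s["score"] for s in scores if s["label"] == "POSITIVE")
        rewards.append(torch.tensor(pos))
    return rewards

print(reward_fn(["This is wonderful news!", "Everything is broken and awful."]))
```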
@RobLynch99
Rob Lynch
1 year
9/ Want to dive deeper into transformers and self-attention? Check out these resources: "Attention is All You Need" (original paper): Illustrated Transformer:
0
1
2
@RobLynch99
Rob Lynch
1 year
1/ Are there connections between AI models like transformers and the human brain? We previously discussed self-attention in transformers and attention mechanisms in the brain. Now, let's focus on the fascinating similarities between these systems!
1
0
1
@RobLynch99
Rob Lynch
1 year
1/ Have you ever wondered how AI can understand and generate human-like text? The secret lies in an architecture called transformers, which use a powerful mechanism called self-attention to learn context and dependencies in the input data. Let's dive in!
1
0
1
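Since this thread walks through self-attention at a high level, here is a minimal numerical sketch of the scaled dot-product attention at its core:

```python
# Minimal sketch of scaled dot-product self-attention: each position's output is a
# weighted mix of every position's value vector, with weights from query-key similarity.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # similarity of each token to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over positions
    return weights @ V                                  # context-mixed representations

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
X = rng.standard_normal((seq_len, d_model))             # 4 token embeddings
Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)              # (4, 8)
```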
@RobLynch99
Rob Lynch
1 year
8/ In summary, the idea of life as an unbroken algorithmic run, driven by energy gradients, free energy minimization, and entropy, provides an interesting perspective on how our complex world came to be.
1
0
1
@RobLynch99
Rob Lynch
1 year
1/ Is all life on Earth a single long running algorithm?: Ever considered that life on Earth might be an unbroken, continuous process from its origin to the present day? Let's dive into this intriguing idea and explore how energy gradients might have driven the emergence of life.
1
0
0
@RobLynch99
Rob Lynch
1 year
We’ll come back to coordinates a bit later but now we’re going to jump over to machine learning. Specifically, machine learning on words which is a type of “natural language processing”.
1
0
0
@RobLynch99
Rob Lynch
1 year
6/ The brain also relies on Hebbian learning to strengthen synaptic connections over time. This principle, often summed up as "neurons that fire together wire together," allows our brains to learn, adapt, and create associations between related stimuli.
1
0
1
@RobLynch99
Rob Lynch
1 year
7/ The attention mechanisms in our brains are intricate and interconnected, enabling us to process the vast amounts of sensory information we encounter every day. These neural networks play a critical role in our perception, decision-making, and learning.
1
0
1
@RobLynch99
Rob Lynch
5 months
@TomDavenport @simonw Yes way more research needed as it seems totally odd and unlikely to me too, but after I reproed it three times over the weekend with no p-hacking, I figured I should throw it out so others can try and replicate (or fail to!)
0
0
1
@RobLynch99
Rob Lynch
1 year
@highlightmv_ Rob Thomas - Lonely No More or Ace of Base - Beautiful Life is the closest I’ve found. It’s been driving me nuts too.
0
0
1
@RobLynch99
Rob Lynch
5 months
tl;dr summary: The Turing Test is likely no longer a useful measure of human-level AI capabilities, but being able to complete a text adventure game (like Zork) to human-level could be a good goal/canary of goal-seeking AGI. LLMs seem unable to do this as they're deeply trained on
@RobLynch99
Rob Lynch
5 months
Answering the question objectively and accurately of whether or not an AI model (or system of AI models) is independently and successfully goal-seeking seems of pretty key concern to ⏸️/⏹️-ers, ⏩-ers, people who believe we're close, people who believe we're far off and everyone
0
0
5
0
0
1
@RobLynch99
Rob Lynch
1 year
So this is why people are excited about, and willing to pay for, embeddings. They allow us to do math and geometry with meaning, enabling all kinds of powerful features like semantic search.
0
0
1
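A sketch of that "math with meaning" point: semantic search as nearest-neighbour lookup by cosine similarity over embedding vectors (the embedding model and documents are illustrative choices):

```python
# Sketch: semantic search as cosine similarity over embedding vectors.
# The embedding model and documents are illustrative choices.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")
docs = [
    "How to reset a forgotten password",
    "Quarterly revenue grew 12% year over year",
    "The hike to the summit takes about four hours",
]
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

query_vec = encoder.encode(["I can't log into my account"], normalize_embeddings=True)[0]
scores = doc_vecs @ query_vec          # cosine similarity, since vectors are unit length
print(docs[int(np.argmax(scores))])    # -> the password-reset doc
```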
@RobLynch99
Rob Lynch
1 year
6/ Although there are fundamental differences between self-attention in transformers and attention mechanisms in the human brain, the intriguing similarities between these systems can shed light on the nature of intelligence.
1
0
1