EleutherAI Profile
EleutherAI

@AiEleuther

19,542
Followers
76
Following
27
Media
639
Statuses

A non-profit research lab focused on interpretability, alignment, and ethics of artificial intelligence. Creators of GPT-J, GPT-NeoX, and VQGAN-CLIP

Joined August 2022
Pinned Tweet
@AiEleuther
EleutherAI
1 year
Over the past two and a half years, EleutherAI has grown from a group of hackers on Discord to a thriving open science research community. Today, we are excited to announce the next step in our evolution: the formation of a non-profit research institute.
20
155
876
@AiEleuther
EleutherAI
1 year
Introducing release 2.0 of GPT-NeoX, the open-source Megatron-DeepSpeed-based library used to train GPT-NeoX-20B and the Pythia model suite.
6
109
547
@AiEleuther
EleutherAI
1 year
The most common question we get about our models is "will X fit on Y GPU?" This, and many more questions about training and inference with LLMs, can be answered with some relatively easy math. By @QuentinAnthon15 , @BlancheMinerva , and @haileysch__
12
102
510
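For reference, a minimal sketch of the kind of arithmetic involved (the bytes-per-parameter figures below are common mixed-precision rules of thumb, not the blog post's exact accounting; activations and KV cache add overhead on top):

```python
def inference_memory_gb(params_billion: float, bytes_per_param: float = 2) -> float:
    """fp16/bf16 weights only: 1B params * 2 bytes = 2 GB."""
    return params_billion * bytes_per_param

def training_memory_gb(params_billion: float) -> float:
    """Mixed-precision Adam rule of thumb: ~2 (weights) + 2 (grads)
    + 12 (fp32 master weights + two Adam moments) = ~16 bytes/param."""
    return params_billion * 16

# e.g. a 20B model: ~40 GB of weights for inference, ~320 GB of
# weight/gradient/optimizer state for training, before activations.
print(inference_memory_gb(20), training_memory_gb(20))
```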
@AiEleuther
EleutherAI
1 year
Everyone knows that transformers are synonymous with large language models… but what if they weren’t? Over the past two years @BlinkDL_AI and team have been hard at work scaling RNNs to unprecedented sizes. Today we are releasing a preprint on our work
5
118
474
@AiEleuther
EleutherAI
1 year
What do LLMs learn over the course of training? How do these patterns change as you scale? To help answer these questions, we are releasing Pythia, a suite of LLMs + checkpoints specifically designed for research on interpretability and training dynamics!
4
87
475
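For example, loading one of the intermediate checkpoints from the Hugging Face Hub (a sketch assuming the step<N> revision naming documented in the Pythia repository):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Partially trained Pythia checkpoints are published as Hub revisions.
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-160m",
    revision="step3000",  # an early-training checkpoint
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m")
```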
@AiEleuther
EleutherAI
2 years
As part of our work to democratize and promote access to language model technology worldwide, the Polyglot team at EleutherAI is conducting research on multilingual and non-English NLP. We are excited to announce their first models: Korean LLMs with 1.3B and 3.8B parameters.
3
43
372
@AiEleuther
EleutherAI
1 year
We have been getting emails from confused individuals trying to access an API page of ours recently. That webpage doesn’t exist, because we don’t have an API. One of them finally clued us in to why: apparently ChatGPT suggests it for trying out our models.
7
30
276
@AiEleuther
EleutherAI
8 months
ggml is a deeply impressive project, and much of its success is likely ascribable to @ggerganov 's management. Managing large-scale branching collaborations is a very challenging task (one we hope to improve at!), and Georgi deserves huge props for how he handles it.
@ggerganov
Georgi Gerganov
9 months
The ggml roadmap is progressing as expected, with a lot of infrastructural development already completed. We now enter the more interesting phase of the project: applying the framework to practical problems and doing cool stuff on the Edge
Tweet media one
7
41
540
7
15
239
@AiEleuther
EleutherAI
10 months
We applaud @Meta ’s continued push to openly license their models with #Llama2 having the most permissive license yet. However we are extremely sad to see Meta continue to spread misinformation about the licensing of the model: LLaMA 2 is not open source
4
51
213
@AiEleuther
EleutherAI
3 months
We’re excited to be collaborating on a new *resource release* to help provide an on-ramp for new open model developers: the Foundation Model Development Cheatsheet!
Tweet media one
4
44
211
@AiEleuther
EleutherAI
1 year
A common meme in the AI world is that responsible AI means locking AIs up so that nobody can study their strengths and weaknesses. We disagree: if there are going to be LLM products by companies like OpenAI and Google, then independent researchers must be able to study them.
@ClementDelangue
clem 🤗
1 year
What am I excited about for 2023? Supporting more open-source science, models, datasets and demos like Dalle-mini by @borisdayma , Bloom by @BigscienceW , GPTJ by @AiEleuther @laion_ai , @StableDiffusion by compvis @StabilityAI @runwayml , Santacoder by @BigCodeProject & many more!
14
30
280
2
20
183
@AiEleuther
EleutherAI
5 months
The EMNLP camera-ready version of @RWKV_AI is now available on arXiv! Congrats again to @BlinkDL_AI @eric_alcaide @QuentinAnthon15 and the rest of the team on the first successful scaling of RNNs to the ten billion parameter regime! A 🧵
Tweet media one
4
28
164
@AiEleuther
EleutherAI
7 months
The Foundation Model Transparency Index by @StanfordCRFM purports to be an assessment of how transparent popular AI models are. Unfortunately its analysis is quite flawed in ways that minimize its usefulness and encourage gamification
7
36
152
@AiEleuther
EleutherAI
2 years
The latest paper in EleutherAI's close collaboration with @mark_riedl 's lab on computational storytelling shows how to use a CLIP-like contrastive model to guide the generation of natural language stories to meet human preferences.
@_akhaliq
AK
2 years
Robust Preference Learning for Storytelling via Contrastive Reinforcement Learning abs:
Tweet media one
1
58
305
3
30
135
@AiEleuther
EleutherAI
1 year
If you have substantially contributed to an ML training run that required multiple compute nodes, we would like to interview you! Email contact@eleuther.ai with your resume, details about the training run, and a short description of your current interests. More jobs coming soon!
8
23
117
@AiEleuther
EleutherAI
1 year
Our recent blog post contained a meme about code golfing, inspired by a paper that bragged about reaching 80%+ on ImageNet with code that fit in a tweet. In the past 24 hours we've received five emails with code beating Gao (2021), with the current record holder being 260 bytes:
Tweet media one
2
11
110
@AiEleuther
EleutherAI
3 months
Congratulations to our friends at @allen_ai on joining (along with EleutherAI and @llm360 ) the tiny club of organizations that have trained a large language model with: 1. Public training data 2. Partially trained checkpoints 3. Open source licensing on model weights
@allen_ai
Allen Institute for AI
3 months
OLMo is here! And it’s 100% open. It’s a state-of-the-art LLM and we are releasing it with all pre-training data and code. Let’s get to work on understanding the science behind LLMs. Learn more about the framework and how to access it here:
29
353
1K
1
14
106
@AiEleuther
EleutherAI
6 months
How can we talk about the way AI chat bots behave without falling into false anthropomorphic assumptions? In our latest paper we explore role-play as a framework for understanding chatbots without falsely ascribing human characteristics to language models
3
21
108
@AiEleuther
EleutherAI
10 months
We're glad to share our work on Minetester, a fully open RL framework we've been working on as part of a larger alignment research agenda.
1
23
107
@AiEleuther
EleutherAI
9 months
A few days ago we had the first meeting of our newest reading group, focusing on Mixture of Experts (MoE) models. Check out the recording, and drop by our Discord server to join the next meeting!
0
14
97
@AiEleuther
EleutherAI
6 months
Great to see our work with @NousResearch and @EnricoShippole on context length extension highlighted at @MistralAI 's presentation at AI Pulse. And a very deserved shout-out to @huggingface and @Teknium1 as well!
Tweet media one
3
10
94
@AiEleuther
EleutherAI
2 years
The first major codebase to come out of @carperai , our Reinforcement Learning from Human Feedback (RLHF) lab. Previous work by @OpenAI and @AnthropicAI has made it clear that RLHF is a promising technology, but a lack of released tools and frameworks makes using it challenging
1
11
87
@AiEleuther
EleutherAI
1 year
We are discussing ramping up our public education efforts. What topics regarding LLMs and other large-scale AI technologies would you like to see more lay-accessible blog posts, infographics, etc. about?
19
7
88
@AiEleuther
EleutherAI
7 months
Read more about our team and collaborators’ work on Llemma, powerful domain-adapted base models for mathematics! Blog post: Models/data/code: 1/n
1
32
83
@AiEleuther
EleutherAI
3 months
Amazing work by the @CohereForAI team! Dataset paper: Model paper:
@CohereForAI
Cohere For AI
3 months
Today, we’re launching Aya, a new open-source, massively multilingual LLM & dataset to help support under-represented languages. Aya outperforms existing open-source models and covers 101 different languages – more than double the number covered by previous models.
77
382
1K
1
19
77
@AiEleuther
EleutherAI
1 year
We are very excited to share the results of our collaboration with @farairesearch on developing tooling for understanding how model predictions evolve over the course of training. These ideas are already powering our ELK research, so expect more soon!
@norabelrose
Nora Belrose
1 year
Ever wonder how a language model decides what to say next? Our method, the tuned lens (), can trace an LM’s prediction as it develops from one layer to the next. It's more reliable and applies to more models than prior state-of-the-art. 🧵
Tweet media one
18
180
928
1
15
72
@AiEleuther
EleutherAI
1 year
We believe that building a robust, interoperable research community requires collaboration. @huggingface has been doing a phenomenal job organizing multilateral collaborations and we're excited to continue to participate. Congrats to @haileysch__ and the entire @BigCodeProject !
@BigCodeProject
BigCode
1 year
Introducing: 💫StarCoder StarCoder is a 15B LLM for code with 8k context and trained only on permissive data in 80+ programming languages. It can be prompted to reach 40% pass@1 on HumanEval and act as a Tech Assistant. Try it here: Release thread🧵
Tweet media one
76
671
3K
1
15
72
@AiEleuther
EleutherAI
7 months
We’ve trained and released Llemma, strong base LMs for mathematics competitive with the best similar closed+unreleased models. We hope these models + code will serve as a powerful platform for enabling future open Math+AI research!
@zhangir_azerbay
Zhangir Azerbayev
7 months
We release Llemma: open LMs for math trained on up to 200B tokens of mathematical text. The performance of Llemma 34B approaches Google's Minerva 62B despite having half the parameters. Models/data/code: Paper: More ⬇️
Tweet media one
11
138
565
1
15
70
@AiEleuther
EleutherAI
1 year
Great to see @CerebrasSystems building on top of the Pile and releasing these open source! Cerebras-GPT is Chinchilla-optimal up to 13B parameters. A nice complement to our Pythia suite, allowing for comparison of the effect of different training regimes on model behavior
@CerebrasSystems
Cerebras
1 year
🎉 Exciting news! Today we are releasing Cerebras-GPT, a family of 7 GPT models from 111M to 13B parameters trained using the Chinchilla formula. These are the highest accuracy models for a compute budget and are available today open-source! (1/5) Press:
32
341
1K
2
10
70
@AiEleuther
EleutherAI
9 months
Amazing news for our close partner in research and major donor, @huggingface . We've been thrilled to work with HF on projects like BLOOM and the Open LLM Leaderboard, and are excited to continue working with them to advance open AI research and the open source ecosystem.
@ClementDelangue
clem 🤗
9 months
Super excited to welcome our new investors @SalesforceVC , @Google , @amazon , @nvidia , @AMD , @intel , @QualcommVenture , @IBM & @sound_ventures_ who all participated in @huggingface ’s $235M series D at a $4.5B valuation to celebrate the crossing of 1,000,000 models, datasets and apps
Tweet media one
244
325
2K
0
4
68
@AiEleuther
EleutherAI
1 year
Huge shout out to the donors who have helped us get to where we are today and where we will go next: @StabilityAI @huggingface @CoreWeave @natfriedman @LambdaAPI and @canva And finally, come hang out in our online research lab! We can't wait to meet you.
2
3
68
@AiEleuther
EleutherAI
1 year
Very exciting work from @databricks ! We’re excited to see GPT-J continuing to power open source innovation close to two years after we released it.
@matei_zaharia
Matei Zaharia
1 year
Building a ChatGPT-like LLM might be easier than anyone thought. At @Databricks , we tuned a 2-year-old open source model to follow instructions in just 3 hours, and are open sourcing the code. We think this tech will quickly be democratized.
43
513
3K
2
8
64
@AiEleuther
EleutherAI
4 months
We are excited to join other leaders in artificial intelligence in partnering with @NSF to launch the National AI Research Resource (NAIRR), a shared infrastructure that will promote access to critical resources necessary to power AI research.
@NSF
U.S. National Science Foundation
4 months
NSF and its partners are proud to launch the National AI Research Resource pilot. Its goal? To democratize the future of #AI research & development by offering researchers & educators advanced computing, datasets, models, software, training & user support.
Tweet media one
23
84
246
3
7
63
@AiEleuther
EleutherAI
2 years
Reinforcement Learning from Human Feedback is an allegedly powerful technology for language models, but one that has so far been kept out of the hands of most researchers. We are thrilled to be working on bringing the ability to study and evaluate these models to the mainstream.
0
13
61
@AiEleuther
EleutherAI
3 months
Kyle is one of four members of our community without a PhD who currently have their first first-author paper under review! We view providing this training and mentorship as an important part of our public service.
@KyleDevinOBrien
Kyle O'Brien
3 months
We are grateful to EleutherAI for permitting access to their compute resources for initial experiments. The welcome and open research community on the EleutherAI Discord was especially helpful for this project and my growth as a scientist. 😊
0
0
10
4
4
61
@AiEleuther
EleutherAI
3 months
Another day, another math LM bootstrapping its data work off of the work done by the OpenWebMath and Llemma teams. That makes three in the past week! Open data work is 🔥 Not only do people use your data, but high quality data work has an enduring impact on data pipelines.
@_akhaliq
AK
3 months
AutoMathText Autonomous Data Selection with Language Models for Mathematical Texts paper page: dataset: . To improve language models' proficiency in mathematical reasoning via continual pretraining, we introduce a novel strategy
Tweet media one
1
18
85
3
10
58
@AiEleuther
EleutherAI
26 days
An essential blocker to training LLMs on public domain books is not knowing which books are in the public domain. We're working on it, but it's slow and costly... if you're interested in providing support reach out!
@Is_Dan_Bull
Daniel Bullock
26 days
@BlancheMinerva @rom1504 Indeed, these would be *extremely* valuable data resources. The databases on are, unfortunately, ununified and the records themselves seem anemic. Somewhat odd considering USPTO has flagship datasets (available via NAIRR). Greater financial incentives?
1
0
0
2
13
56
@AiEleuther
EleutherAI
7 months
Releasing data is amazing, but tools like these that help people make sense of the data are arguably an even more important step forward for data transparency. We're thrilled to see our community continue to lead by example when it comes to transparent releases.
@keirp1
Keiran Paster
7 months
We also made an @nomic_ai Atlas map of OpenWebMath so you can explore the different types of math and scientific data present in the dataset:
3
15
64
1
13
55
@AiEleuther
EleutherAI
29 days
We are excited to see torchtune, a newly announced PyTorch-native finetuning library, integrate with our LM Evaluation Harness library for standardized, reproducible evaluations! Read more here: Blog: Thread:
@kakemeister
Kartikay Khandelwal
29 days
torchtune provides: - LLM implementations in native-PyTorch - Recipes for QLoRA, LoRA and full fine-tune - Popular dataset-formats and YAML configs - Integrations with @huggingface Hub, @AiEleuther Eval Harness, bitsandbytes, ExecuTorch and many more [3/5]
1
3
22
0
7
56
@AiEleuther
EleutherAI
1 year
A very interesting analysis from @ZetaVector looks at how the most cited papers each year break down. We’re especially proud of this statistic: Almost 20% of papers with EleutherAI authors were in the top 100 most cited papers of their year. Full report:
@ZetaVector
Zeta Alpha
1 year
And fixed an issue that caused @AiEleuther to miss their spot as the second most effective in impact.
Tweet media one
1
0
7
3
8
55
@AiEleuther
EleutherAI
1 year
Very glad to see this. “Public release” of models doesn’t mean much at this scale if you can’t provide a free API, as almost nobody can afford to deploy the model. Great work by @huggingface and @Azure making this happen and keeping it supported
@julien_c
Julien Chaumond
1 year
BLOOM API is back online 🌸🌸🌸🔥 Thanks @Azure for the support
2
12
75
0
5
52
@AiEleuther
EleutherAI
1 year
Interested in Mixture-of-Experts models but don't want to train one from scratch? Check out the latest from @arankomatsuzaki , who spent his internship @GoogleAI figuring out how to convert existing dense models to MoE ones.
@arankomatsuzaki
Aran Komatsuzaki
1 year
We have released "Sparse Upcycling: Training Mixture-of-Experts from Dense Checkpoints"! Our method converts a pretrained dense model into a MoE by copying the MLP layers and continuing to train it, which outperforms continued dense training. (1/N)
Tweet media one
11
82
390
0
6
53
@AiEleuther
EleutherAI
1 year
most NLP researchers had a very minimal understanding of the engineering undertaking required to train such models or their capabilities & limitations. We started as a ragtag group nobody had heard of, and within a year had released the largest OSS GPT-3-style model in the world.
1
0
51
@AiEleuther
EleutherAI
7 months
A huge thank you to everyone who has helped make our training and evaluation libraries some of the most popular in the world. Especially @QuentinAnthon15 's work leading GPT-NeoX and @haileysch__ @lintangsutawika @BlancheMinerva and @jonbtow for their eval work over the years
Tweet media one
Tweet media two
2
8
52
@AiEleuther
EleutherAI
10 months
@Meta @ylecun @paperswithcode @huggingface @StabilityAI If “open source” is to mean anything, we must stand with @OpenSourceOrg and call out corporate misinformation. You don’t need to license your models open source. It may even be the best choice to *not* do so. But if you don’t, you shouldn’t lie and say you did.
4
11
49
@AiEleuther
EleutherAI
1 year
Claiming that you can match a transformer's performance is nothing new, and plenty of other papers put forth that claim. What makes RWKV special is that we actually train models up to 14B params and show consistently competitive performance with token-matched transformers!
Tweet media one
1
9
50
@AiEleuther
EleutherAI
7 months
A really phenomenal deep dive into LLM evaluations and a good illustration of why 1. real-life applications should be evaluated in the deployment context and 2. open access to models and evaluation code is essential for understanding the claims made in papers
@a13xba
Alex
7 months
𝗗𝗼𝗻’𝘁 𝗯𝗹𝗶𝗻𝗱𝗹𝘆 𝘁𝗿𝘂𝘀𝘁 𝘁𝗵𝗲 𝗢𝗽𝗲𝗻 𝗟𝗟𝗠 𝗟𝗲𝗮𝗱𝗲𝗿𝗯𝗼𝗮𝗿𝗱! We used @try_zeno to explore the Open LLM Leaderboard data. Spoiler: Unless you use LLMs for multiple choice questions, these benchmarks aren’t that helpful. Zeno Report:
4
20
84
0
9
50
@AiEleuther
EleutherAI
8 months
Congrats to everyone who won one of these grants. The open source community desperately needs more funding so that people can be *professional* open source engineers and researchers, lest the only end-game be a closed-source job.
@BornsteinMatt
Matt Bornstein
9 months
[New program] a16z Open Source AI Grants Hackers & independent devs are massively important to the AI ecosystem. We're starting a grant funding program so they can continue their work without pressure to generate financial returns.
71
270
1K
0
5
49
@AiEleuther
EleutherAI
11 months
A little over a month ago, @Vermeille_ showed up in our discord server with a simple question: can CFG be applied to LLMs? Probably, but the devil’s in the details. So we sat down to figure those details out. Check out his new paper for more ⬇️⬇️⬇️
@Vermeille_
Guillaume "Vermeille" Sanchez
11 months
We borrowed CFG from vision and ran it with LLMs. We get increased control, and benchmark increases similar to a model twice the size. Ready for all your models (incl. chatbots!): no special training or fine tuning required. thx @AiEleuther !
17
89
394
1
8
49
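The core trick mirrors classifier-free guidance from diffusion models: extrapolate next-token logits away from an unconditional (or negative-prompt) prediction. A minimal sketch of that idea; the function name and default scale are illustrative, and the paper's exact formulation may differ:

```python
import torch

def cfg_logits(logits_cond: torch.Tensor,
               logits_uncond: torch.Tensor,
               guidance_scale: float = 1.5) -> torch.Tensor:
    """Classifier-free guidance on next-token logits: extrapolate from
    the unconditional prediction toward the prompt-conditioned one.
    guidance_scale = 1.0 recovers ordinary conditional sampling."""
    return logits_uncond + guidance_scale * (logits_cond - logits_uncond)
```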
@AiEleuther
EleutherAI
1 year
It’s been a pleasure to watch Theodore’s ideas develop over the past two years. Definitely check out his paper on finetuning LLMs into “text-to-structure” models, and how to use them to design tools that are useful to architects.
@TheodoreGalanos
Theodore Galanos
1 year
It's finally out! After almost 2 years of delay, our paper on Architext, the first, open-source, language model trained for Architectural design, is now on arxiv. In the unlikely event you're curious to read it, you can find it here: Quick thread ↓
Tweet media one
26
78
535
0
11
47
@AiEleuther
EleutherAI
1 year
This will enable us to do much more, and we look forward to building a world class research group for public good! Led by Stella Biderman @BlancheMinerva as Executive Director and Head of Research, Curtis Huebner as Head of Alignment, and Shiv Purohit as Head of Engineering.
3
1
45
@AiEleuther
EleutherAI
5 months
Interested in meeting up with EleutherAI at #NeurIPS2023 ? Over a dozen members of our community will be there to present ten papers, including @BlancheMinerva @norabelrose @lcastricato @QuentinAnthon15 @arankomatsuzaki @KyleDevinOBrien @zhangir_azerbay @iScienceLuvr @LauraRuis
1
5
42
@AiEleuther
EleutherAI
1 year
RNNs struggle to scale because of how they parallelize, but by making the time decay of each channel data-independent, we are able to parallelize RWKV the same way transformers are during training! After training, it can be used like an RNN for inference.
Tweet media one
Tweet media two
1
4
43
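The parallelization claim can be seen in a toy version of such a recurrence: because the per-channel decay does not depend on the input, each output is a fixed exponentially weighted sum over past tokens, so the whole sequence can be computed at once during training. A heavily simplified sketch (not RWKV's actual WKV operator):

```python
import numpy as np

def rnn_mode(keys, values, decay):
    """Toy channel-wise linear recurrence with data-independent decay.
    keys, values: (T, C) arrays; decay: (C,) learned per-channel rates.
    Heavily simplified relative to RWKV's real WKV operator."""
    T, C = keys.shape
    state = np.zeros(C)
    out = np.empty((T, C))
    for t in range(T):
        # exp(-decay) is input-independent, so this loop unrolls into a
        # weighted sum over past positions, computable in parallel.
        state = np.exp(-decay) * state + np.exp(keys[t]) * values[t]
        out[t] = state
    return out
```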
@AiEleuther
EleutherAI
1 year
Interested in studying formal proofs with LLMs? Check out ProofNet, a new benchmark for theorem proving and autoformalization of undergraduate-level mathematics by @zhangir_azerbay @haileysch__ and others. Follow-up work is already in progress!
@zhangir_azerbay
Zhangir Azerbayev
1 year
How good are language models at formalizing undergraduate math? We explore this in "ProofNet: autoformalizing and formally proving undergraduate-level mathematics" Thread below. 1/n
3
54
180
0
7
42
@AiEleuther
EleutherAI
1 year
This is some really phenomenal work out of @StanfordHAI . Evaluation work, like data work, is massively understudied and undervalued. But work like this has far more impact than a half dozen medium papers about minor tweaks to transformer architectures
@Tianyi_Zh
Tianyi Zhang
1 year
Two lessons we learned through HELM (Sec 8.5.1; ): 1. CNN/DM and XSum reference summaries are worse than summaries generated by finetuned LMs and zero-/few-shot large LMs. 2. Instruction tuning, not scale, is the key to “zero-shot” summarization.
3
27
110
1
7
41
@AiEleuther
EleutherAI
9 months
Congrats to @StabilityAI and their collaborators. We are excited to see people continuing to push for non-English non-Chinese LLM research, and thrilled that they're finding our libraries including GPT-NeoX and lm-eval useful! To get started on your next LLM project, check out 👇
@StabilityAI
Stability AI
9 months
Today, we are releasing our first Japanese language model (LM), Japanese StableLM Alpha. It is currently the best-performing openly available LM created for Japanese speakers! ↓
Tweet media one
16
56
287
1
6
40
@AiEleuther
EleutherAI
1 year
@databricks GPT-J-6B might be “old” but it’s hardly slowing down. Month after month it’s among the most downloaded GPT-3 style models on @huggingface , and no billion+ param model has ever come close (“gpt2” is the 125M version, not the 1.3B version).
Tweet media one
1
5
37
@AiEleuther
EleutherAI
2 years
@BigscienceW This is just the beginning of our work on non-English and multilingual NLP. We have a 6B Korean model currently training, and plans to expand to East Asian and Nordic language families next! Keep an eye on our GitHub or stop by #polyglot on our Discord!
2
1
36
@AiEleuther
EleutherAI
3 months
EleutherAI is excited to collaborate with NIST in its newly formed AI Safety Institute Consortium (AISIC) to establish a new measurement science for safe AI systems. See the official announcement here: #AISIC @NIST @CommerceGov
1
5
37
@AiEleuther
EleutherAI
1 year
As access to LLMs has increased, our research has shifted to focus more on interpretability, alignment, ethics, and evaluation of AIs. We look forward to continuing to grow and adapt to the needs of researchers and the public Check out our latest work at
1
2
37
@AiEleuther
EleutherAI
9 months
HF transformers, Megatron-DeepSpeed, and now Lit-GPT... what will be the next framework to support our language model evaluation harness?
@LightningAI
Lightning AI ⚡️
9 months
Use Lit-GPT to evaluate and compare LLMs on 200+ tasks with a single command. Try it ➡️ #MachineLearning #LLM #GPT
Tweet media one
4
9
47
3
3
36
@AiEleuther
EleutherAI
9 months
We are thrilled to share the latest in our collaboration with @EnricoShippole and @NousResearch on sequence length extension. We're now pushing sequence lengths that will enable work in malware detection and biology that is currently hamstrung by sequence length limitations!
@EnricoShippole
EnricoShippole
9 months
Releasing Yarn-Llama-2-13b-128k, a Llama-2 model, trained for 128k context length using YaRN scaling. The model was trained in collaboration with u/bloc97 and @theemozilla of @NousResearch and @Void13950782 of @AiEleuther .
Tweet media one
28
174
789
0
3
36
@AiEleuther
EleutherAI
5 months
RWKV substantially lags behind S4 on the long range arena benchmark, as well as subsequent work by @_albertgu et al. @HazyResearch such as SGConv and Mamba. It remains to be seen if that's a killer for NLP applications. Note that the scores are nearly identical for the text task.
Tweet media one
1
4
36
@AiEleuther
EleutherAI
1 year
Public release makes AI models better, more diverse, and spreads their benefits more widely.
@jlondonobo
Jose Londono
1 year
Just a few days into @huggingface and @LambdaAPI 's Whisper fine-tuning event and have already seen huge breakthroughs in multilingual ASR. Very smart people working on this. Here's the SOTA whisper-based module I fine-tuned for Portuguese 🇧🇷🇵🇹
1
9
64
2
8
35
@AiEleuther
EleutherAI
1 year
Transparency about whose data is contained in datasets is an essential first step towards establishing meaningful provenance and consent. We applaud @BigCodeProject 's efforts in this regard and look forward to implementing similar techniques for our future datasets.
@julien_c
Julien Chaumond
1 year
Yay, I have 29 of my GH repositories included in The Stack 😎 Prepare for some very good quality codes 🤪 The Stack:
Tweet media one
5
3
38
0
10
34
@AiEleuther
EleutherAI
5 months
Very cool work! @_albertgu has been pushing on state-space models for some time now and the release of billion-parameter scale models is a big step forward for this line of work. We look forward to the community testing the models out!
@_albertgu
Albert Gu
5 months
Quadratic attention has been indispensable for information-dense modalities such as language... until now. Announcing Mamba: a new SSM arch. that has linear-time scaling, ultra long context, and most importantly--outperforms Transformers everywhere we've tried. With @tri_dao 1/
Tweet media one
53
422
2K
0
3
36
@AiEleuther
EleutherAI
2 months
A new minor version release, 0.4.2, of the lm-evaluation-harness is available on PyPI! 1/n
1
6
35
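For example, a typical evaluation via the harness's Python entry point (a sketch based on our understanding of the 0.4.x API; check the repository docs for the authoritative interface):

```python
import lm_eval

# Evaluate a Hugging Face model on one benchmark task.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["lambada_openai"],
    batch_size=8,
)
print(results["results"])
```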
@AiEleuther
EleutherAI
1 year
Congrats to @BlinkDL_AI and the team :) We hope to have a paper about RWKV out by the end of the month!
@huggingface
Hugging Face
1 year
The first RNN in transformers! 🤯 Announcing the integration of RWKV models in transformers with @BlinkDL_AI and RWKV community! RWKV is an attention free model that combines the best from RNNs and transformers. Learn more about the model in this blogpost:
Tweet media one
19
268
1K
0
7
33
@AiEleuther
EleutherAI
10 months
If you’re attending #ACL2023NLP or #icml2023 don't miss our seven exciting papers on crosslingual adaption of LLMs, the Pythia model suite, novel training methodologies for LLMs, data trusts, and more! 🧵
1
6
33
@AiEleuther
EleutherAI
10 months
Even after only 55% of the training, @BlinkDL_AI ’s multilingual RWKV “World” model is the best open source Japanese LLM in the world! Check out the paper: Code and more models can be found at:
@BlinkDL_AI
BlinkDL
10 months
The JPNtuned 7B #RWKV World is the best open-source Japanese LLM 🚀Runner: Model (55% trained, finishing in a few days): More languages are coming🌍RWKV is 100% RNN
Tweet media one
1
43
140
1
8
33
@AiEleuther
EleutherAI
5 months
Looking for something to check out on the last day of #NeurIPS2023 ? Come hang out with EleutherAI @solarneurips ! @BlancheMinerva is speaking on a panel, and @jacob_pfau @alexinfanger Abhay Sheshadri, Ayush Panda, Curtis Huebner and @_julianmichael_ have a poster in Room R06-R09
Tweet media one
Tweet media two
0
5
31
@AiEleuther
EleutherAI
2 months
Interested in practical strategies to continually pre-train existing models on new data? Take a look at the recent paper from @AiEleuther and @irinarish 's CERC lab, produced as part of our joint INCITE grant!
@benjamintherien
Benjamin Thérien
2 months
Interested in seamlessly updating your #LLM on new datasets to avoid wasting previous efforts & compute, all while maintaining performance on past data? Excited to present Simple and Scalable Strategies to Continually Pre-train Large Language Models! 🧵 1/N
Tweet media one
4
49
158
2
5
30
@AiEleuther
EleutherAI
7 months
This is deeply necessary work and a heroic effort by Shayne et al. "This is the best NLP data work of 2023." @BlancheMinerva "If there's anything less glamorous yet higher-impact in ML than looking at the data, it's doing due diligence on licensing." @haileysch__
@ShayneRedford
Shayne Longpre
7 months
📢Announcing the🌟Data Provenance Initiative🌟 🧭A rigorous public audit of 1800+ instruct/align datasets 🔍Explore/filter sources, creators & license conditions ⚠️We see a rising divide between commercially open v closed licensed data 🌐: 1/
10
151
463
1
6
31
@AiEleuther
EleutherAI
1 year
The world has changed quite a lot since we first got started. When EleutherAI was founded, the largest open source GPT-3-style language model in the world had 1.5B parameters. GPT-3 itself was not available for researchers to study without special access from OpenAI, and
1
0
31
@AiEleuther
EleutherAI
2 years
Benchmark results show that the models have performance comparable to or better than the best publicly available Korean language models, including Facebook's 7.5B xGLM and Kakao Brain's 6.0B koGPT model. We do not show @BigscienceW 's BLOOM models as they are not trained in Korean
Tweet media one
Tweet media two
2
1
31
@AiEleuther
EleutherAI
6 months
Interested in our recent paper "Llemma: An Open Language Model For Mathematics"? Check out this summary by @unboxresearch , or dig into our work directly. Paper: Code:
@unboxresearch
Unbox Research
6 months
I can imagine a future where advanced mathematics has completely changed. What makes math challenging today is the ability to learn abstract technical concepts, as well as the ability to construct arguments that solve precise logical problems. [article: ]
1
6
25
2
7
30
@AiEleuther
EleutherAI
1 year
@arankomatsuzaki has graduated from explaining other peoples’ papers in Discord and on Twitter to doing it at conferences when the author misses their poster session
@MichaelTrazzi
Michaël Trazzi
1 year
Aran Komatsuzaki giving walkthroughs of the codeRL paper before the author arrives. After 10 minutes of SBFing his way into answering poster questions he revealed he was not the author and everyone lost their mind (Poster 138 #NeurIPS2022 )
9
33
590
1
2
31
@AiEleuther
EleutherAI
5 months
It was great to see a lot of excitement about attention-free models @NeurIPSConf ! We had great conversations with many people interested in next-gen architectures for language models. Pic from "Systems for foundation models and foundation models for systems" by Chris Ré
Tweet media one
1
3
30
@AiEleuther
EleutherAI
1 year
EleutherAI is blessed to have ungodly amounts of compute for a research non-profit. Part of that blessing though is a responsibility to develop things that are interesting and useful not just to us, but to the many researchers who wouldn’t have been able to do this themselves.
@AiEleuther
EleutherAI
1 year
We are currently using these models to investigate a variety of phenomena (expect initial papers within the month!), but are making the models public now because we believe that these models will be widely useful to the NLP community writ large and don't want to make others wait
1
0
14
0
2
30
@AiEleuther
EleutherAI
5 months
We present the first compute-optimal scaling laws analysis of a large RNN, finding highly predictable scaling across runs. Unfortunately we don't sample densely enough to estimate the optimal tokens-per-parameter ratio, but we plan to in future work.
Tweet media one
1
4
29
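The kind of fit involved, on synthetic data (the functional form is the standard saturating power law; all constants here are made up for illustration and are not the paper's results):

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(C, a, b, c):
    """Saturating power law L(C) = a * C^(-b) + c."""
    return a * C ** (-b) + c

compute = np.logspace(18, 22, 8)               # training FLOPs
loss = power_law(compute, 3e3, 0.15, 1.9)      # pretend measurements
loss *= 1 + 0.01 * np.random.randn(len(loss))  # add noise

(a, b, c), _ = curve_fit(power_law, compute, loss, p0=(1e3, 0.1, 2.0))
print(f"L(C) ≈ {a:.3g} * C^(-{b:.3g}) + {c:.3g}")
```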
@AiEleuther
EleutherAI
1 year
Excellent and timely reminder from the FTC. The question, as always, is whether the USG will be able to bring itself to levy meaningful penalties that actually deter illegal behavior by companies. h/t: @emilymbender
2
8
29
@AiEleuther
EleutherAI
5 months
Really great work by @guitaricet that we were thrilled to sponsor.
@guitaricet
Vlad Lialin
5 months
Parameter-efficient methods revolutionized the accessibility of LLM fine-tuning, but can they do pre-training? Today at NeurIPS Workshop on Advancing Neural Network Training we present ReLoRA — the first PEFT method that can be used for LLMs at scale!
Tweet media one
7
48
221
0
4
28
@AiEleuther
EleutherAI
1 year
We are also introducing DeeperSpeed v2.0, which will be synced with the latest upstream DeepSpeed. It also provides GPT-NeoX-specific bugfixes and features and additional optimizations specific to EleutherAI's HPC providers ( @StabilityAI @CoreWeave @ORNL )
1
3
28
@AiEleuther
EleutherAI
3 months
We envision a world where "safety" isn't dictated by model developers but is something that downstream deployers have agency over. For a small step in this direction, check out the latest work by @lcastricato @haileysch__ @BlancheMinerva and their collaborators.
@synth_labs
SynthLabs
3 months
PINK ELEPHANTS! 🐘 Now, don’t think about it. Chatbots also find this supremely difficult. Ask one of the most popular open source models NOT to talk about pink elephants, and it will fail 34% of the time. In our new paper, we address this problem. 1/N
Tweet media one
4
21
76
0
7
28
@AiEleuther
EleutherAI
7 months
The biggest issue with the FMTI is that it's not what it purports to be: instead of focusing on transparency, most of the questions are closer to "being a good product." An extremely transparent LLM can score as low as 30/100 on the FMTI! See how here:
1
1
27
@AiEleuther
EleutherAI
3 months
@ClementDelangue
clem 🤗
3 months
We just crossed 100,000 organizations on HF! Some of my favorites: - The MLX community for on-device AI: - The @AiEleuther org with over 150+ datasets: - The @Bloomberg org to show big financial institutions can use the hub:
18
36
271
0
7
27
@AiEleuther
EleutherAI
10 months
If you're interested in eliciting and editing knowledge in neural networks, don't miss @norabelrose 's talk on her recent and upcoming research. These ideas form one of the core interpretability research areas at EleutherAI.
@CohereForAI
Cohere For AI
10 months
Thank you to @norabelrose who gave an engaging presentation on Concept Erasure and Elicit Latent Knowledge to our open science community this week. ✨ Thanks @oohaijen and @jonas_kg for hosting. 📹 Catch the replay here
1
2
11
1
4
27
@AiEleuther
EleutherAI
9 months
We're thrilled to be sponsoring @cv4ecology with >12,000 A6000-hours of compute. Sharing innovations in ML beyond "core ML" applications is an essential and underfunded job that we are proud to play a part in. The world needs better ecological work more than another LLM.
@cv4ecology
CV4Ecology Workshop
9 months
Week 1 (of 3) in the books at #CV4Ecology2023 ! We're working hard while still enjoying the California sunshine, and celebrating our wins as they come 😀!
Tweet media one
0
10
43
0
1
26
@AiEleuther
EleutherAI
7 months
Instead, it shoehorns questions about "impact", "risks" and "mitigation" under the umbrella of "transparency." These are important things, certainly. But they're not transparency and pretending they are muddles the conversation about responsible AI.
2
2
26
@AiEleuther
EleutherAI
1 year
@QuentinAnthon15 @BlancheMinerva @haileysch__ This is the first in a series of blog posts on implementation details for large scale distributed DL that are far too often skimmed over in papers and articles. Stay tuned for more, including how to choose your parallelization and a deeper dive on FLOPs, latency, and perf metrics
0
3
26
@AiEleuther
EleutherAI
1 year
RWKV isn’t without its flaws. While we do approximately match the performance of transformers, our anecdotal experience is that it’s more sensitive to prompts and struggles more than traditional transformers do to incorporate very long-range information.
1
0
26
@AiEleuther
EleutherAI
2 months
If you're interested in training LLMs on @AMD GPUs, check out our latest collaboration with @ORNL @OLCFGOV . We train materials science models and experiment with a variety of architecture and parallelism settings to determine best practices on the USG's largest supercomputer.
@QuentinAnthon15
Quentin Anthony
2 months
How do LLMs scale on AMD GPUs and HPE Slingshot 11 interconnects? We treat LLMs as a systems optimization problem on the new #1 HPC system on the Top500, ORNL Frontier. Learn more in our paper:
Tweet media one
2
22
80
2
2
25
@AiEleuther
EleutherAI
1 year
🧵 on @AnthropicAI ’s alternative idea for how to add human feedback to LLMs
@iScienceLuvr
Tanishq Mathew Abraham, Ph.D.
1 year
Claude, @AnthropicAI 's powerful ChatGPT alternative, was trained with "Constitutional AI". Constitutional AI is particularly interesting since it uses less human feedback than other methods, making it more scalable. Let's dive into how Constitutional AI works in 13 tweets!
19
121
926
1
7
25
@AiEleuther
EleutherAI
1 year
And coming soon: Support for @MetaAI 's LLaMA, allowing you to finetune the released checkpoints or train your own model using their architecture in the GPT-NeoX library! Note that the LLaMA codebase does not allow for finetuning natively. (by @zhansheng )
1
2
25
@AiEleuther
EleutherAI
10 months
@Meta If @Meta @ylecun @paperswithcode believe in ideals of openness, they need to stop and correct their disinformation. It’s okay to not release OS models. It’s not okay to pretend you did. Unfortunately, it seems open source washing is part of their AI business model.
1
4
24
@AiEleuther
EleutherAI
10 months
@Meta Doing this once is a mistake. Doing this twice is maybe still a mistake. But at this point Meta has a well established pattern of lying about the licensing of their models. They’ve done it for every 20B+ model they’ve ever released.
1
5
24
@AiEleuther
EleutherAI
1 year
Note that this paper is a work in progress, and its release is forced on us by anonymity deadlines. We are planning on continuing to improve and update the paper (incl. explicit scaling laws) and you can come to or (RWKV-specific)
1
1
25
@AiEleuther
EleutherAI
1 year
@huggingface @carperai @laion_ai @openbioml @StabilityAI @Mila_Quebec We believe in the importance of AI ethics and safety work, both in terms of preventing short-term harms and preempting longer-term ones. But we do not believe that the elite few at a couple of tech companies have the right to be the people who make these decisions for the world.
2
7
23
@AiEleuther
EleutherAI
1 year
Many of our intermediate results and existing analyses can be found on GitHub or in the EleutherAI Discord server. This is part of our ethos of doing "Science in the Open", recently presented by @zhansheng at the NeurIPS @broadercollabs workshop
1
3
24