AndriyMulyar Profile
AndriyMulyar

@andriy_mulyar

11,329 Followers · 550 Following · 201 Media · 3,109 Statuses

building tech that enables humans to interact with latent spaces 🗺️ founder / cto @ prev. ML Ph.D. Student at NYU Courant

New York, NY
Joined July 2019
@andriy_mulyar
AndriyMulyar
1 year
I'm excited to announce the release of GPT4All, a 7B param language model finetuned on a curated set of 400k GPT-3.5-Turbo assistant-style generations. We release 💰800k data samples💰 for anyone to build upon and a model you can run on your laptop! Real-time Sampling on M1 Mac
161
981
7K
@andriy_mulyar
AndriyMulyar
1 year
Announcing GPT4All-J: The First Apache-2 Licensed Chatbot That Runs Locally on Your Machine💥 Large Language Models must be democratized and decentralized.
84
625
3K
@andriy_mulyar
AndriyMulyar
1 year
Elite hackers have gotten gpt4all to run on a TI-84 calculator. AP Calculus exams will never be the same again.
Tweet media one
71
257
2K
@andriy_mulyar
AndriyMulyar
1 year
Gigantic Announcement for Language Models That Run on your CPU!💥📣 We are releasing: - GPT4All-Snoozy: the strongest local LLM that runs on your private CPU hardware! - The first local OS-native LLM app verified by Apple! Try it at:
Tweet media one
71
321
1K
@andriy_mulyar
AndriyMulyar
1 year
Serious question: What does an NLP Ph.D. student work on nowadays with the presence of closed-source GPT models that beat anything you can do in a standard academic lab? @sleepinyourhat @srush_nlp @chrmanning @mdredze @ChrisGPotts
136
199
1K
@andriy_mulyar
AndriyMulyar
1 year
Local LLMs just got 2x faster on M1/M2 Macs⚡ - Supports all LLaMA models - GPT4All exclusively supports Replit for code gen! This demo video is 13B parameters running on an M2 Macbook Pro with 16GB of RAM Run powerful, privacy-aware LLMs anywhere at
27
162
1K
@andriy_mulyar
AndriyMulyar
1 year
GPT4All and LLaMa.cpp Python Bindings Are Here 🐍💥 Over the weekend, an elite team of hackers in the gpt4all community created the official set of Python bindings for GPT4All. They will be maintained for llama.cpp compatibility going forward.
15
209
999
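For readers who want to try them, a minimal sketch of the gpt4all Python package (pip install gpt4all); the model filename is an assumption, and any model listed by the GPT4All client can be substituted.

```python
# Minimal sketch of the GPT4All Python bindings; runs entirely on CPU.
# The model filename is an assumption -- substitute any model the
# GPT4All client lists; it is downloaded on first use.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
response = model.generate("Explain what a local LLM is in one sentence.", max_tokens=64)
print(response)
```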
@andriy_mulyar
AndriyMulyar
1 year
Nearly a Petabyte of GPT4All Models Downloaded in 30 Days. This is why closed-source AI is on Capitol Hill. They cannot win. Open source will dominate in the limit.
Tweet media one
25
175
944
@andriy_mulyar
AndriyMulyar
1 year
LLMs on edge devices without internet are the future, join us to build it.
Tweet media one
26
80
953
@andriy_mulyar
AndriyMulyar
1 year
Very Big Announcement for Local LLM Devs💥 One line code change to use GPT4All in your existing app! Local LLMs are now compatible with a certain familiar API (and all of its software layers)
Tweet media one
39
174
919
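A hedged sketch of what that one-line change looks like with the pre-1.0 openai Python package; the port (4891) and model name are assumptions based on the GPT4All server-mode defaults documented around that time.

```python
# Hedged sketch: the "one line code change" is pointing api_base at a
# GPT4All chat app running in server mode. Port 4891 and the model name
# are assumptions; adjust to whatever your local server reports.
import openai

openai.api_base = "http://localhost:4891/v1"  # the one-line change
openai.api_key = "not-needed-for-a-local-server"

response = openai.Completion.create(
    model="ggml-gpt4all-j-v1.3-groovy",
    prompt="Name three uses for a local LLM.",
    max_tokens=64,
)
print(response["choices"][0]["text"])
```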
@andriy_mulyar
AndriyMulyar
1 year
gpt4all can now be run from any python script on CPU 🤯🚀 (kudos to elite hacker and historian @benmschmidt )
Tweet media one
23
135
902
@andriy_mulyar
AndriyMulyar
1 year
Official GPT4All Chat UI is out 💥 The elite team of hackers has not slept all week. This UI comes built-in with features that allow you to participate in the democratic process of developing large language models.
Tweet media one
29
162
861
@andriy_mulyar
AndriyMulyar
1 year
Chat with your data privately on CPU with GPT4All! 💥💬 - Open source - Drag and drop files into a directory that GPT4All will query for context when answering questions. - GPT4All cites its sources. Install the chat client from and go! How it works
Tweet media one
29
118
743
@andriy_mulyar
AndriyMulyar
1 year
No. That model is not better than ChatGPT-3.5. False hype does a big disservice to everyone working on this. Try it yourself. Open-source models currently surpass ChatGPT quality on small collections of individual tasks. Across the board, ChatGPT is a much better assistant model.
@ItakGol
Itamar Golan 🤓
1 year
I've been waiting for this 🤯 Open Source LLM Models Surpass GPT-3.5 🎉 In a groundbreaking development, a remarkable set of open-source LLM models has outperformed the capabilities of GPT-3.5. What truly amazed me is not only the exceptional performance of these models but
Tweet media one
41
350
2K
20
46
550
@andriy_mulyar
AndriyMulyar
1 year
The GPT4All movement has been the top trending Github repository worldwide for the last eight straight days. open source the data. open source the models. gpt4all.
Tweet media one
5
74
540
@andriy_mulyar
AndriyMulyar
1 year
Inspired by learnings from Alpaca, we carefully curated ~800k prompt-response samples to produce 430k high-quality assistant-style prompt/generation training pairs including code, dialogue, and stories. Detailed procedure for replication and data:
7
56
527
@andriy_mulyar
AndriyMulyar
1 year
Democratized AI Begins with Democratized Data! The GPT4All Open Source Datalake has launched!⛵💥 Find out how you can help democratize access to powerful local large language models by simply using them!
15
112
517
@andriy_mulyar
AndriyMulyar
1 year
Huge update on open source LLMz 💥 The Falcon model is now completely open source. Previously it was released under a license that required commercial royalty payments.
17
93
499
@andriy_mulyar
AndriyMulyar
1 month
How do models like GPT-4o and Meta’s Chameleon generate images? Answer: They don’t, they generate tokens. A short thread on multimodal tokenizers:
Tweet media one
10
52
486
@andriy_mulyar
AndriyMulyar
1 year
Huge upgrade for LLMs💥 You don't need to fine-tune! Augment your LLMs with memory with a powerful open source vector database.
13
88
468
@andriy_mulyar
AndriyMulyar
1 year
local llms nearly have Apple Silicon support with @ggerganov 's latest ggml version. gpt4all will soon support 40 tok/s inference of 7B transformer decoders on a Mac! open source the data open source the models gpt4all
13
65
465
@andriy_mulyar
AndriyMulyar
1 year
Have you heard of Deepscatter? 🗺️ Deepscatter is the only graphics engine that supports rendering billions of points in your web browser. It is open source for non-commercial use and built by Nomic's resident WebGL wizard @benmschmidt .
Tweet media one
7
85
454
@andriy_mulyar
AndriyMulyar
1 year
Google used Atlas to visualize its LLM embeddings 🗺️ - Find out what you can learn by interactively exploring 8M embeddings.
7
79
434
@andriy_mulyar
AndriyMulyar
1 year
Interactively Explore 21M Scientific Articles on One Screen 🗺️
Tweet media one
9
94
436
@andriy_mulyar
AndriyMulyar
1 year
Local LLMs work out of the box with @LangChainAI 🦜! Run chat in server mode and get started! Instructions:
Tweet media one
7
85
394
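A minimal sketch of using GPT4All as a LangChain LLM; the import path matches LangChain releases of that era, and the local model path is an assumption.

```python
# Hedged sketch of the LangChain GPT4All backend (era-appropriate import;
# newer releases move it to langchain_community.llms). The model path is
# an assumption -- point it at any downloaded GPT4All-compatible weights.
from langchain.llms import GPT4All

llm = GPT4All(model="./models/ggml-gpt4all-j-v1.3-groovy.bin")
print(llm("What is retrieval-augmented generation?"))
```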
@andriy_mulyar
AndriyMulyar
1 year
tell me you became an AI expert in November 2022 without telling me you became an AI expert in November 2022
@ItakGol
Itamar Golan 🤓
1 year
1/ Holy Moses 🤯 Are vector databases (Pinecone, Chroma...) soon to be DEAD? 🤔 Anthropic just expanded their Claude LLM's context window to 100K tokens, 3x GPT-4's not-yet-released 32K version. 🚀 Here is my full analysis ⤵️⤵️⤵️
Tweet media one
26
42
169
21
15
380
@andriy_mulyar
AndriyMulyar
1 year
The power of clean data: gpt4all beats chatgpt on certain hallucination benchmarks. 🗺️
Tweet media one
9
39
353
@andriy_mulyar
AndriyMulyar
1 year
Excited to announce that GPT4All is now an official Langchain backend! 💥 your own models. on your own hardware. gpt4all.
@LangChainAI
LangChain
1 year
Rather large 🦜🔗0.0.131 release! 🆓GPT4all model ( @nomic_ai ) 🦙Llama-cpp model ⏹️Support for @qdrant_engine local db 🌲Zilliz cloud ( @milvusio ) Vectorstore support 📧New OutlookMessage Document Loader 🕸️New Selenium Document Loader 🪟 Support for SQL views in SQLChain 🧵
12
84
589
12
38
346
@andriy_mulyar
AndriyMulyar
1 year
GPT4All on a Nintendo DS lite, DSi and 3DS New hardware? No problem.
Tweet media one
14
56
333
@andriy_mulyar
AndriyMulyar
9 months
gpt4all pre-release with mistral 7b running locally is ⚡. 34 tok/s on Mac metal. open-source and ships with support for nearly every GPU (amd, intel, nvidia, etc) you can try the nightly dev-build on discord.
17
30
317
@andriy_mulyar
AndriyMulyar
1 year
To create a gpt4all, you need to pre-train on trillions of tokens. we have the tokens. we have the gpus. we need your help to curate the terabytes of text. consider joining @nomic_ai to make history and open-source a powerful foundation model.
7
47
302
@andriy_mulyar
AndriyMulyar
1 year
The elite team of GPT4All community hackers is working tirelessly to address this. A GPT4All must run natively on All devices and be accessible to All. Remember, the web browser is the world's best distribution platform for software. Exciting announcements to come.
@BrianRoemmele
Brian Roemmele
1 year
@frhd27 Thank you. I am working on a free how to. Most folks have no ability to do many of the things in this link. It would add to the confusion. But some can just click your link and have at it if they are so inclined.
6
7
175
9
28
291
@andriy_mulyar
AndriyMulyar
1 year
Can someone point me to a deployed 'AI agent' that is found useful by *some* group of people outside of its developers?
32
8
287
@andriy_mulyar
AndriyMulyar
1 year
Then God said, "Let there be Typescript", and there was Typescript. Official GPT4All @typescript Bindings are out! The elite team of hackers moves fast. opensource the data. opensource the models. gpt4all.
5
52
279
@andriy_mulyar
AndriyMulyar
1 year
Big News for Open Source AI 🎉 I'm excited to announce that @nomic_ai is doubling down on its commitment to making AI systems more accessible and explainable with our latest $17M Series A led by @coatuemgmt .
41
32
273
@andriy_mulyar
AndriyMulyar
1 year
You can spin up your own hosted GPT4All on @modal_labs in 10 lines of code!
Tweet media one
3
59
265
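The referenced 10-line recipe is not reproduced here, but a rough sketch of the idea, assuming Modal's Stub/function API of that period (names and the image recipe are illustrative), looks like this:

```python
# Rough, hedged sketch of hosting GPT4All on Modal. Assumes the
# Stub/function API Modal exposed at the time; exact method names have
# shifted across Modal versions, so treat this as illustrative only.
import modal

image = modal.Image.debian_slim().pip_install("gpt4all")
stub = modal.Stub("gpt4all-demo", image=image)

@stub.function()
def generate(prompt: str) -> str:
    from gpt4all import GPT4All
    model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # downloaded inside the container
    return model.generate(prompt, max_tokens=128)

@stub.local_entrypoint()
def main():
    print(generate.remote("Summarize what GPT4All is."))
```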
@andriy_mulyar
AndriyMulyar
1 year
Embeddings uncover scientific fraud - Check out how looking at embeddings of your data allows you to uncover patterns like potential scientific fraud. This interactive visual is powered by @nomic_ai 's embedding platform and @benmschmidt 's graphics engine
Tweet media one
10
34
258
@andriy_mulyar
AndriyMulyar
1 year
Tired of breaking llama.cpp changes? 🔨 GPT4All is working to support old and new versions of llama.cpp with dynamic submoduling of ggML. Your models will just work! Come help us build the most stable ecosystem for local LLMs!
6
32
242
@andriy_mulyar
AndriyMulyar
5 months
Announcing Nomic Embed 🧨 You can now train your own OpenAI quality text embedding model. - Open source, fully reproducible text embedding model that beats OpenAI and Jina on long context tasks. - 235M text pairs openly released for training 💰 - Apache 2 License
Tweet media one
17
37
242
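Since the weights are openly released, here is a hedged sketch of running Nomic Embed locally via sentence-transformers; the Hugging Face model id and the task prefixes follow the model card conventions as I recall them, so treat both as assumptions.

```python
# Hedged sketch: run the open Nomic Embed weights locally with
# sentence-transformers. The model id and the "search_document:" /
# "search_query:" prefixes are assumptions taken from the model card.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("nomic-ai/nomic-embed-text-v1", trust_remote_code=True)
docs = model.encode(
    ["search_document: GPT4All runs large language models locally."],
    normalize_embeddings=True,
)
query = model.encode(
    ["search_query: how do I run an LLM on my laptop?"],
    normalize_embeddings=True,
)
print(docs @ query.T)  # cosine similarity, since both sides are unit-normalized
```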
@andriy_mulyar
AndriyMulyar
1 year
GPT4All will support all ggml and llama.cpp versions going forward!💥 Try 100s of different CPU LLMs on @huggingface , all from the same chat client and python package! Instructions: …
Tweet media one
10
42
240
@andriy_mulyar
AndriyMulyar
1 year
@jonathanbesomi found it on a USB that fell off a truck 🚚
5
2
219
@andriy_mulyar
AndriyMulyar
1 year
PromptLayer now stands behind the GPT4All movement! 🍰 When you use the OpenAI API through @promptlayer , you now have an opt-in option to share all your request outputs with the GPT4All open source data lake.
6
34
211
@andriy_mulyar
AndriyMulyar
1 year
my Twitter feed is full of ph.d. students having an existential crisis
4
10
208
@andriy_mulyar
AndriyMulyar
1 year
Orca Mini at 40 tok/sec on Apple Metal in
9
34
193
@andriy_mulyar
AndriyMulyar
9 months
Large Language Models Now Run on All GPUs with GPT4All 🚀 GPT4All is the first software to support all modern @AMD , @intel , @Qualcomm , and @nvidia GPUs for running LLMs. You don't need to know how to code to use the tech revolutionizing the world.
6
37
188
@andriy_mulyar
AndriyMulyar
1 year
High-quality pretraining sets like RedPajama are a key ingredient in democratizing access to LLMs. Here is a brief exploration of what an LLM trained on RedPajama would see during training👀 Explore in Atlas:
Tweet media one
4
30
176
@andriy_mulyar
AndriyMulyar
9 months
@paul_rottger @MistralAI while I too like Twitter points, a good pretrained LLM will always be able to do this. If you want to complain about safety, you should be evaluating a finetuned/rlhf'd chat model and saying things. You can do this with a pre-trained LLaMa2 as well. Nothing new.
3
0
175
@andriy_mulyar
AndriyMulyar
1 year
One line code change to use any GPT4All model from your LLM apps! Just point to localhost! You can even use them through the official OpenAI Python API! The Elite GPT4All Hackers have struck again.
@nomic_ai
Nomic AI
1 year
Big New Release of GPT4All📶 You can now use local CPU-powered LLMs through a familiar API! Building with a local LLM is as easy as a 1 line code change! Simply spin up the chat app at and place it in server mode! Documentation:
Tweet media one
19
146
607
5
30
174
@andriy_mulyar
AndriyMulyar
1 year
You wouldn't let your student grade their own exam, right? I would question the scientific integrity of any senior author on a paper who let 'let's just eval using GPT-4!' slide through the early draft discussions of a paper. This is just silly.
15
8
170
@andriy_mulyar
AndriyMulyar
1 year
You can get a sense of the data diversity in this interactive viewer: code, stories, questions
Tweet media one
6
18
166
@andriy_mulyar
AndriyMulyar
1 year
We improve on GPT4All by: - increasing the number of clean training data points - removing the GPL-licensed LLaMa from the stack - Releasing easy installers for OSX/Windows/Ubuntu Details in the technical report:
11
14
161
@andriy_mulyar
AndriyMulyar
1 year
Embed4All: Generate embeddings *without* an API key.
@nomic_ai
Nomic AI
1 year
GPT4All now supports Text Embeddings ⚡ - Generate text embeddings of arbitrary length documents for free on CPU at 8,000 tok/second. - No external dependencies except C.
Tweet media one
23
145
671
8
26
151
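A minimal sketch of Embed4All from the gpt4all Python package: local CPU embeddings with no API key; the default embedding model is fetched on first use.

```python
# Minimal sketch of local embeddings with Embed4All (pip install gpt4all).
# No API key: the default embedding model is downloaded on first use and
# runs on CPU.
from gpt4all import Embed4All

embedder = Embed4All()
vector = embedder.embed("Local embeddings need no API key.")
print(len(vector), vector[:5])
```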
@andriy_mulyar
AndriyMulyar
1 year
A GPT4All runs on all devices. WebGPU enables the distribution of on-edge large language models to millions of individuals and tens of thousands of enterprises. The future is bright.
@benmschmidt
Ben Schmidt / @[email protected]
1 year
Big day for the Web: Chrome just shipped WebGPU without flags. Someone on @nomic_ai 's GPT4All discord asked me to ELI5 what this means, so I'm going to cross-post it here—it's more important than you'd think for both visualization and ML people. (thread)
15
214
943
2
23
141
@andriy_mulyar
AndriyMulyar
1 year
Local LLMs now have plugins! Privately chat with your data with GPT4All. Open source and free to use!
@nomic_ai
Nomic AI
1 year
Local LLMs now have plugins! 💥 GPT4All LocalDocs allows you to chat with your private data! - Drag and drop files into a directory that GPT4All will query for context when answering questions. - Supports 40+ filetypes - Cites sources.
43
181
817
4
23
136
@andriy_mulyar
AndriyMulyar
1 year
Large Language Model Powered Video Games Are Now Feasible With GPT4All🎮🕹️ Join the discord and build the future with us: (credit: #teddybear082 on GPT4All Discord)
3
29
124
@andriy_mulyar
AndriyMulyar
1 year
apache-2'ing LLaMa weights would probably save a few million tons of CO2 over the next 6 months. GPUs go brrrrrrr. how's that for a carbon offset? @ylecun
5
11
128
@andriy_mulyar
AndriyMulyar
1 year
GPT4All LocalDocs Plugin 🔌 - Lets businesses privately chat with their employee handbooks and cites sources! - Sideloaded Samantha model ( @erhartford ) specialized for assistant interaction! - Accelerated by new GPT4All Apple Silicon support ⚡ Try it at
Tweet media one
4
20
115
@andriy_mulyar
AndriyMulyar
7 months
who is the anthropic customer that said '100k token context isn't enough for us' and who is the pm that agreed to prioritize it lol
Tweet media one
16
5
117
@andriy_mulyar
AndriyMulyar
1 year
GPT4All-J is packaged in an easy-to-use installer. You are a few clicks away from a locally running large language model that can - answer questions about the world - write poems and stories - draft emails and copy all without the need for internet access.
9
6
111
@andriy_mulyar
AndriyMulyar
6 months
AI is nothing without open source #keepAIopen
Tweet media one
7
9
108
@andriy_mulyar
AndriyMulyar
1 year
Early Access Announcement 🚪 Early access to the newest GPT4All model is available through a discord bot (running on CPU and built by an elite open source community hacker). Try it out from any device.
Tweet media one
5
21
108
@andriy_mulyar
AndriyMulyar
1 year
a vector db is now the 'calculator app' of learning to code (:
@willdepue
will depue
1 year
tinyvector - the tiny, least-dumb, speedy vector embedding database. pretty much: you don't need complicated algos, just brute force nearest neighbors. pre-launching this project + why i'm building this:
Tweet media one
22
54
748
4
7
106
@andriy_mulyar
AndriyMulyar
1 year
The elite hackers are shipping GPT4All updates every few days. Models are improving quickly.
@BrianRoemmele
Brian Roemmele
1 year
The moral dilemma. GPT4All-(jazzy) vs ChatGPT-3.5. Sometimes the simple answer is the best answer.
Tweet media one
Tweet media two
37
11
172
2
8
104
@andriy_mulyar
AndriyMulyar
1 year
pretty sweet feedback from folks who have taken it for a spin!
Tweet media one
7
3
104
@andriy_mulyar
AndriyMulyar
1 year
The GPT4All movement grows by the day. Our community is 10k people strong and filled with elite open-source hackers paving the way to a decentralized future. We will open-source the data. We will open-source the models. #GPT4All Join the movement:
3
10
102
@andriy_mulyar
AndriyMulyar
4 months
Nomic Embed v1.5 is out 🪆🪆🪆 - Variable-sized embeddings with matryoshka learning and an 8192 context. - Outperforms OpenAI text-embedding-3-small across output sizes. - Open source, open training code, open data. How does Matryoshka Learning work?
5
13
95
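To illustrate how variable-sized (Matryoshka) embeddings are consumed downstream: the training objective packs the most informative directions into the leading dimensions, so a full vector can simply be truncated and re-normalized. The dimensions below are illustrative.

```python
# Sketch of consuming a Matryoshka-trained embedding: keep only the
# leading dimensions and re-normalize. Dimensions here are illustrative.
import numpy as np

def shrink(embedding: np.ndarray, dim: int) -> np.ndarray:
    """Truncate a Matryoshka embedding to `dim` dims and re-normalize."""
    truncated = embedding[:dim]
    return truncated / np.linalg.norm(truncated)

full = np.random.randn(768)   # stand-in for a full-size unit embedding
full /= np.linalg.norm(full)
small = shrink(full, 256)     # cheaper to store and search, modest quality loss
print(small.shape)
```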
@andriy_mulyar
AndriyMulyar
11 months
I guess we can just ignore the fact that running llama and llama2 at interactive rates (this is slow) with pure C has been possible for three months in and use this instead
@karpathy
Andrej Karpathy
11 months
If we can get 7B model to run at nice and interactive rates then we can go from "scratch-trained micromodels" to "LoRA finetuned 7B base model", all within the code of the minimal llama2.c repo (both training and inference). Can reach more capability and with less training data.
16
30
501
5
8
90
@andriy_mulyar
AndriyMulyar
6 months
@willdepue @yacineMTB that is not a valid float32 value
4
0
89
@andriy_mulyar
AndriyMulyar
3 months
wtf, Amazon Go wasn't AI-powered and literally just outsourced video monitoring of picked-up items to India. I never want to be told 'it doesn't scale' again
7
16
89
@andriy_mulyar
AndriyMulyar
1 year
Training this thing wasn't a cakewalk as @zach_nussbaum can attest. Learn about Zach's weekend tribulations at:
8
7
87
@andriy_mulyar
AndriyMulyar
1 year
Some samples (out of training set) Valid Python generation with markdown
Tweet media one
4
4
89
@andriy_mulyar
AndriyMulyar
1 year
gpt4all
@JayaGup10
Jaya Gupta
1 year
💀
Tweet media one
13
13
90
5
12
84
@andriy_mulyar
AndriyMulyar
11 months
Starcoder 3B runs on CPU ⚡ Excited to launch @huggingface 's Starcoder model in on CPU! Local code models will be everywhere
@BigCodeProject
BigCode
11 months
@nomic_ai team already added support for StarCoderBase-3B in their GPT4ALL local models. Download the model at: & follow the docs: Stay tuned for the 7B model integration!
Tweet media one
3
8
34
3
16
84
@andriy_mulyar
AndriyMulyar
1 year
You will own your AI.
@BrianRoemmele
Brian Roemmele
1 year
You will own your own AI. Final testing on a new massively smaller 100% locally running ChatGPT 3.5 turbo type of LLM AI in your hard drive on any 2015+ laptop. I will have pre-configured downloads and it is massively smaller than most models I have, just 4gb. Out soon!
336
2K
13K
4
3
84
@andriy_mulyar
AndriyMulyar
1 year
I suppose I forgot to mention that this model runs on your CPU with 4 GB of RAM at 10 words (tokens) per second.
8
4
83
@andriy_mulyar
AndriyMulyar
6 months
9 months ago @nomic_ai had a hack weekend where we trained an LLM to mimic ChatGPT. It worked better than expected and we decided to call it gpt4all the morning of the codebase release. The rest is history. Happy New Year. To a 2024 filled with open source, models and data.
Tweet media one
Tweet media two
7
8
83
@andriy_mulyar
AndriyMulyar
1 year
AI winter confirmed
Tweet media one
@cephaloform
469
1 year
if i see one more github star graph with the caption "probably nothing" im quitting ai and moving onto creating doilies
3
0
24
4
5
77
@andriy_mulyar
AndriyMulyar
1 year
Alongside installers, we release the training data and model weights, and perform extensive evaluations of comparable models:
Tweet media one
3
3
81
@andriy_mulyar
AndriyMulyar
8 months
Local LLMs have improved significantly since last March. Models like Mistral 7B are often drop-in replacements for common queries to the giants (GPT4). Give them a shot if you had a poor experience on your first try!
@nomic_ai
Nomic AI
8 months
Monthly reminder for everyone affected by today's @OpenAI outage: Local #GPT4All models like @MistralAI 7B run at 20tokens/sec+ on a Macbook air and don't go down.
Tweet media one
10
34
301
6
10
81
@andriy_mulyar
AndriyMulyar
1 year
local models never go down #gpt4all
Tweet media one
5
10
80
@andriy_mulyar
AndriyMulyar
1 year
i was today years old when i learned people are unplugging their routers to verify that gpt4all isn't accessing an external api
5
4
79
@andriy_mulyar
AndriyMulyar
1 year
the first large language model running on a Nintendo DS. open the data. open the models. gpt4all. credit: Tuxifan #0981
3
15
78
@andriy_mulyar
AndriyMulyar
1 year
OpenAI may start releasing some open source models! Seems like open-source is a bigger business risk than describing your data collection / training procedures
Tweet media one
4
16
77
@andriy_mulyar
AndriyMulyar
3 months
GGUF security alert 🚨 A heap-based buffer overflow vulnerability in GGUF file parsing can be triggered by a malicious file. gpt4all is working to address and mitigate risks for all users. Exercise caution when loading recent GGUF files from unknown origins.
4
16
75
@andriy_mulyar
AndriyMulyar
8 months
@ChombaBupe lol idk if they are speechless. this type of adversarial data perturbation is nearly a decade old
3
1
72
@andriy_mulyar
AndriyMulyar
1 year
This dataset is 16x larger than Alpaca!
3
1
71
@andriy_mulyar
AndriyMulyar
1 year
Exciting new release! PromptLayer 🍰 is an early backer of the GPT4All movement! They have native opt-in integrations to the GPT4All data lake (try it!). If you care about data provenance and privacy for your LLM-powered apps, look no further than PromptLayer!
@promptlayer
PromptLayer
1 year
🪩 New Analytics Page 🪩 Now you can track and visualize: 1. Cost 💰 2. Latency 🏎️ 3. Model Usage 🤖 4. Prompts 📃 Great for teams, we all know that one person who will live on this page! 🍰🍰🍰🍰
Tweet media one
1
3
37
3
12
70
@andriy_mulyar
AndriyMulyar
1 year
For everyone interested in where to take your NLP research, OpenAI suggests that you should just study their LLM and hope to work for them. They even made it cheap for you to study it!
@_jasonwei
Jason Wei
1 year
I’m hearing chatter of PhD students not knowing what to work on. My take: as LLMs are deployed IRL, the importance of studying how to use them will increase. Some good directions IMO (no training): 1. prompting 2. evals 3. LM interfaces 4. safety 5. understanding LMs 6. emergence
52
285
2K
1
6
70
@andriy_mulyar
AndriyMulyar
1 year
i'll be dreaming about dropping flash drives with a certain 4gb file across the borders of authoritarian regimes. goodnight.
4
2
67
@andriy_mulyar
AndriyMulyar
1 year
You can explore the final curated training set in Atlas. You'll find large regions dedicated to creative prompts like stories and poems, in addition to an increased number of multi-turn responses.
Tweet media one
3
7
68
@andriy_mulyar
AndriyMulyar
1 year
visualize your model logits during training with 10 lines of code! if you use pytorch @LightningAI i would love someone to take my latest callback for a spin. dm feedback!
Tweet media one
Tweet media two
1
12
68
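This is not the author's callback, but a hedged sketch of the general shape: a PyTorch Lightning Callback that collects logits each training batch, assuming training_step returns a dict containing a "logits" tensor (an assumption about the user's LightningModule).

```python
# Hedged sketch, not the author's actual callback: collect logits during
# training for later visualization. Assumes training_step returns a dict
# that includes a "logits" tensor.
import pytorch_lightning as pl

class LogitCollector(pl.Callback):
    def __init__(self):
        self.logits = []

    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
        # Keep a CPU copy of this batch's logits; feed them to whatever
        # embedding/map viewer you like after training.
        if isinstance(outputs, dict) and "logits" in outputs:
            self.logits.append(outputs["logits"].detach().cpu())

# usage: trainer = pl.Trainer(callbacks=[LogitCollector()])
```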
@andriy_mulyar
AndriyMulyar
1 year
The GPT4All Open Source data lake stores all ingested data in a constantly visible state, allowing anyone to download it. Improved GPT4All models are training on early versions of the data lake as we tweet. open source the data. open source the models. gpt4all.
2
10
64
@andriy_mulyar
AndriyMulyar
1 year
A GPT4All does not support or subvert specific political ideologies or choose winners. open source the data open source the models #gpt4all .
3
5
61
@andriy_mulyar
AndriyMulyar
5 months
GPT4All - v2.6.2 - has just been released! * Update to latest llama.cpp * Update to newly merged vulkan backend * Partial GPU offloading support * New localdocs speed increases and features * New GUI settings option for configuring how many layers to put on GPU * New lightmode
Tweet media one
3
10
61
@andriy_mulyar
AndriyMulyar
2 months
Run LLaMA 3 on edge devices with open source GPT4All ⚡⚡ - The best open weights 8B LLM in the world - Runs at 25 tok/s on Mac Metal
1
6
61