Lambda has raised a $320M Series C led by Thomas Tull's USIT. This new financing will be used to expand the number of NVIDIA GPUs available in Lambda Cloud and build features that will absolutely delight you.
This is the strongest evidence that I’ve seen so far that we will achieve AGI with just scaling compute. It’s genuinely starting to concern me.
I used to think that we would run into roadblocks:
end of scaling laws, maybe we don’t have the right model architecture, power density…
Fuck it, we just dropped our H100 on-demand prices to $1.99/hr. From now on, Lambda will always have the cheapest price in the entire world. If we don't, just tweet at me and we will drop the price.
H100s here:
Yale holds a perpetual bearer bond issued by a Dutch water authority in 1648 that still pays 2.5% interest. Every few decades, Yale travels to the Netherlands to collect the back interest.
It’s the oldest security still generating payments.
NVIDIA popped because everyone underestimated the amount of GPU compute the world needs. I’m honored that Lambda was mentioned in NVIDIA’s earnings call today.
Soon, every human will be using AGI powered by GPUs. And Lambda is going to be the #1 GPU cloud in the world.
This is how Copilot is “programmed”. This is how your future AGI co-workers, friends, and lovers will be “programmed”.
The future is here, it’s just not evenly distributed.
I no longer use Google when programming and default to thinking: "I should ask ChatGPT about X."
LLMs are a far superior method vs. the standard (read documentation / blog post / stack overflow) loop. It's like having a thoughtful pair programmer with you at all times. A thread:
Lambda has raised a $500M GPU-backed financing vehicle. Designed to fund GPUs for hundreds of thousands of developers with No Contract Required.
This is half a billion dollars for the 'gpu poor'.
When new technologies appear in the market, they require new systems of finance…
Lambda GPU Cloud has NVIDIA A100s for $1.10 / hour. On demand. No commitment. That's the lowest on-demand price in the entire world. It's honestly a crazy low price.
Spin up a $1.10/hr A100 here ->
Lambda is excited to offer the best prices on the market for GPU cloud compute. On-demand A100 GPUs starting at $1.10/hr vs AWS $4.10/hr (73% savings). No commitments or negotiation required.
Learn more→
I've found that it's very useful to speech-to-text a stream of consciousness style overview of what I want to talk about, then ask chatGPT to summarize it. Then, I ask it to re-expand to three or four paragraphs. Like an autoencoder.
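The loop above can be sketched in a few lines. This is a minimal illustration, not a real client: `llm` is a placeholder for any chat-completion call (OpenAI, a local model, etc.), and the prompt wording is an assumption.

```python
# Sketch of the "autoencoder" prompting loop: compress a rambling
# transcript to a summary, then re-expand it into clean prose.
# `llm` is any callable that takes a prompt string and returns text.

def compress(llm, transcript: str) -> str:
    """Summarize a stream-of-consciousness transcript into key points."""
    return llm("Summarize the following into a few bullet points:\n\n"
               + transcript)

def expand(llm, summary: str) -> str:
    """Re-expand the summary into three or four paragraphs."""
    return llm("Expand the following bullet points into three or four "
               "paragraphs:\n\n" + summary)

def autoencode(llm, transcript: str) -> str:
    # Encode to a "latent" (the summary), then decode back out.
    # Unlike a real autoencoder, the latent is human-editable text.
    return expand(llm, compress(llm, transcript))
```

The interesting design property is the middle step: you can hand-edit the summary before re-expanding it.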
Announcing that Lambda has raised $44M to build the best cloud for training AI. (Lots of GPUs.)
I'll be doing a Q&A on Twitter for the next hour. Just reply in this thread.
Working at AWS, GCP, Apple, Meta, Twitter, Microsoft, Oracle, or IBM?
Lambda has H100s with 3,200 Gbps InfiniBand at the world's lowest price:
$1.89 / hour (100% upfront)
$2.04 / hour (80% upfront)
$2.15 / hour (monthly payment)
Share this with your manager.
Today, Lambda Cloud launched H100 instances. We’re the first and only cloud in the world to offer H100s on demand.
Grab yours here:
Extremely proud of the whole team for launching these before everyone else.
First person in the thread to get it was
@sarhaan_g
both companies are casino REITs:
$VICI and $GLPI
The other interesting contenders suggested by the thread were RenTec (thought to be $100M/employee), Tether (thought to be $60M/employee or more), and Vitol ($121M/employee).
So excited to see everybody at NeurIPS this year. One of the big launches from Lambda is the new Vector One PC. Buy a Lambda Deep Learning computer with a 4090 GPU w/ EDU discount for just $4,899!
Buy it here:
My goodness, I thought this level of fidelity was going to be a few years out. How many GPU-hours were needed to render it? How is this here today!??
Dozens of "AI Pixars" are going to emerge in the next few years and the flood of amazing movies and games will blow our minds.
Explorations in AI accelerated game development.
Prompt:
# A pygame game where you can take care of a pet cat.
# The pet cat has two actions: pet the cat and feed the cat.
# The cat responds accordingly.
Generated game:
Hiring this guy at the right stage gives your company a 10x better chance of going public or having a successful exit. But if you hire him at the wrong time, your company will spiral into a tangled web of bureaucracy.
One thing today's LLMs are not good at is dealing with large codebases.
There’s going to be a whole industry around fine tuning these models to work within large codebases.
They’ll learn the internal APIs and architecture. Then they can work on more significant features/refactors.
Sometimes writing prompts feels like writing positive affirmations for the LLM. You are a world-class writer. You are a smart engineer. You are helpful. You are happy.
Plato and Aristotle walking through The School of Athens' data center.
Lambda has just added a ton of new 8x A100 capacity to our cloud. You can sign up and launch one here => .
@_florianmai
I avoid engaging in debate around small probabilities of events with extremely negative utility for the same reason I wouldn't pay much to play the St. Petersburg paradox game and for the same reason that I might be skeptical of Pascal's wager.
Today, Lambda launched Demos. An easy and inexpensive way to host Gradio-based generative AI apps.
Soon, our discovery page will be packed with Stable Diffusion UIs, Balenciaga video generators, rap generators, GPT LLM chatbots, and more gen AI apps.
@tszzl
@alth0u
@Cixelyn
@EMostaque
Every VC: why are you building a computer company with a manufacturing facility in San Francisco?
Lambda: so we can provide infra on-demand in hours when roon pings us on Twitter.
I'm looking to hire some absolute killers to work with us on Lambda Cloud. If you want to work on hard engineering problems at the intersection of deep learning and data center scale computation, my DMs are open.
Midjourney v6 is a real delight to play with. Text works by just “quoting” the text you want to see. What a year it’s been in AI.
Merry Christmas and Happy Hanukkah everyone!
Lambda and Coreweave do the same thing. But Lambda charges $1.89/hr list price while Coreweave charges $2.23/hr list price. If you need 1024 H100s with InfiniBand for 3 years, you'd save $9.1M by choosing Lambda.
The math and choice is simple: (2.23-1.89)*24*365*1024*3 = $9.1M
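As a sanity check, the arithmetic above reproduces the headline number:

```python
# Savings from the list-price difference, over three years of
# continuous use of a 1024-GPU cluster.
rate_delta = 2.23 - 1.89        # $/GPU-hour list-price difference
gpus = 1024
hours = 24 * 365 * 3            # three years, running around the clock
savings = rate_delta * gpus * hours
print(f"${savings / 1e6:.1f}M")  # -> $9.1M
```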
A good counterpoint from
@GaryMarcus
was that we will see lots of holes / systemic glitches in Sora in the coming months and that it does not constitute a world model. My thoughts on how generative networks like Sora could be used as a world model:
@GaryMarcus
OK, if the vast majority of the artifacts (like walking chairs being pulled from the sand) go away, and more complex inter-object dependencies are reliably visually simulated, do you think that constitutes a world model?
I would argue that it does, because you could use it for…
Lambda was founded 10 years ago today. I was 23 back then and had no idea what I was getting into. I’ve dedicated over a decade of my life to building this company but still feel like we’re just getting started.
This is the email I sent to our investors in Jan 2016. (Page 1/4)
Wait until this is in full 4K HD, has significantly better spatiotemporal consistency, and also generates audio alongside.
Oh yeah, and you’ll be able to talk to the kangaroos too.
The future is gonna be crazy.
This aged well. In 2011, Stripe was a tiny startup going up against PayPal, an incumbent with $5 billion in revenue. The better user experience won. Stripe now has more revenue than PayPal did when this tweet was written.
VCs who follow me please take note, if you want to get into Suhail’s next round, you can invest $150M at $1.5B post into playground, then they can get a 3,020 H100 cluster with InfiniBand for $1.89/hr from Lambda. You get allocation, they get GPU compute, win-win.
@Suhail
Lambda is working night and day to make the GPU capacity problem go away.
1,000s of A100s, even more, are soon to arrive at our data center door.
More GPUs, the world demands, we ratchet up purchase plans.
Christmas cheer will spread around with A10 instances soon to be found
Everyone: "But you can only do state of the art AI work with millions of dollars of compute and at a big company!!!"
@Buntworthy
: "text-to-pokemon cost $10 to train on Lambda Cloud."
New blog post just dropped on how text-to-pokemon was made:
New post on how
@Buntworthy
made the text-to-pokemon model by fine tuning stable diffusion.
The final model only cost around $10 to train on Lambda GPU Cloud. (Yes, $10.)
When I first saw this image on reddit, I knew AI was going to change the world.
Thanks
@ch402
@mtyka
and
@zzznah
So cool to see where we are today and think about where we will be soon.
So lucky to have
@twominutepapers
stop by Lambda HQ on his Silicon Valley tour. Karoly gave me a named “Hold on to your Papers” card! I’m told that it is a rare artifact!
California and Nevada should merge into a single state called Calvada with Reno as the capital. It just makes sense.
You know what else just makes sense? $1.89/hr H100s with 3,200 Gbps InfiniBand from Lambda Cloud. Lowest price in the world. Get yours:
FINALLY! Lambda just launched filesystem storage in the majority of our regions!
You can now store datasets, models, and code on shared filesystems that continue to exist after terminating your GPU instance.
Sat down with Pieter Abbeel
@pabbeel
to talk about the current state of the GPU cloud market, Lambda's founding history, and the deep learning powered future of gaming.
@therobotbrains
podcast is great.
@andrew_n_carr
You know when it really hit? When I was talking with
@chuanli11
at NeurIPS and he said “you know, most of these papers were submitted for review before stable diffusion came out.”
NeurIPS used to be the place for the greatest reveals and now it’s “months behind”. Which is a lot!
1. ChatGPT created a python client based on Lambda Cloud's API spec, and it works!
2. Lambda launched v1 of our API today. (Docs: )
3. We added lots of new A100 capacity, lots more GPUs! (Use gpu_1x_a100_sxm4 instances in us-east-1.)
ChatGPT's client👇
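The generated client itself is in the screenshot. For readers, here is a minimal hand-written sketch of what such a client looks like; the endpoint paths and field names are assumptions for illustration, not the actual Lambda Cloud API spec.

```python
import json
import urllib.request


class LambdaCloudClient:
    """Minimal REST client sketch. Endpoints/fields are assumptions."""

    def __init__(self, api_key, base_url="https://cloud.lambdalabs.com/api/v1"):
        self.api_key = api_key
        self.base_url = base_url

    def _request(self, method, path, body=None):
        # Build and send an authenticated JSON request.
        req = urllib.request.Request(
            f"{self.base_url}{path}",
            data=json.dumps(body).encode() if body else None,
            headers={
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json",
            },
            method=method,
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def list_instance_types(self):
        return self._request("GET", "/instance-types")

    def launch(self, instance_type, region, ssh_key_names):
        return self._request("POST", "/instance-operations/launch", {
            "instance_type_name": instance_type,
            "region_name": region,
            "ssh_key_names": ssh_key_names,
        })
```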
The fastest way to get 64 or more H100 GPUs in this environment is to *pay upfront* and sign a reserved H100 contract with Lambda. GPUs are allocated to the companies that pay first, to get in line just fill out this form:
@tszzl
local minimum
orthogonal
mutually exclusive
first/second derivative (referring to business metrics)
power law
at the limit
doubting you’ll “get much of a delta” out of an activity
alpha (crypto/fin/poaster crossover)
(especially when used incorrectly) law of large numbers
I can't believe that I can generate an entire song for maybe a dime or so? Each of these would be tens of thousands of dollars to produce just a few months ago.
A friend asked me to make some predictions for AI in 2023.
Prediction: nothing changes.
My wildest deep learning related predictions from 2013 are only now starting to come true.
Little happens in one year, a lot happens in 10 years. So let's start building for 2033 today!
We’ve added initial support for ChatGPT plugins — a protocol for developers to build tools for ChatGPT, with safety as a core design principle. Deploying iteratively (starting with a small number of users & developers) to learn from contact with reality:
ChatGPT gave me an example that perfectly demonstrated the superiority of its solution. Imagine how much googling and documentation reading I would have to do otherwise!
Plus, it avoids the horribly formatted blog spam that permeates the internet of code.
If you want cloud H100s, we have them. Need H100s + InfiniBand? You can sign a 3 year upfront contract with Lambda. You’ll get the lowest price in the world and you’ll get allocation as fast or faster than other cloud providers.
DM me or sign up here:
@aicrumb
Hey crumb, we try to delight customers with everything we do. If you don’t like our alien mummy swag item, we can give you a hat instead and send the alien to others.
Just email our support team and we can get it squared away for you.
Lambda has lots of new H100s deployed and coming online soon!
Sign up and you’ll get an email the moment it’s launched or DM me to reserve large capacity.
OK, I know you're tired of being asked to sign 3 year contracts with upfront payment from Lambda. So Lambda now has burst compute and flexible contract terms!
We're offering 3 month bursts of GH200s with 0% down and 1 year H100 + InfiniBand contracts: …
This compression to latent space (summarization) / decompression back to input space (expansion), like a real autoencoder, removes verbal 'noise': uh/ums, non sequiturs, even disorganized thought.
The cool thing is, this time the 'latent space' is human-editable text!
I just want to say that I have been arena posting for 13 years now — see blog post from this exact day in 2010.
You have to skate to where the puck is going people.
Created a Star Wars concept film using Pika Labs and Midjourney in just a few hours. It's wild how easy it is to animate the scene using these tools.
-Curious_Refuge
One of the coolest things about the ChatGPT iOS application is that the dictation can be imperfect but the LLM still understands what you meant.
Transcription errors are compensated for by the LLM.
This is not the case with Siri dictation + Google.
@boborado
Nope. Both firms have way higher revenue and also EV / employee
Instagram: $76M EV/employee
WhatsApp: $345M EV/employee
Mystery Co A: $487M EV/employee
Mystery Co B: $1.03B EV/employee
The “titans” of tech efficiency get rekt.
A
@tszzl
tweet got me thinking: roon says that DL offends theoreticians but empirically works to produce "absurd miracles".
Engineers have worked to build the "absurd miracle" of commercial aerospace since the early 1900s. Physicists have yet to figure out how lift works.
Both dots are orange and dimming, but only the left-hand dot has temporal dithering to preserve color as the brightness drops and provide extra levels of perceived brightness at the same time.
The right-hand dot eventually collapses to red when it runs out of color depth. 🎨
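Temporal dithering can be sketched as error diffusion in time: each frame shows the nearest representable level, and the quantization error is carried into the next frame so the running average tracks the true brightness. A toy one-pixel version (assuming uniformly spaced levels and brightness in [0, 1]):

```python
def temporal_dither(target, frames, levels=4):
    """Quantize a 0..1 brightness per frame, diffusing the error
    into the next frame so the time-average tracks `target`."""
    out, err = [], 0.0
    step = 1.0 / (levels - 1)
    for _ in range(frames):
        want = target + err          # target plus carried-over error
        q = round(want / step) * step  # snap to the nearest level
        q = min(max(q, 0.0), 1.0)
        err = want - q               # carry the residual forward
        out.append(q)
    return out
```

Without the `err` carry, a target between two levels would collapse to whichever level is nearest, which is exactly the right-hand dot's failure mode.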
The year is 2038, Lambda is the largest GPU cloud in the world. AI produces 42% of global GDP. Your CEO sends you a calendar invite for your quarterly performance review. This is what you see when you get there.
What do you say?
If you are ever having a bad day just remember that there are arcs of plasma ten times larger than the earth ejecting from our sun.
Any problems you have are small in comparison.
2 hour timelapse of sunspots on the Sun.
The most impressive part isn't the arcs of plasma larger than Earth. It's that the footage was captured by a 150mm telescope with a Hydrogen Alpha filter by amateur astrophotographer David Dayag.
@oFFMetaSweat
Yea, there are a bunch of super interesting specialty REITs like American Tower (cell towers), Outdoor Media (billboards), Weyerhaeuser (forests), Gaming and Leisure Properties (casinos), Digital Realty and Equinix (data centers).
@boborado
Somebody figured it out in the thread.
My lesson learned from today is that people on tech twitter need to look outside of the silicon valley echo chamber more.
So many great companies in the US that you’ve never heard of. It’s a fun exercise to read through the S&P 500…
Nine sets of "two kangaroos busy cooking dinner in a kitchen" 🙂
Generated by Make-A-Video.
(Montage courtesy Yaniv; this kangaroo example had become our go-to example in the final few days before the deadline :))
#MetaAIMakes