Panaceas ARE available: Charter cities, open borders, building deregulation/YIMBY, free markets. But AI x/s-risk is a real concern. Software engineer. UNIX man.
@GarettJones
@tylercowen
I don’t understand sitting at the tip of the spike on the right and saying “Oh, relax, let thousands of years of history be your guide.” How about recent history? Or thinking from first principles?
The mood I hate the most is this idea that AI safetyists are merely techno-pessimists. No. You’re pattern-matching wrong. You’re thinking about this based on vibes. Look closer. The people who spearheaded this are lifelong techno-optimists, and often libertarian.
Wise up, fast!
@The_Fit_Gourmet
@Ukraine
It’s just satellite internet. It means he’s agreed to ship them some base stations so they can access it. How they’re going to ship them into a country at war is unclear. I don’t know if FedEx is working there rn. Probably mostly a publicity stunt like Musk’s other stunts.
@Noahpinion
We just need ONE BILLION AMERICANS. Let the Indians, the Vietnamese, the Iranians, the Cubans, et al, come here. Their kids will assimilate and Pax Americana will reign for ten thousand years.
OK we absolutely, ASAP, need a hit movie that uses a time-loop gimmick to depict humanity wiping itself out with AI over and over in dozens of diverse, increasingly sophisticated ways.
Until our hero (Tom Cruise?) finally spurs the world into coordinated effort on AI alignment.
If I had billions of dollars to deploy like
@HoldenKarnofsky
or
@moskov
, a no brainer for me would be commissioning an HBO / Max educational/interview series on AI alignment hosted by
@robertskmiles
. And I’d also pour money on marketing for it.
Scott Alexander is the man.
But then I read a post from him where he explained how as a teen he was the world champion at some game centered on strategic deception and manipulation of real humans, and I avoided his blog after that lol.
I *believe* he’s trustworthy, but jeez.
I think a big problem w getting the public to care about AI risk is that it’s just a *huge* emotional ask — for someone to really consider that there’s a solid chance that the whole world’s about to end. People will instinctively resist it tooth-and-nail.
Why. Why are people with large platforms tweeting shit like this with confidence? You have no basis to conclude $80B of value “torched”. Let alone to dismiss safety concerns.
If this is really some EA, decel, AI safety coup at OpenAI, the board just torched $80B of value, destroyed a shining star of American capitalism, and will be sued to high heaven by investors.
Every talented employee at OpenAI should quit and join Sam/Greg's new thing (if they
@RyanRadia
@mattyglesias
I agree w the spirit of this, but for the record, the no-one-lived-past-30 idea is a myth. ~30 was the average life expectancy taking into account high infant and toddler mortality.
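A quick back-of-envelope sketch of that point. The cohort numbers here are purely illustrative assumptions (not historical data), chosen to show how heavy infant/child mortality can drag the at-birth average down to ~30 even when adults routinely reach their 60s:

```python
# Illustrative (assumed) cohort of 100 people: age at death -> count.
deaths_at_age = {
    1: 45,   # assumed: 45% die in infancy
    10: 10,  # assumed: 10% die in childhood
    60: 30,  # assumed: 30% die around 60
    70: 15,  # assumed: 15% die around 70
}

total = sum(deaths_at_age.values())
life_expectancy_at_birth = sum(age * n for age, n in deaths_at_age.items()) / total

# Average age at death for those who survived childhood (age > 10).
adults = {age: n for age, n in deaths_at_age.items() if age > 10}
life_expectancy_adults = sum(age * n for age, n in adults.items()) / sum(adults.values())

print(life_expectancy_at_birth)  # 29.95 -- the "~30" headline average
print(life_expectancy_adults)    # ~63.3 -- what adults actually reached
```

Same data, two very different numbers: the ~30 figure is an average at birth, not a cap on adult lifespans.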
Y’know if we just opened our borders with Taiwan as a gesture of friendliness, we’d probably be able to rival its chip-making prowess in a generation or so.
@microscopicjpg
this is fake. if one drop of water had that many critters in it, and that big, then the beach would be packed w critters, and you’d feel them crawling all over you and smacking into you. But i’ve been to the beach and the water feels smooth. No critters.
Wait, don’t doomsday cultists generally welcome their ordained apocalypses? Not fight tooth-and-nail, pragmatically, politically, scientifically to avert them?
Folks, all SIX board members and literally everyone else involved in the weekend’s drama espouse belief in AI x-risk, are “doomers” per the AI risk denialists.
Altman has agency and was a primary participant in all of this. He helped select the board!
Put your pitchforks away.
I don’t *want* you to carefully structure your arguments to maximize their appeal! I don’t *want* perfect little analogies inserted at just the right moments!
I want you to dump out your thoughts, so I can see exactly where your head’s at. I want that authenticity, transparency.
@GwendolynKansen
That’s not a hard truth at all. Maybe when you’re young it seems like a tragedy to not marry the most attractive potential partner, but as you get older people converge in their level of attractiveness and it becomes clear you’re not missing much.
In 2007 just 27% of US Muslims said that homosexuality should be accepted by society. 10 years later, this had increased to 52%. US Muslims are now more tolerant than the average American was in 2006.
#FactsfromOpen
(34 of 100)
I’d like to get back to reading his blog but also… I dunno, it’s just maybe slightly too glib for me, or something. Like he’s *also* a super logical and rational dude, but that’s outweighed by a facility with writing and persuasiveness that is just off-the-charts. Unsettles me.
You watch a movie like Jurassic Park and you think “haha, we’d never be that stupid” - but, uh, then you see what OpenAI and Facebook AI are doing, and, uh….
The most credible people I can see are hovering around 1/3 chance of a disastrous long-term outcome (x-risk or worse), 2/3 chance of a great long-term outcome.
However, much of the 67% is generated by the hope that reasonable people will pull together and take doom seriously.
Most EAs who think about AI (myself included) do NOT think the creation of AGI will lead to nearly certain doom.
We are hopeful for an awesome post-singularity future, and we want to manage the risks of AI precisely because we want to ensure it will come to be.
We get
@ESYudkowsky
to pen the script with Chris & Jonathan Nolan. Chris directs.
Dustin Moskovitz and Bill Gates and a few other people put together an unprecedented $1 billion budget.
We don’t have flying cars b/c flying cars are relatively dumb. Our best people have been working on more interesting things: computers, 3D gaming, the internet, social networks, VR, cryptography, quantum computing, AI, et alia.
@LPNational
@RealAlexJones
Is this what the LP National is now?? Replying to Alex Jones about some utter nonsense conspiracy theory? This is so utterly embarrassing, whoever is responsible should be fired immediately.
@kret_spec
@xlr8harder
I’ve been an “effective accelerationist” in the sense of being interested in the highest leverage ways to increase economic growth for ~a decade. Highest leverage ways? Open borders, charter cities, and in general libertarian/ancap promotion. But:
I’ll never get over the fact that eternal Pax Americana is one border-policy change away. Open our borders to every innocent person on the planet, and that’s it. We win forever. And you unpatriotic cowards can’t wrap your little heads around that and do it.
Elon Musk is inspiring... inspiring me to get into the business of self-promotion and government subsidies! Why should anyone work hard or smart when you can just clothe yourself in greenwashing bullshit, cozy up to the government, and get rich with taxpayer money?
I don’t know *exactly* how we’re going to solve AI alignment, or civilizational decay, or any of our other massive problems, but one thing is for certain: we need 𝘮𝘰𝘳𝘦 𝘮𝘢𝘤𝘩𝘪𝘴𝘮𝘰 𝘢𝘯𝘥 𝘱𝘰𝘴𝘵𝘶𝘳𝘪𝘯𝘨.
Truly, please, share this interview far and wide. It is exactly what the world needs to hear right now. The whole world. The framing and the messaging are superb. And Tegmark has just the right air of elder-statesman credibility and authority—plus the credentials to back it up.
@JimDuxhette
@AlexNowrasteh
wat. It’s not competition, it’s cooperation. We work together to produce goods and services. The more we produce, the more we can consume.
Centralize the effort under the most responsible leadership, with the focus squarely on AI safety; and, backed by the full force of the US military, ban all research outside of that. Or we’re as good as dead.
(I say this as an ardent libertarian minarchist! It’s DEFCON 1!)
@ozziegooen
[Obligatory complaint about using the word "longtermism" to refer to the very near-term risk posed by the sudden proliferation of private WMD programs in San Francisco]
Currently, I live on a hill called “Mount Olympus” located at the exact geographic center of San Francisco. I didn’t quite plan it that way (my ego’s not THAT big!) but I tell ya you can’t put a price on being able to say, “So this morning I descended from Mount Olympus and…”
Maybe we learn it’s a simulation that’s being run to find a solution to the alignment problem. (Maybe even a sim inside of a sim inside of a sim, ?-levels deep.)
As an epilogue, we see the solution bubble up from the simulations to the “real world”, and everyone enters a utopian age.
@primalpoly
@jeremykauffman
… No, man. Pennsylvanians are allowed to move to New York. In your mind, does that mean New York residents are “forced to accept them”?? That’s so absurd. It means NY residents HAVE THE RIGHT to rent or sell housing to them, and to hire them.
We're announcing, together with
@ericschmidt
: Superalignment Fast Grants.
$10M in grants for technical research on aligning superhuman AI systems, including weak-to-strong generalization, interpretability, scalable oversight, and more.
Apply by Feb 18!
@GwendolynKansen
The reality is people are all mostly in the middle of a bunch of bell curves of human traits, you’ll find someone near you in some and complementary in others, and having kids is always rolling dice no matter who you are.
@AlecStapp
It’s a *very* common belief that even for adults the life expectancy was like ~35. That’s what she’s pushing back on and she’s right to do so.
@GarettJones
“historical reasoning” — We had the atom bomb and the Manhattan Project. Before that we had merely TNT. Is it really so hard to believe there is a new axis of technological progress that predictably leads to stuff w the power to blow up not just a city but the whole galaxy?
In 1939, physicists discussed voluntarily adopting secrecy in atomic physics. Reading Rhodes' book, one can hear three "camps":
- Caution
- Scientific Humility
- Openness Idealism
I feel like these are the same camps I hear in discussion of AI (eg. open sourcing models).
(1) Make the big-budget AI x-risk / alignment problem movie.
(2) Dozens of Ramanujan 13-year-olds across Asia, Africa, South America, etc, immediately grasp the whole problem and, inspired by Tom Cruise, devote their entire still-developing cerebra to the cause.
(3) World saved!
@robinhanson
Intelligence is so powerful it *turned rocks into an atom bomb*. Raw intelligence is infinitely more powerful and dangerous than mere uranium. We can barely coexist with nuclear bombs without killing ourselves instantly. We need much more time to advance as a cooperative species.
@AlecStapp
@JerusalemDemsas
I agree altho the point weakens a lot when you just look at the *chronic* homeless, which is what ppl tend to care about. Especially the unsheltered, drug-addicted, panhandling/shoplifting chronically homeless. Still correlated w housing supply, but much less so.
.@TuckerCarlson
“Isn’t [keeping immigrants out] a core function of government?”
No. No it is not. Immigration restriction is an invention of the modern world, around 1900. We’re a nation of immigrants and we need MORE AMERICANS. You small cowards.
@erikengheim
@OPLIX
@Snowden
there are currently two separate investigations ongoing. Some of the cameras were working - they have footage that they’re reviewing. It would be nice to hear depositions from the sleeping guards etc. Presumably will come out eventually.