For those curious, a bit more about the book.
We're thinking of subtitling it "How To Fight Back When the World Treats You Like a Number."
It's for readers of all backgrounds, no math ability or interest needed.
It's about how math shapes so many of our experiences and
1/4
Said goodbye to my 15yo St Bernard today 😢 My wife rescued him 12yrs ago, just a few months before she met me. He saw us get married, start our careers, and have two kids. He lived in 5 different states, rode across the US several times, and even pulled one grandpa from a river.
If you’re not using ChatGPT for every aspect of your life you’re falling behind.
I spent 20mins using ChatGPT to do something that would take 1min with Google—here’s how I did it and how you can too:
I just told my wife about all the OpenAI Sam Altman stuff happening, including the various theories and potential consequences, thinking it’s interesting how it’s playing out.
Her response was simply: “nerd drama.” 😂
(a) this is fascinating
(b) I hate to think how messed up science is going to get as people use LLMs for things they really shouldn’t, which evidently includes any kind of random sampling.
Thank you SO much to everyone who has commented with such kind words. I felt funny posting this personal sadness to strangers, but I’m finding amazing solace and pride in seeing George touch so many who didn’t know him but who can see his spirit and love. Thank you all ❤️ ❤️ ❤️
I made a 30min video tribute to him, setting photos and videos of his life journey with us to music (my wife and I met over dogs and classical music!). Probably boring to most, but it's a window into the life of this amazing dog, George:
Ugh
@Google will "recalibrate" the level of risk it is willing to take when releasing AI tools, due to competitive pressure from @OpenAI. This is the kind of market-driven arms race we do NOT need when it comes to tech that could drastically reshape society, for better and worse.
People complain about the "woke mind virus" but honestly I find the Bayesian mind virus far more worrisome--all these tech cults & CEOs tossing around words like priors, updating beliefs, expected value, p(doom), in ways that don't make sense just to virtue signal or whatever 🤮
To recap: OpenAI dumps the women from its board & brings on the guy fired from Harvard for sexist remarks, appointed under an interim CEO who said "40-60% of women seem to have rape/non-consent fantasies," all while bringing back the CEO whose sister accused him of sexual assault.
Everyone is amazed at OpenAI and ChatGPT, but don’t forget: the transformer was invented by Google, the first LLM was Google’s BERT, and Google made an apparently impressive chatbot (LaMDA) before ChatGPT but didn’t release it publicly since they didn’t feel it was safe to do so.
Since this tribute is going a little bit viral (thank you all for sharing George’s warmth and love with the world!), let me take the opportunity to link to the wonderful St Bernard rescue org we got George from—small donations go a long way there ❤️
My initial concern with ChatGPT is that social media would quickly be overrun by sophisticated AI-bot accounts.
I was wrong.
Social media has instead been overrun by unsophisticated AI influencer bros.
I love how Zuckerberg was all in on the metaverse being the future and then after a year was like never mind the future is cloning a 17yo text-based platform🤦♂️
When someone who works on "AI safety" at OpenAI speaks with a chatbot then writes "never tried therapy before but this is probably it?"... 🤦♂️
I don't know what doctors do or how they're trained/licensed but my chatbot sounds smart so it's pretty much a doctor. Feel safe now?
Just had a quite emotional, personal conversation w/ ChatGPT in voice mode, talking about stress, work-life balance. Interestingly I felt heard & warm. Never tried therapy before but this is probably it? Try it especially if you usually just use it as a productivity tool.
The best thing about switching your web search to ChatGPT is then the people who put all this info on the web don't get any credit and you have no idea how reliable the info is since all sources are jumbled into a giant opaque probability distribution.
Oh wait, that's terrible.
I just read OpenAI’s blog post about aligning superintelligence and… I have some concerns with the assertions and attitude on display there.
Reading it did not make me feel better about OpenAI’s approach to society & safety. Detailed 🧵 below
I found the recent @nytimes opinion piece on AI by @harari_yuval @tristanharris @aza very interesting, and I agree with some of the overall thrust and points but object to MANY of the important details. So, time for a 🧵 detailing my critiques:
The tech hype cycle in a nutshell:
2021: Wow NFTs, we can finally assign ownership and value to computer images so they're no longer free!
2022: Wow AI, we can produce an unlimited supply of computer images dropping their value to zero so they're finally free!
I'm often critical of Effective Altruism (EA) and I'm sure I'll get more pushback for this, but I've been thinking a lot lately about the discourse on AI doomerism, extinction risk, etc., and here's my big take on what's going on and why.
Buckle up, friends, it gets spicy.🧵
everything 'creative' is a remix of things that happened in the past, plus epsilon and times the quality of the feedback loop and the number of iterations.
people think they should maximize epsilon but the trick is to maximize the other two.
WOW WOW WOW this @NaomiAKlein piece is by far the best take on AI and the AI discourse that I’ve seen—erudite, eye-opening, clarifying, cannot say enough good things about it or recommend it enough!!
@Kobenhavn_kbh @justinhendrix Just a bit—but it was (most directly) inspired by a guy saying he used ChatGPT to lose weight by developing a running plan, one that looked exactly like all the ones easily available on the web.
Silicon valley is no longer content to "move fast and break things," now the goal is to "destroy" things society depends on.
AI developers love to talk about how their creations could destroy humanity, not realizing it's their rotten ideologies that are most destructive.
I don’t really care what the current law on this is, but we should be working to destroy copyright as thoroughly as possible so I am on OpenAI’s side in this case.
The Mechanical Turk (chess playing robot from 1770 that wowed audiences but turned out to be a human hidden in a box) strikes again: Cruise’s self-driving cars in SF apparently had humans remotely intervening every few miles, 3 such workers for every two cars. 🤦♂️
A lot of people are rushing to criticize Musk and his rebranding of Twitter, but I implore you: before you jump to any conclusions, please read this insightful piece that shines a new and important light on the matter
Nah, I’d put the microscope as the most amazing tool yet created. Or the printing press. Or trains, cars, planes. Radio. Computers. Phone. Even soap: imagine life without soap! We live in a world filled with amazing tools—not just the one you’re selling, @sama.
ai is the most amazing tool yet created, and this is a special moment.
it is remarkable to see what people around the world are doing with it; the creative force being unleashed onto the world will lead to wonderful things getting built for all of us.
This is really bad for society. (And WTF, he's famous for playing the ukulele, not the guitar.) Google's got to up its game to keep AI shit out of the top results like this.
It isn't just AI generated text that is starting to bleed over into search results.
The main image in a Google search for Hawaiian singer Israel Kamakawiwoʻole (whose version of Somewhere Over the Rainbow you have probably heard) is a Midjourney creation taken right from Reddit.
Breaking news: guys on internet question the qualifications of brilliant women in leadership roles at OpenAI after male CEO who dropped out of college is pushed out.
Here's the ultimate irony about the EA movement: they just *caused* one of the large-scale harms that they supposedly are trying to circumvent. And this was not a coincidence; if you follow the logic of the movement, you'll see that this was inevitable. Let me explain 🧵 1/7
TikTok is spyware.
But so is Facebook, Instagram, Google, Amazon, … and probably most of the chatbots you’ve been using.
Welcome to surveillance capitalism.
Here's a recap of some recent ChatGPT events that, even if you don't care about the moral dimensions, should give investors concern about pumping funds into a new technology that may soon be facing serious legal actions:
How in the world Effective Altruists went from supporting data-driven charitable impact like mosquito nets to organizing protests against open source is beyond me. This is no longer altruism—this is ideological posturing that hurts, not helps, society.
I wonder if the reason OpenAI didn't reveal the architecture of GPT4 or even the number of parameters is because it's not a neural network, it's just a big old-fashioned random forest but they're too embarrassed to say so... 🤔
BREAKING: The greatest technical mind of our time @elonmusk has just spoken eloquently and eruditely on the need for, and possible approaches to, regulation of AI. As quoted in Reuters: "We need some kind of, like, regulatory authority or something"
Did I miss anything?
But hey, at least they let a woman run the company for a whole 24hrs, so some real progress towards equality in the tech industry here.
Now these men can get back to work building tech that "benefits all of humanity". Hats off to you, gentlemen🫡
There's an AI conference at MIT today, it looks great, but the conf website has bios for the speakers and says "ChatGPT provided CVs for the panelists"—and wow are they cringe-worthy epitomes of vapid cliché and hyperbole 🤮
For your amusement, here are some opening lines:
Honest confession: I've never tried ChatGPT, and I have no plans to do so.
This isn't a political or moral stance.
I just really love words and love choosing them myself.
Maybe we should all put a little more weight on the opinions of mental health experts and a little less weight on the opinions of AI experts when it comes to matters like this.
In the future, once the robustness of our models will exceed some threshold, we will have *wildly effective* and dirt cheap AI therapy. Will lead to a radical improvement in people’s experience of life. One of the applications I’m most eagerly awaiting.
Not to be overly cynical, but is the plan with AI safety/alignment that we're just going to hope tech companies choose safety over profit when faced with that choice?
Such a deep, nuanced, historically-grounded convo about language and AI (and hype, marketing, ethics, longtermism, corporate power/priorities, and more!)—I learned a ton, PLEASE listen—we’ll all be better off if we do! @emilymbender @parismarx
See the irony? The EA movement warns of AI misalignment, but EA is fundamentally an instance of a human alignment problem: they want to maximize charity, which leads them to maximize money, and cutting corners and lying is, unfortunately, the fastest way to do that--as FTX showed. 7/7
After nearly two years, with three trips to the ER along the way (at one of which we almost lost him) and months of visits to Children’s hospital during his first year, my son took his first steps today. And I can’t stop crying from pride for this amazing little guy. 😭
If you think AGI is a real thing (like @sama and @OpenAI clearly do), then I dare you to answer this:
When a possible AGI is developed, what test/measurement would you do that would convince you it really is an AGI?
p(doom) estimates aren’t predictions, they’re Rorschach tests:
they tell us nothing about future AI, but they say a lot about the psychology of the individual involved.
For context: I say this as a STEM prof married to a humanities prof who is smarter than me in basically all respects and measures—and who sees so much of the tech community’s current problems reflected so clearly in history. But history is humanities so it’s not valuable, right?
Wrote op-ed w/ Nobel Prize winning economist @paulmromer in which we explore the power of mathematical functions (and mathematical thinking) to provide a clear debunking of AI sentience and to reveal what's been largely missing from the debate. 🧵 1/14
these AI apocalypse estimates are completely unscientific, just made-up numbers, there's nothing meaningful to support them. And AI experts are biased: they benefit from the impression that AI is more powerful than it is and could easily deceive themselves into believing it.
Here's a wild--but simple!--idea that I think might bridge a lot of ideological gaps in the heated debates concerning AI and help people assess & discuss & confront risks of all kinds:
What if instead of talking about "AI" we talk about "automation"?
Let's have a look:
I'm trying to keep an open mind, but I have decidedly mixed--mostly critical--feelings about this. Of course it's just a tiny statement so hard to pin down what it means, but allow me to unpack it with some reactions in this 🧵
Another sign-on statement about the existential risks of AI- this one signed by Google, Microsoft, OpenAI and other company execs and a slew of academics. The single-sentence statement, coordinated by the Center for AI Safety, is here.
It’s amazing how many people seem to think diversity lowers an organization’s performance—the same people who enthusiastically diversify their investment portfolio to improve its performance.
It’s the same math, y’all.
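The portfolio half of that analogy is standard finance: averaging many uncorrelated bets keeps the expected return while shrinking the volatility. A quick simulation (a toy sketch with made-up numbers of my own, not anything from the tweet) shows the effect:

```python
import random

random.seed(0)

def asset_return():
    # Hypothetical asset: 7% mean annual return, 20% volatility.
    return random.gauss(0.07, 0.20)

trials = 10_000

# One asset vs. an equal-weight portfolio of 10 uncorrelated assets.
single = [asset_return() for _ in range(trials)]
diversified = [sum(asset_return() for _ in range(10)) / 10 for _ in range(trials)]

def stdev(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# Same expected return either way; the diversified portfolio's
# volatility shrinks by roughly a factor of sqrt(10) ≈ 3.2.
print(round(stdev(single), 3), round(stdev(diversified), 3))
```

The variance-reduction factor is the square root of the number of independent components, which is the "same math" the tweet is gesturing at.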
Someone on Twitter: Look at this amazing thing AI did when I was talking to it!
Tech journalists: Amazing new AI behavior emerges!
Someone else on Twitter: Explains why it’s not as amazing as it seems.
Tech journalists: Internet fooled by AI “abilities”
Lather, rinse, repeat.
Does a 6mo baby have AGI? How about a 1yo? 3yo? 10yo? At what point does a human obtain AGI?
If you can't answer this, how are you giving timelines for when computers will obtain AGI?
Fascinating, thought-provoking @NewYorker essay by Ted Chiang--highly recommend!! Shines illuminating economic light on AI.
I have one quibble below for those interested--honestly not sure how much it undercuts his overall argument--curious your thoughts!
Next: Superintelligence "could lead to ... human extinction. ... We believe [superintelligence] could arrive this decade."
(a) technology doesn't "arrive", it is built and sold. Your company is trying to build and sell this dangerous tech. Own up to that.
When I see that X% percent of "AI experts" believe there's a Y% chance AI will kill us all, sorry but my reaction is yeah that's what they want us to think so we are in awe of their godlike power and trust them to save us. It's not science.
Since some ppl now follow me solely for my wife's snarky witticisms (eg "nerd drama") and disinterest in tech CEO comings and goings, here's the latest.
I asked what she thinks of AI killing us all. She said "I'm far more afraid of megalomaniacal billionaires killing us all".
A big oversimplification but it’s striking me that human learning and creativity is largely about *extrapolation* whereas in our current AI image and word generators (Midjourney, ChatGPT, etc) it’s largely about *interpolation*.
Here’s what I mean. 🧵
1/7
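A minimal illustration of the interpolation/extrapolation gap (a toy example of my own, not taken from the thread): fit a straight line to points sampled from y = x² on [0, 1]; predictions inside the training range stay close, while predictions far outside diverge badly.

```python
# Training data: y = x^2 sampled on [0, 1].
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [x ** 2 for x in xs]

# Ordinary least-squares fit of y = a + b*x, done by hand.
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

def predict(x):
    return a + b * x

err_inside = abs(predict(0.5) - 0.5 ** 2)   # interpolation: small error
err_outside = abs(predict(3.0) - 3.0 ** 2)  # extrapolation: large error
print(err_inside, err_outside)
```

Inside the training range the linear fit is off by about 0.125; at x = 3 it is off by more than 6. Models that only interpolate look competent exactly where their training data lives.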
This raises the question: what *could* kill us all? One obvious answer is nuclear war, but nobody would be impressed by a sexy new philosophical movement whose main conclusion is that... nuclear war is bad.
So they had to dig deeper and find a less obvious x-risk. They chose AI.
I’m so glad we have all these tech CEO college dropouts and Twitter grifters to explain superconductor physics to us so we don’t have to rely on actual scientists or journalists for our information.
Curious how many financial ties there are between the people building the technology that's supposedly going to wipe us all out and the people supposedly fighting to protect us from the technology that's supposedly going to wipe us all out.
NEW: Tech billionaires with ties to top AI firms are funding the salaries of AI staffers in the key congressional offices working to regulate AI. The program is part of a broader network that's pushing Washington to focus on AI's apocalyptic potential.
Hence the very public statements we keep seeing about AI extinction risk--not AI harms like disinformation, election interference, discrimination, massive job loss, concentration of economic power, etc.--the message HAD to be that AI could kill us all, for that's what EA needed.
After my wife went low-key viral for this gem, she clarified her views to me: "It's not that I don't know who these people are, or even that I don't care. I just think they're all douchebags."
I asked who she meant by "they," but she simply responded "all of them."
🤔🤣
Is this really how we’re advertising AI products now?!
It’s bad enough to halfheartedly discourage users from nefarious uses like disinfo—but to actively encourage it with ads like this is disgusting.
Do better AI firms, do better @Twitter, and please help @FTC.
Tech-bro Linguistics 101:
Let's have a quick look at one line from the @OpenAI blogpost CEO @sama recently wrote outlining the company's plan for the future: "We want AGI to empower humanity to maximally flourish in the universe."
What's wrong with this? Let's count the ways:
Thank you, @profgalloway!! Keep hearing people say how unusual it is for a tech CEO to call for regulation--but it's not, it's a standard tactic to stifle competition (build moats) and eschew responsibility for harms their products cause ("not our fault, we followed the regs").
We're falling for this (shit), again:
--Altman, CEO of OpenAI calls for US to regulate artificial intelligence (BBC) May '23
--Zuckerberg, We need a more active role for governments and regulators Oct. '20
--Facebook COO Sandberg calls for government regulation (CNN) June '19
Conjecture: any extreme harm you’re worried about from AI (launching nukes, creating a new pandemic, ..) will be done by humans using the AI as a tool long before a rogue AI does it autonomously.
I could be totally wrong, it’s just a wild guess, but I’ll put it out here anyway.
If AI were mostly used to accelerate progress in the biological sciences (eg protein folding, drug discovery, ...) I'd be a lot more excited and comfortable.
Instead I see it mostly used to accelerate surveillance (both govt and corporate) and the excesses & harms of capitalism.
Hi all - this is Emily, @ProfNoahGian's wife. For those wondering about the white dog in Noah’s video - that's our Violet. We adopted her 9 years ago, when she was still almost a puppy, from @GPRAtlanta. Like George, she had been a stray.
Many AI peeps I follow were excited last month about the bold FTC blog post inveighing against AI hype (Keep your AI claims in check). The same FTC employee just posted another official blog post (AI deception for sale) on deepfakes/generative AI, and it is 🔥. Some highlights:
I'm continually amazed that AI researchers see challenges in society that are clearly unsolvable yet think the non-biological versions will be solvable. If AIs are to be as smart & complex as humans, which OpenAI seems to believe, then AIs will be just as ungovernable as humans.
What we can all learn from this is that plagiarism accusations are a conspiratorial hit piece when they concern your friends and they're important revelations when they concern your enemies.
although not a complete solution, raising awareness of it is better than nothing.
we are curious to hear ideas, and will have some events soon to discuss more.
My wife is a music history prof, today she wrote in @BostonGlobe to fight for the arts. She's pre-tenure yet standing up against her university's president and top admins. I am humbled w/ pride for her bravery & eloquence. Please retweet✊(full text below)
Here’s a different characterization:
AI safety focuses on made up problems that don’t exist, AI ethics focuses on real problems that affect real people.
For those who seem to be confused:
“AI safety” (as a field) has nothing to do with the woke Gemini debacle. That is a result of “AI ethics” - a completely different thing:
AI ethics: focussed on stuff like algorithmic bias. Very woke & left-leaning. Dislike transhumanism & EA &…
From @emilymbender's excellent take on the voluntary AI commitments: “We pinky promise to be good, now please go away while we continue to practice massive data theft while creating poorly engineered everything machines that can’t possibly be evaluated.”🔥
Opening line: “Superintelligence will be the most impactful technology humanity has ever invented.”
Not “may” but “will”. Already frames this post as advertising not science.
And I’m biased but I’d call math the most impactful tech since it powers all AI & so much else too 😉
What irks me (as a mathematician) is so many people rush to state their p(AI doom) without defining what the heck this is. A probability estimate is meaningless if the event isn't well-defined.
Here's what I mean 1/6
Working on an essay about p(doom).
Some folks estimate it at zero, some much higher.
Has anyone actually shown their work (ie how they got to their estimates)?
Indeed, EA is essentially a point system where you weigh impact on human lives times likelihood of the event--and the basic trap of doing this is that a human extinction event, even if it's one-in-a-million, is infinite points so dominates even high likelihood large impact harms.
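That trap is just arithmetic. A two-line sketch (with made-up numbers of my own, purely illustrative) makes it concrete: once any outcome is scored as infinite, it has infinite expected value at any nonzero probability.

```python
# Naive expected-value scoring in the style described above:
# score = probability * harm (harm measured in lives, say).
risks = {
    "extinction, one-in-a-million": (1e-6, float("inf")),
    "certain harm to a billion people": (0.99, 1e9),
}
scores = {name: p * harm for name, (p, harm) in risks.items()}

# Any nonzero probability times infinity is still infinity, so the
# extinction term dominates every finite risk, however likely.
print(scores)
```

This is why, under that point system, an arbitrarily improbable extinction scenario always outranks near-certain large-scale harms.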