@McCallaPE
@Rainmaker1973
No, if this is a real-life scenario, the patient would be right to be optimistic: 50% is the average success rate across all procedures, and this doctor has shown himself to be better than average.
It would be correct to be afraid if it were a true 50% chance, but it's not.
@mezaoptimizer
Ilya seems very trustworthy, but so did Sam.
Judging by the way they talk, they both seem to understand and care about x-risk, so either one of them is lying, or something strange is going on.
@hyprturing
I have 8 years of experience and I'm not finding a job, even though I got tons of offers and found plenty of openings over the last few years.
Finding a job with no experience is even harder, so it makes sense that they feel hopeless.
@ylecun
@AndrewCritchCA
1. In most cases, you appear dismissive of the risks. I think it would be good for your credibility to acknowledge them more often, and more clearly.
2. Humans can make mistakes.
3. Calling them "ridiculous" or "fear mongering" is not a counter-argument.
@greentexts_bot
I went on vacation for 2 weeks and barely met any Japanese women. They mostly keep to themselves and are kind of hard to approach, but to be fair, I didn't try that much; I mostly went to tourist places.
@PauseAI
@ylecun
Is it below him? He shows this kind of behavior pretty consistently. I think this is just him.
He's certainly exceptionally talented and capable in his field, but he's also arrogant, and doesn't realize when he's wrong, a sadly common flaw among smart people.
@VectVapor
@eigenrobot
I think it's because they think cheek fat makes them look "fat", which is obviously bullshit, because they're usually in great shape. It's like removing breasts because they're "fat".
Technically true, but they're the attractive kind of fat.
@ezra_marc
A Tote Bag for $120? Yes. We must go with the helicopter. We entered grandma's house, which is also a cave now, and there is a group of people outraged because you didn't close the door. Now you turn around and you're in a forest, eating cheese, sitting on a rock.
@liron
@cdixon
"New business model".
This is pure cope. They desperately want you to stick to that normalcy bias that suggests there will always be jobs.
"Sure it will automate this, but you'll just pivot to that other thing".
No brother, there will be nothing left.
@TVNewsNow
I would like an answer to "Do you acknowledge the risk of extinction from AI" as a simple yes or no from governments.
That might wake some people up.
@danfaggella
Counterargument: if it brings an end to humanity, it's not a "Worthy Successor" by definition, so we should avoid all paths that lead there.
@thechosenberg
When I was employed, I got offers almost every month; now that I'm looking to get back to work after spending time on a personal project, I'm having no luck at all.
@RokoMijic
I'll do you one better:
If we don't rush AGI, we have a higher chance of surviving, of doing AGI properly, and of actually achieving off-world colonies. If we rush it, we most likely all die.
How much time do you think humanity has without AGI?
@AISafetyMemes
I thought investors had no power over the non-profit's decisions, and that's one of the main reasons they decided to make the parent company a non-profit.
@Levi7hart
I think it's an absurd notion that we can defend against hostile AGIs using "good" AGIs, but even assuming we can get a good one (which we probably can't), who would want to be constantly under attack by hostile AGIs? Is that Yann's dream future?
@ylecun
@Ciaran2493
@ESYudkowsky
Is that different from saying that humans are perfect and can't make mistakes? Because it sounds like that's what you're saying.
@AISafetyMemes
@apples_jimmy
The fact that there even are leaks is not very reassuring for a company that is building AGI. Imagine if the Manhattan Project had had leaks like these.
This whole endeavor is not being treated with the seriousness it deserves.
@AISafetyMemes
@_barrenwuffett
To be clear to any readers for why this is bullshit: most of us are very much pro technology, just not recklessly and blindly accelerating potentially world-ending ones, like AGI.
@norabelrose
> the white box thing. It has ~nothing to do with mech interp
I told you the term "white box" was going to lead to confusion. I know what you mean by it, but it's misleading to someone who just sees the term by itself.
But yes, he should have read it properly before replying.
@ESYudkowsky
@ciphergoth
I'm also guessing that many people hear "AI risk", immediately think "Terminator" without going any deeper, and conclude that the AI safety people must be scared of Terminator.
@robertskmiles
@pmddomingos
Also, we're training it on language and data containing a lot of useful information that took us thousands of years to develop, which is a massive advantage.
@AISafetyMemes
> AI could pose a threat to writers of suspense novels and science fiction
I like how they always try to minimize the impact as narrowly as possible.
No, of course it couldn't replace all writers, only the ones doing suspense and sci-fi, for some reason.
@demishassabis
@guardian
Good, but to an external observer, it doesn't look like what's happening is "taking risks seriously".
Ideally, everyone would slow down or pause capabilities research and focus on alignment/safety research; instead, it looks like everyone is rushing capabilities.
@quantum_oasis
@tszzl
He's a zealot; he doesn't update his position based on reason and evidence. He just loves having a cult following, or perhaps he can't change his persona now that he's the central figure of his cult.
I see a lot of people falling for normalcy bias when talking about AI x risk.
Seems like a really basic mistake to make, but people keep doing it for some reason.
Maybe most people aren't suited to think about these things.
@TolgaBilge_
He's correct, and you're correct.
They have effectively no control unless government steps in, and it should, but not only that: it should form an international collaboration to do it worldwide, otherwise its effectiveness is limited.
Like this:
@TJEvarts
@karpathy
Comments are needed to explain why you did something, not what you did; good code should already be self-explanatory about the "what". The "why" is often lost in context that goes beyond the immediately visible code.
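A tiny sketch of the distinction (the variables and scenario are made up purely for illustration):

```python
price_cents = 1999   # item price
quantity = 3

# "What" comment (redundant; the code already says this):
# multiply price by quantity
total_cents = price_cents * quantity

# "Why" comment (adds context the code alone can't convey):
# Prices are kept in integer cents to avoid floating-point
# rounding errors; convert to dollars only at the display boundary.
total_dollars = total_cents / 100

print(total_dollars)  # prints 59.97
```

The first comment restates the visible code; the second records a design decision a reader couldn't recover from the code itself.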
@AISafetyMemes
@RokoMijic
Yeah, but IME, telling them any of this is pointless. They either get angry and think you're making shit up, or they agree but act as if nothing changed, as if it somehow failed to register.
@kai_dogecoin
@Aella_Girl
There is a simpler explanation: "liking" and "wanting" are separate things, and communication is not always clear, or even present.
They might like something more but not want it, or not communicate it for other reasons (like shyness or embarrassment about saying it).
@danfaggella
When I tell people we'll have perfect VR indistinguishable from reality, making it pointless to do anything in the "real" world, most respond that they'd still prefer the real world.
I don't think they'll keep that preference once they try it.
@michael_timbs
@tszzl
10 years ago mine would have been around 70 years, so 100 would be understandable. Now it seems absurd. I'd be surprised if it takes 5 years, and I'd be shocked if it takes more than 15.
@repligate
@TechBroTino
A thing can be beautiful, and dangerous at the same time.
Just because it's beautiful, doesn't mean we should let it kill us all.
Accelerationists are just in denial of (or unable to grasp) the risk.
@NPCollapse
@tszzl
> You are as much an extension of its body as a shambling corpse is of its creator's necromantic will.
Holy shit, this goes hard as fuck.
@EverSemirBalam
@greentexts_bot
I went to Kabukichō a few times, and to a few pubs and bars.
The bars were mostly deserted, the pubs were mostly filled with foreigners, and the red-light district was mostly prostitutes and touts.
But again, I didn't try too hard; maybe if I had stayed longer it would have been better. It was fun anyway.
@gcolbourn
Getting rid of Sam doesn't stop AGI.
In fact, he might be better than many alternatives; at least he pays lip service to x-risk, whether he means it or not.
@sama
It's true that decline is bad, and should be rejected.
But the mistake some people (mostly e/acc) make is to think that means we should accelerate as fast as possible, which is also bad, and can lead to disaster.
That is especially true for very powerful technologies like AGI.
@AISafetyMemes
Let's just call them AI Notkilleveryoneism researchers. The "safety" term is overloaded, and is no longer useful to describe what we actually mean.
@GjMcGowan
@KolotaTyler
But I really, really hate to clean, and I'm completely fine cooking.
Currently I do both, but if I found someone who likes cleaning, I'd be much happier just cooking.
@TimHinchliffe
This guy is absolutely insane. He's saying we should wait until people die. I'm pretty sure he doesn't even know about the alignment problem.
@loose_shorts
@Italian347
@daniel_da_vid
If you don't count Thriller Bark and Zou as islands, then possibly only sky islands, as far as I know, but there are probably some that move. Also, manga spoilers: there is a special "island" that might or might not be able to move.
@kindgracekind
What if we get even more asteroids to collide with Earth?
Surely they will deflect each other, and we'll be safe.
We should give everyone a way to make asteroids.
@norabelrose
If they discover dangerous capabilities, that's the bare minimum.
The issue is if they don't discover them, while they are present.
Once you open source something, there's no taking it back.
@liron
@ilyasut
He's completely correct in saying it's not guaranteed, and we should encourage such honest behavior.
I'm not entirely happy with what they are doing safety-wise, I think they could do a lot better, and also, they should work towards enacting a global pause. They have influence.
@ylecun
@lmqlai
You don't even believe in the massive potential of the very technology you're developing, how can you be trusted with being the head of a major AI lab?
@seanxyue
@ylecun
"sentient"
Says everything you know about AI safety.
Maybe stop thinking we're talking about Terminator and The matrix, and read some actual papers on the subject?
@gcolbourn
He's right about the risk, but unfortunately his solutions are bad.
Making Grok "truth-seeking" is not a solution to misalignment. That's just a convergent instrumental goal for any capable AGI; it doesn't make it any less dangerous, only more capable.
Neuralink is at
@liron
@ylecun
"We have agency, we can decide not to deploy it"
is so incredibly easy to counter if one spends 10 seconds thinking about it.
You can't decide to stop others from deploying it, or convince them that it's "unsafe" if you have no proof, and you might only get "proof" after you -
@AISafetyMemes
@BasedDinesh
To most people AI is "robots", because when they hear AI, they think of Terminator.
I suspect embodiment will grab the interest of a lot of those people when it becomes good enough, but for the wrong reasons. The far greater danger is the intelligence, not the body.
@runaway_vol
@thechosenberg
Insects need duties; men can create art, invent things, shape society, or simply engage in hobbies, sports, or anything they want.
@RokoMijic
Correct, more or less.
Without AGI we eventually die.
With AGI, we have a chance, but it also presents a high risk of premature death.
We can work to reduce that risk.
Values should be dynamic, not locked.
Value divergence is unlikely post-AGI, but values might end up permanently misaligned.
@futuristflower
@Megli00
Jobs will be automated, but it's not guaranteed that we'll get our shit together and implement UBI.
The transition period will be hard.
@RokoMijic
Yes and no.
People will laugh and say "it's not good enough to replace real women", completely ignoring (as usual) that progress isn't finished.
But this is a slow way to go.
Likely too slow, so we might not even have to worry about it, there will be faster x-risks before this.