Mario Cannistrà Profile
Mario Cannistrà

@Blueyatagarasu

194
Followers
73
Following
84
Media
5,868
Statuses

This is the most important time in history for humanity to be wise. We must solve the AI alignment problem.

Joined June 2011
@Blueyatagarasu
Mario Cannistrà
2 months
@ArkohnLock @egregirls Good thing I'm an expert chemist, or this would have been very confusing.
4
0
320
@Blueyatagarasu
Mario Cannistrà
1 month
@McCallaPE @Rainmaker1973 No, if this is a real-life scenario, the patient would be right to be optimistic, because 50% is the average success rate of all procedures, and this doctor has shown to be better than average. It would be correct to be afraid if it was a true 50% chance, but it's not.
2
1
195
@Blueyatagarasu
Mario Cannistrà
5 months
@AISafetyMemes @ESYudkowsky "but the pope is obviously doing it just for regulatory capture, so his startup can catch up"
5
3
121
@Blueyatagarasu
Mario Cannistrà
6 months
@mezaoptimizer Ilya seems very trustworthy, but so did Sam. By the way they talk, they both seem to understand, and care about x risk, so either one of them was lying, or something strange is going on.
7
0
112
@Blueyatagarasu
Mario Cannistrà
2 months
@egregirls Can someone explain for non-chemists?
22
1
107
@Blueyatagarasu
Mario Cannistrà
22 days
@hyprturing I have 8 years of experience, and I'm not finding a job, while in the last few years I got tons of offers and found work easily. Finding a job with no experience is even harder; it makes sense that they feel hopeless.
6
1
103
@Blueyatagarasu
Mario Cannistrà
2 months
@softshikioni @eigenrobot The "already" implies that this is in any way better than before. It's not. It's a strict downgrade.
1
0
88
@Blueyatagarasu
Mario Cannistrà
3 months
@DUIbarbie @dissproportion A suicide cult masquerading as techno-optimism.
2
0
87
@Blueyatagarasu
Mario Cannistrà
8 months
@untitled01ipynb "It's sci-fi" is such a weak argument. Is it even an argument at all? It's equivalent to saying it doesn't exist yet. Yeah, no shit.
10
4
81
@Blueyatagarasu
Mario Cannistrà
8 months
@Levi7hart Sokath, his eyes uncovered.
2
1
35
@Blueyatagarasu
Mario Cannistrà
8 months
@ylecun @AndrewCritchCA 1. In most cases, you appear dismissive of the risks. I think it would be good for your credibility to acknowledge them more often, and more clearly. 2. Humans can make mistakes. 3. Calling them "ridiculous" or "fear mongering" is not a counter-argument.
1
0
28
@Blueyatagarasu
Mario Cannistrà
4 months
@ThePaleoCyborg @BasedBeffJezos @RokoMijic Those who never "backtrack" when new evidence is available are zealots.
1
0
27
@Blueyatagarasu
Mario Cannistrà
3 months
@AISafetyMemes If someone in his position found out about this just now, most people are going to be absolutely blindsided.
4
2
25
@Blueyatagarasu
Mario Cannistrà
4 months
@greentexts_bot I went on vacation for 2 weeks, and barely met any Japanese women. They mostly keep to themselves, they're kind of hard to approach, but to be fair, I didn't try that much, mostly went to tourist places.
3
0
24
@Blueyatagarasu
Mario Cannistrà
5 months
@norabelrose @primalpoly There is a risk-benefit analysis to be made, it's not so simple as "it's good" or "it's bad".
1
1
22
@Blueyatagarasu
Mario Cannistrà
10 months
@PauseAI @ylecun Is it below him? He shows this kind of behavior pretty consistently. I think this is just him. He's certainly exceptionally talented and capable in his field, but he's also arrogant, and doesn't realize when he's wrong, a sadly common flaw among smart people.
3
2
22
@Blueyatagarasu
Mario Cannistrà
2 months
@VectVapor @eigenrobot I think it's because they think cheek fat makes them look "fat", which is obviously bullshit, because they usually are in great shape. It's like removing breasts because they are "fat". Technically true, but they're the attractive kind of fat.
0
0
21
@Blueyatagarasu
Mario Cannistrà
4 months
@ezra_marc A Tote Bag for $120? Yes. We must go with the helicopter. We entered grandma's house, which is also a cave now, and there is a group of people outraged because you didn't close the door. Now you turn around and you're in a forest, eating cheese, sitting on a rock.
0
0
20
@Blueyatagarasu
Mario Cannistrà
3 months
@liron @cdixon "New business model". This is pure cope. They desperately want you to stick to that normalcy bias that suggests there will always be jobs. "Sure it will automate this, but you'll just pivot to that other thing". No brother, there will be nothing left.
0
0
19
@Blueyatagarasu
Mario Cannistrà
1 year
@TVNewsNow I would like an answer to "Do you acknowledge the risk of extinction from AI" as a simple yes or no from governments. That might wake some people up.
0
0
18
@Blueyatagarasu
Mario Cannistrà
8 months
@daniel_271828 @ylecun @AndrewCritchCA Especially if the existence of that teapot presented some sort of existential risk.
0
0
17
@Blueyatagarasu
Mario Cannistrà
4 months
@danfaggella Counterargument: if it brings an end to humanity, it's not a "Worthy Successor" by definition, so we should avoid all paths that lead there.
2
1
18
@Blueyatagarasu
Mario Cannistrà
8 months
@untitled01ipynb @ESYudkowsky @realGeorgeHotz And the argument against killer nanobots is? "It's sci-fi", isn't it? Weak.
2
0
16
@Blueyatagarasu
Mario Cannistrà
1 month
@thechosenberg When I was employed, I got offers almost every month; now that I'm looking to get back to work after working on a personal project, I'm having no luck at all.
0
1
17
@Blueyatagarasu
Mario Cannistrà
3 months
@RokoMijic I'll do you one better: If we don't rush AGI, we get a higher chance of surviving, and do AGI properly, and actually achieve off-world colonies. If we rush it, we most likely all die. How much time do you think humanity has without AGI?
3
1
16
@Blueyatagarasu
Mario Cannistrà
7 months
@ESYudkowsky "̵B̵u̵t̵ ̵w̵e̵ ̵w̵o̵n̵'̵t̵ ̵c̵o̵n̵n̵e̵c̵t̵ ̵t̵h̵e̵ ̵A̵I̵ ̵t̵o̵ ̵t̵h̵e̵ ̵i̵n̵t̵e̵r̵n̵e̵t̵"̵ "̵B̵u̵t̵ ̵w̵e̵ ̵w̵o̵n̵'̵t̵ ̵g̵i̵v̵e̵ ̵t̵h̵e̵ ̵A̵I̵ ̵a̵ ̵b̵o̵d̵y̵"̵ "But we won't -"
0
0
16
@Blueyatagarasu
Mario Cannistrà
3 months
@RokoMijic Insanely conservative estimate. I'd be surprised if it takes more than 5 years.
1
0
16
@Blueyatagarasu
Mario Cannistrà
6 months
@AISafetyMemes I thought investors had no power over the non-profit's decisions, and that's one of the main reasons they decided to make the parent company a non-profit.
2
0
15
@Blueyatagarasu
Mario Cannistrà
6 months
@Levi7hart I think it's an absurd notion that we can defend against hostile AGIs using "good" AGIs, but even assuming we can get a good one (which we probably can't), who would want to be constantly under attack by hostile AGIs? Is that Yann's dream future?
2
0
14
@Blueyatagarasu
Mario Cannistrà
5 months
@ylecun @Ciaran2493 @ESYudkowsky Is that different from saying that humans are perfect and can't make mistakes? Because it sounds like that's what you're saying.
0
0
14
@Blueyatagarasu
Mario Cannistrà
5 months
@alyssamvance Is this a joke? I can't believe it works like that in the US.
4
0
15
@Blueyatagarasu
Mario Cannistrà
2 months
@ylecun Interesting, how did you measure it? Do you happen to know the amount of self-awareness various animals have too?
1
0
14
@Blueyatagarasu
Mario Cannistrà
23 days
@NPCollapse "But there is no hidden world model! What you see is what you get! Perfectly interpretable!"
0
0
13
@Blueyatagarasu
Mario Cannistrà
5 months
@AISafetyMemes @apples_jimmy The fact that there even are leaks is not very reassuring for a company that is building AGI. Imagine if the Manhattan project had all these leaks. This whole endeavor is not being treated with the seriousness it deserves.
3
2
14
@Blueyatagarasu
Mario Cannistrà
5 months
@AISafetyMemes @_barrenwuffett To be clear to any readers for why this is bullshit: most of us are very much pro technology, just not recklessly and blindly accelerating potentially world-ending ones, like AGI.
1
0
13
@Blueyatagarasu
Mario Cannistrà
6 months
@norabelrose
> the white box thing. It has ~nothing to do with mech interp
I told you the term "white box" was going to lead to confusion. I know what you mean by it, but it's misleading to someone who just sees the term by itself. But yes, he should have read it properly before replying.
1
0
13
@Blueyatagarasu
Mario Cannistrà
5 months
@ESYudkowsky @ciphergoth I'm guessing also that many people hear "AI risk" and think "Terminator" immediately, without going any deeper, so the AI safety people must be scared about Terminator.
4
0
12
@Blueyatagarasu
Mario Cannistrà
6 months
@AISafetyMemes I'm referring specifically to this:
Tweet media one
1
0
11
@Blueyatagarasu
Mario Cannistrà
6 months
@nickfloats @AISafetyMemes @adamdangelo @ilyasut @sama @gdb @miramurati @eshear @satyanadella Yeah, that's an obvious conflict of interest, what were they thinking? Anyway, Ilya's regret doesn't necessarily mean it wasn't a safety issue. It could be that everyone moving to MS and continuing anyway is much worse for safety, and he might not have expected that.
3
0
12
@Blueyatagarasu
Mario Cannistrà
5 months
@mealreplacer "Press F12 and copy paste this script into that console to see my nudes"
0
0
11
@Blueyatagarasu
Mario Cannistrà
7 months
@robertskmiles @pmddomingos Also we're training it on language, and data that contains a lot of useful information that took us thousands of years to develop, which is a massive advantage.
0
0
12
@Blueyatagarasu
Mario Cannistrà
5 months
@AISafetyMemes
> AI could pose a threat to writers of suspense novels and science fiction
I like how they always try to minimize the impact as narrowly as possible. No, of course it couldn't replace all writers, only the ones doing suspense and sci-fi, for some reason.
0
0
12
@Blueyatagarasu
Mario Cannistrà
3 months
@BjarturTomas @NPCollapse Oh, it failed to debug multi-GPU training, so we're fine.
1
0
11
@Blueyatagarasu
Mario Cannistrà
5 months
@AgiDoomerAnon @AISafetyMemes @ESYudkowsky Need a prediction market for: "Will the pope bless a data center before 2026?"
1
0
11
@Blueyatagarasu
Mario Cannistrà
7 months
@demishassabis @guardian Good, but from an external observer, it doesn't seem like what's happening is "taking risks seriously". Ideally, everyone would slow down/pause capabilities research, and focus on alignment/safety research, instead it looks like everyone is rushing capabilities.
1
0
9
@Blueyatagarasu
Mario Cannistrà
4 months
@ylecun So Alan Turing would be a good candidate to win such a prize, since he warned about the risks of AI?
0
0
10
@Blueyatagarasu
Mario Cannistrà
9 months
@quantum_oasis @tszzl He's a zealot, he doesn't update his position with reason and evidence, he just loves to have a cult following, or perhaps can't change his persona now that he's the center figure of his cult.
0
0
10
@Blueyatagarasu
Mario Cannistrà
6 months
I see a lot of people falling for normalcy bias when talking about AI x risk. Seems like a really basic mistake to make, but people keep doing it for some reason. Maybe most people aren't suited to think about these things.
1
0
10
@Blueyatagarasu
Mario Cannistrà
4 months
@AaargHiel @Top1Rating @Rainmaker1973 Swim, eat food, good. Swim, eat food, good. Human says it's sad. Swim, eat food, good...
0
0
9
@Blueyatagarasu
Mario Cannistrà
3 months
@TolgaBilge_ He's correct, and you're correct. They have effectively no control, unless govt. steps in, and it should, but not only that, it should form an international collaboration to do it worldwide, otherwise effectiveness is limited. Like this:
2
0
10
@Blueyatagarasu
Mario Cannistrà
6 months
@GarrettPetersen This is a crime in Italy.
0
0
10
@Blueyatagarasu
Mario Cannistrà
2 months
@tszzl @Plinz What league are you in SC2?
1
0
10
@Blueyatagarasu
Mario Cannistrà
8 months
@AISafetyMemes @BasedBeffJezos Beff has no time for fact-checking, he must accelerate.
0
0
10
@Blueyatagarasu
Mario Cannistrà
30 days
@RokoMijic @JonasWustrack @ESYudkowsky @robinhanson And famously, ChatGPT is very robust in caring about those values, right?
2
0
10
@Blueyatagarasu
Mario Cannistrà
8 months
@TJEvarts @karpathy Comments are needed to explain "why" you did something, not what you did. Good code should be self-explanatory in "what" it does. The why is often lost in intricacies that go beyond the immediately visible code.
1
0
9
@Blueyatagarasu
Mario Cannistrà
1 month
@danfaggella Ah, nuance, the enemy of tribes.
1
0
9
@Blueyatagarasu
Mario Cannistrà
3 months
@AISafetyMemes @RokoMijic Yeah, but IME, telling them any of this is pointless. They either get angry and think you're making shit up, or they agree, but act as if nothing changed, as if it failed to register for them somehow.
1
0
9
@Blueyatagarasu
Mario Cannistrà
6 months
@TheZvi I think @robertskmiles videos are always a great intro to the topic, very clear explanations and fairly comprehensive.
0
0
9
@Blueyatagarasu
Mario Cannistrà
3 months
@freed_dfilan Holy shit, Americans will polarize anything.
0
0
9
@Blueyatagarasu
Mario Cannistrà
5 months
@ArthurB I agree with Bostrom, but that doesn't mean to rush AGI. We still want to minimize x risk from that, while not delaying it too much.
1
0
8
@Blueyatagarasu
Mario Cannistrà
11 months
@s_batzoglou @ylecun @MelMitchell1 @tegmark How is it weak? By the way, his argument is essentially the orthogonality thesis.
2
0
9
@Blueyatagarasu
Mario Cannistrà
1 month
@aphercotropist @iamaheron_ Yeah, probably started in Diablo 1 (white, blue, and gold), then Diablo II and eventually WoW made it ubiquitous.
0
0
9
@Blueyatagarasu
Mario Cannistrà
7 months
@kai_dogecoin @Aella_Girl There is a simpler explanation. "liking" and "wanting" are separate things, and communication is not always clear or even present. They might like something more, but not want it, or communicate it because of other reasons (like shyness/embarrassment in saying it).
0
0
8
@Blueyatagarasu
Mario Cannistrà
3 months
@DUIbarbie @dissproportion Murder-suicide, more accurately, since they're fine if everyone on Earth dies if they get what they want.
0
0
9
@Blueyatagarasu
Mario Cannistrà
2 months
@danfaggella When I tell people that we'll have perfect VR indistinguishable from reality, making it pointless to do anything in the "real" world, most respond that they'd still prefer the real world. I don't think they will keep that preference once they try it.
3
0
9
@Blueyatagarasu
Mario Cannistrà
9 months
@michael_timbs @tszzl 10 years ago mine would have been around 70 years, so 100 would be understandable. Now it seems absurd. I'd be surprised if it takes 5 years, and I'd be shocked if it takes more than 15.
1
0
7
@Blueyatagarasu
Mario Cannistrà
2 months
@repligate @TechBroTino A thing can be beautiful, and dangerous at the same time. Just because it's beautiful, doesn't mean we should let it kill us all. Accelerationists are just in denial of (or unable to grasp) the risk.
1
0
8
@Blueyatagarasu
Mario Cannistrà
3 months
@NPCollapse @tszzl
> You are as much an extension of its body as a shambling corpse is of its creator's necromantic will.
Holy shit, this goes hard as fuck.
1
0
8
@Blueyatagarasu
Mario Cannistrà
4 months
@EverSemirBalam @greentexts_bot I went to Kabukichō a few times, and to a few pubs, and bars. The bars were mostly deserted, the pubs mostly filled with foreigners, and the red district mostly prostitutes and touts. But again, I didn't try too much, maybe if I stayed more it would be better. It was fun anyway.
1
0
8
@Blueyatagarasu
Mario Cannistrà
6 months
@7ip7ap
> truthseeking e/acc
Never seen one of those. I'm joking, lest I be accused of lacking nuance.
0
0
7
@Blueyatagarasu
Mario Cannistrà
3 months
@gcolbourn Getting rid of Sam doesn't stop AGI. In fact, he might be better than many alternatives, at least he gives lip service to x risk, whether he means it or not.
1
0
8
@Blueyatagarasu
Mario Cannistrà
4 months
@sama It's true that decline is bad, and should be rejected. But the mistake some people (mostly e/acc) make is to think that means we should accelerate as fast as possible, which is also bad, and can lead to disaster. That is especially true for very powerful technologies like AGI.
1
1
8
@Blueyatagarasu
Mario Cannistrà
11 months
@AISafetyMemes @YaBoyFathoM @emollick After it kills us all: "" That's an empty string. No one left to mock/quote.
0
0
8
@Blueyatagarasu
Mario Cannistrà
5 months
@AISafetyMemes Let's just call them AI Notkilleveryoneism researchers. The "safety" term is overloaded, and is no longer useful to describe what we actually mean.
2
0
8
@Blueyatagarasu
Mario Cannistrà
6 months
@GjMcGowan @KolotaTyler But I really, really hate to clean, and I'm completely fine cooking. Currently I do both, but if I found someone who likes cleaning, I'd be much happier just cooking.
0
0
7
@Blueyatagarasu
Mario Cannistrà
1 year
@TimHinchliffe This guy is absolutely insane. He's saying we should wait until people die. I'm pretty sure he doesn't even know about the alignment problem.
0
0
8
@Blueyatagarasu
Mario Cannistrà
6 months
@JamesLucasIT @Culture_Crit It's the best when you're so good they accuse you of cheating.
0
0
7
@Blueyatagarasu
Mario Cannistrà
3 months
@loose_shorts @Italian347 @daniel_da_vid If you don't count Thriller Bark and Zou as islands, possibly only sky islands as far as I know, but there are probably some that move. Also manga spoilers: there is a special "island", that might or might not be able to move.
1
0
8
@Blueyatagarasu
Mario Cannistrà
2 months
@kindgracekind What if we get even more asteroids to collide with Earth? Surely they will deflect each other, and we'll be safe. We should give everyone a way to make asteroids.
0
0
8
@Blueyatagarasu
Mario Cannistrà
1 month
@norabelrose If they discover dangerous capabilities, that's the bare minimum. The issue is if they don't discover them, while they are present. Once you open source something, there's no taking it back.
2
0
8
@Blueyatagarasu
Mario Cannistrà
6 months
@liron @ilyasut He's completely correct in saying it's not guaranteed, and we should encourage such honest behavior. I'm not entirely happy with what they are doing safety-wise, I think they could do a lot better, and also, they should work towards enacting a global pause. They have influence.
1
0
7
@Blueyatagarasu
Mario Cannistrà
4 months
@ylecun @lmqlai You don't even believe in the massive potential of the very technology you're developing, how can you be trusted with being the head of a major AI lab?
1
0
8
@Blueyatagarasu
Mario Cannistrà
10 months
@seanxyue @ylecun "sentient" Says everything about what you know about AI safety. Maybe stop thinking we're talking about Terminator and The Matrix, and read some actual papers on the subject?
1
0
8
@Blueyatagarasu
Mario Cannistrà
5 months
@gcolbourn He's right about the risk, but unfortunately his solutions are bad. Making Grok "truth-seeking" is not a solution to misalignment. That's just a convergent instrumental goal for any capable AGI, it doesn't make it any less dangerous, it only makes it more capable. Neuralink is at
1
0
8
@Blueyatagarasu
Mario Cannistrà
7 months
@ylecun Why don't you post your credit card data then? Maybe all the passwords for all your accounts too, do away with that silly obscurantism.
1
0
8
@Blueyatagarasu
Mario Cannistrà
5 months
@liron @ylecun "We have agency, we can decide not to deploy it" is so incredibly easy to counter if one spends 10 seconds thinking about it. You can't decide to stop others from deploying it, or convince them that it's "unsafe" if you have no proof, and you might only get "proof" after you -
1
0
8
@Blueyatagarasu
Mario Cannistrà
5 months
@AISafetyMemes @BasedDinesh To most people AI is "robots", because when they hear AI, they think of Terminator. I suspect embodiment will grab the interest of a lot of those people when it becomes good enough, but for the wrong reasons. The far greater danger is the intelligence, not the body.
2
0
6
@Blueyatagarasu
Mario Cannistrà
11 months
@UubzU If ad hominem is the best you can do, you know your argument is garbage.
0
0
7
@Blueyatagarasu
Mario Cannistrà
3 months
@runaway_vol @thechosenberg Insects need duties; men can create art, invent things, shape society, or simply engage in hobbies, sports, or anything they want.
0
0
7
@Blueyatagarasu
Mario Cannistrà
4 months
@ThePaleoCyborg @BasedBeffJezos @RokoMijic I get it, but I don't like the implication that "backtracking" is a bad thing.
1
0
7
@Blueyatagarasu
Mario Cannistrà
7 months
@EvanHub @NPCollapse It's useless (or actually harmful), while pretending to be a solution.
@Josh__Clancy
Josh Clancy
7 months
Tweet media one
2
2
19
0
1
7
@Blueyatagarasu
Mario Cannistrà
10 months
@dedpva @nearcyan It does the opposite of solving AI safety, it only accelerates capability.
2
0
6
@Blueyatagarasu
Mario Cannistrà
5 months
@RokoMijic Correct, more or less. Without AGI we eventually die. With AGI, we have a chance, but it also presents a high risk of premature death. We can work to reduce that risk. Values should be dynamic, not locked. Value divergence unlikely post-AGI, but might be permanently misaligned.
4
2
6
@Blueyatagarasu
Mario Cannistrà
4 months
@futuristflower @Megli00 Jobs will be automated, but it's not guaranteed that we'll get our shit together and implement UBI. The transition period will be hard.
1
0
6
@Blueyatagarasu
Mario Cannistrà
4 months
@futuristflower @tszzl Bullshit, cope, deathism. I want to choose if/when to die, not leave it to chance.
0
0
7
@Blueyatagarasu
Mario Cannistrà
1 month
@RokoMijic Yes and no. People will laugh and say "it's not good enough to replace real women", completely ignoring (as usual) that progress isn't finished. But this is a slow way to go. Likely too slow, so we might not even have to worry about it; there will be faster x-risks before this.
0
0
7