Tim Tyler

@tim_tyler

385
Followers
210
Following
109
Media
4,922
Statuses

I'm a software engineer with lots of interests. Please feel free to check out my book on memetics.

Boston, Massachusetts
Joined March 2008
@tim_tyler
Tim Tyler
7 months
@ylecun A bit of a funny definition of losing: "Top 10 Companies by Market Cap in 2023 #1 Apple Technology $2.728 trillion - #2 Microsoft Technology $2.344 trillion ..."
6
0
26
@tim_tyler
Tim Tyler
7 months
@ylecun I guess the real problem here is that you are cherry-picking the problem area. Google runs on proprietary software. Facebook and Instagram are proprietary. Amazon is proprietary. All the main clouds are proprietary. Sure: open source is used in some areas.
6
0
26
@tim_tyler
Tim Tyler
9 months
@iamtrask Re:"Contradicting datapoints are taken as a higher truth than agreement." I think that's just how a standard Bayesian update works. Confirming evidence is weak evidence for a hypothesis - while falsifying evidence is strong evidence against it.
1
0
24
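The point above about confirming versus falsifying evidence can be sketched with a toy Bayes update. The numbers below are illustrative assumptions, not from the original tweet: a confirming observation that is also fairly likely under the rival hypothesis barely moves the posterior, while a falsifying observation that is impossible under the hypothesis eliminates it.

```python
# Toy Bayesian update: why falsifying evidence is stronger than
# confirming evidence. All numbers here are illustrative assumptions.

prior = 0.5  # P(H), e.g. the hypothesis "all swans are white"

# Confirming observation (a white swan): certain under H, but also
# quite likely even if H is false.
p_obs_given_h = 1.0
p_obs_given_not_h = 0.9

posterior_confirm = (p_obs_given_h * prior) / (
    p_obs_given_h * prior + p_obs_given_not_h * (1 - prior)
)

# Falsifying observation (a black swan): impossible under H.
p_black_given_h = 0.0
p_black_given_not_h = 0.1

posterior_falsify = (p_black_given_h * prior) / (
    p_black_given_h * prior + p_black_given_not_h * (1 - prior)
)

print(round(posterior_confirm, 3))  # small nudge upward: 0.526
print(posterior_falsify)            # hypothesis eliminated: 0.0
```

The asymmetry comes entirely from the likelihood ratios: confirmation with a ratio near 1 is weak, while a zero likelihood under the hypothesis is decisive.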
@tim_tyler
Tim Tyler
6 months
@PauseAI @JosephJacks_ @ylecun Dominance is not even a central issue. What happened to: "the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else"...?
1
0
15
@tim_tyler
Tim Tyler
6 months
@_andreamiotti @jasonhausenloy This is the "trust us, we got this" security paradigm.
1
0
15
@tim_tyler
Tim Tyler
7 months
@ylecun @Jake_Browning00 All knowledge is expressible using sequences of 1s and 0s.
6
0
12
@tim_tyler
Tim Tyler
1 year
@albrgr @lukeprog This is not how most regulation works. Usually products are regulated - not hardware or software involved in their implementation.
2
0
12
@tim_tyler
Tim Tyler
8 months
@RokoMijic Consensus in evo. biology is that there are typically many causes of aging - in large creatures. If one mechanism killed you faster than the others, then it would attract maintenance resources. The result is many different things conspiring to kill you at around the same time.
5
0
12
@tim_tyler
Tim Tyler
1 year
@RespectToX @d_feldman The first hit is often free.
0
0
10
@tim_tyler
Tim Tyler
6 months
Free speech for chatbots.
3
2
10
@tim_tyler
Tim Tyler
1 year
"Human-level AI" is a dumb concept. By the time machines are at least "human-level" at 50% of the things humans often do, they will be a million times better at other things. Since it's a poor-quality concept, it's best to ditch it before it further messes up thinking and debates
0
0
8
@tim_tyler
Tim Tyler
1 year
Who benefits from doomer alarmism? Probably current incumbents - since it promotes overregulation, and so raises the barrier to entry.
2
0
9
@tim_tyler
Tim Tyler
7 months
@romanyam Regarding comparisons with fire and language - I would go back further. We are likely to be facing a genetic takeover. The last one of those was around 4 billion years ago.
2
0
5
@tim_tyler
Tim Tyler
10 months
Imagine an old cracked dam. It's going to fail. You must protect those downstream. You can block holes (to buy time) or enlarge holes (to allow water through and reduce pressure on the dam). Perhaps this second strategy is neglected because it seems more risky and less obvious.
0
2
9
@tim_tyler
Tim Tyler
8 months
@RichardSSutton This seems like the "lie down and close your eyes" plan.
0
0
7
@tim_tyler
Tim Tyler
1 year
Mitigating the risk of extinction from ignorance and stupidity should also be a global priority.
0
0
7
@tim_tyler
Tim Tyler
1 year
@ylecun @per_arneng You don't know how to "guarantee AI safety" either.
1
0
8
@tim_tyler
Tim Tyler
1 year
Banning things that people want has been attempted before - alcohol prohibition, prostitution and drugs, for example. It creates black markets which are run by criminals and are difficult to regulate.
1
1
7
@tim_tyler
Tim Tyler
1 year
@leecronin This one can be looked up:
0
0
6
@tim_tyler
Tim Tyler
7 months
@robinhanson This reminds me of the climate change concerns. We're going to get machine superintelligence this century. That dwarfs these other minor concerns.
1
0
7
@tim_tyler
Tim Tyler
1 year
@robinhanson The quote is also nuts: "members of the same race, relative to the species as a whole, are related to one another (rH = 0.18–0.26) almost as closely as half-siblings (rH = 0.25)." Surely that indicates that they are mixing together different ways of measuring relatedness.
0
0
6
@tim_tyler
Tim Tyler
1 year
@PostOpinions @MaxBoot UK and US promised to defend Ukraine in the Budapest agreement - in exchange for nuclear disarmament. It seems as though they are not doing a good job.
3
1
6
@tim_tyler
Tim Tyler
5 months
@MatthewJBar The improvements over GPT-4 are all in an unreleased, vaporware product that we might get to see sometime next year.
0
0
7
@tim_tyler
Tim Tyler
11 months
The "p(doom)" term has gone viral. I think I may have come up with it. I've been using it since at least 2009 - with numerous of references in 2010:
4
0
7
@tim_tyler
Tim Tyler
10 months
@MikeAnissimov2 Counter-example: trade.
1
0
6
@tim_tyler
Tim Tyler
1 year
@robinhanson Or maybe the respondents accounted for evolutionary convergence.
1
0
7
@tim_tyler
Tim Tyler
9 months
@DrTechlash "I spoke to a few of those who signed the letter, and it was clear that they did not all agree entirely with everything it said" - that sounds fairly normal and doesn't seem like much of a news story.
0
0
7
@tim_tyler
Tim Tyler
7 years
The big bang is the belly button of the universe.
0
0
5
@tim_tyler
Tim Tyler
9 months
@realGeorgeHotz Computation doesn't depend on overwriting bits. For example, see reversible computation. The Landauer limit is irrelevant to reversible computation. Moore's law is different - since it is to do with dollars.
0
0
0
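The point above that computation doesn't depend on overwriting bits can be sketched with a reversible gate. This is an illustrative example, not from the original tweet: a CNOT gate flips the target bit when the control bit is 1, and it is its own inverse, so no information is destroyed and the Landauer cost of erasure never arises.

```python
# Reversible computation sketch: a CNOT gate flips the target bit
# when the control bit is 1. Because the gate is its own inverse,
# the input can always be recovered - nothing is irreversibly erased.

def cnot(control: int, target: int) -> tuple[int, int]:
    return control, target ^ control

# Applying the gate twice restores the original state on every input:
for a in (0, 1):
    for b in (0, 1):
        once = cnot(a, b)
        twice = cnot(*once)
        assert twice == (a, b)

print("CNOT is reversible on all inputs")
```

Universal reversible gates (e.g. the Toffoli gate) extend the same idea to arbitrary computation.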
@tim_tyler
Tim Tyler
9 months
@floates0x As many have previously noted, it is pretty easy to make a video just like this with ordinary magnets.
4
0
7
@tim_tyler
Tim Tyler
1 year
@Plinz As machine capabilities grow, first they complement humans, then they substitute for humans. The humans probably won't be unemployed - but wages could fall to what you would pay a machine.
1
0
6
@tim_tyler
Tim Tyler
1 year
Some claim that we only get one try at building superintelligence. Apparently, if we fail, we don't get to try again, because we are already dead. We are not allowed to make any civilization-destroying mistakes, but that's always true, and other mistakes seem permissible.
1
0
6
@tim_tyler
Tim Tyler
11 months
@evanrmurphy @pmddomingos @AndrewYNg @geoffreyhinton That's what he said. It is probably safe to assume that that's what he meant.
0
0
7
@tim_tyler
Tim Tyler
1 year
@RokoMijic Most forms of therapy are broken on purpose - through rewarding failure and cultivating dependency.
0
0
6
@tim_tyler
Tim Tyler
1 year
@Plinz Decensoring:
@mmitchell_ai
MMitchell
1 year
Just read the draft Generative AI guidelines that China dropped last week. If anything like this ends up becoming law, the US argument that we should tiptoe around regulation 'cos China will beat us will officially become hogwash. Here are some things that stood out. 🧵
27
186
736
1
0
5
@tim_tyler
Tim Tyler
9 months
@AdamBRozycki @mustafasuleyman It seems to be: what if the bad guys get hold of it? As if the bad guys were going to respect the terms of the software license in the first place. If license restrictions won't work, keeping the technology secret is sometimes proposed. The "trust us" model.
2
0
4
@tim_tyler
Tim Tyler
7 months
@octonion @leecronin @RosieRedfield @Nature The paper does not claim that planets like Earth are closed systems in the first place.
2
0
4
@tim_tyler
Tim Tyler
9 months
@lVlarty @AISafetyMemes @tszzl Destruction has a kind of thermodynamic advantage over construction. The saying goes "it is easier to destroy than create". However, as you say, this is missing some factors from the accounting - and if you include them, the conclusion is often reversed.
1
0
6
@tim_tyler
Tim Tyler
7 months
@romanyam I'm still uncomfortable with the "we have to get it right on the first try" rhetoric. We have been trying and failing since the 1950s - and there will be plenty more failures in the future. Severe, unrecoverable setbacks are possible, but they aren't the only kind of failure.
0
0
1
@tim_tyler
Tim Tyler
1 year
@Brewgaloo_ Retroanglofuturism.
0
0
6
@tim_tyler
Tim Tyler
9 months
@janleike Open source LLMs can reliably survive and spread today - with the help of humans. The human in the loop slows them down a bit and the human has its own preferences - but it's not a big deal. As it turns out, autonomy is not worth all that much.
0
0
0
@tim_tyler
Tim Tyler
1 year
@leecronin Invalid syllogism alert.
0
0
6
@tim_tyler
Tim Tyler
6 months
1
0
5
@tim_tyler
Tim Tyler
7 months
@sashachapin That's not a real quote from Steve Jobs.
0
0
2
@tim_tyler
Tim Tyler
1 year
Listening to the doomers, I don't think many of them realize how bad their chances are with no machines. 99% of all species go extinct. The odds for large vertebrates are especially bad. Machines are our most likely route to salvation.
0
0
4
@tim_tyler
Tim Tyler
6 months
@ylecun The "AI doomer movement he jumpstarted"...? Reading the "Superintelligence" book, it seems as though most of Bostrom's points came from Yudkowsky.
0
0
4
@tim_tyler
Tim Tyler
1 year
@MatthewJBar IMO, the basic idea is right, but the timescale is more like decades. Short timescales were probably cited (e.g. in the "RSI" essay) because short timescales are scary and so stimulate donations.
1
0
5
@tim_tyler
Tim Tyler
6 months
@McaleerStephen It's a sneaky way to draw attention to your paper.
0
0
4
@tim_tyler
Tim Tyler
1 year
Is there any difference between the industrial revolution and the chance of destroying civilization with nuclear weapons? If so, it seems as though it is mainly because more powerful technology is involved.
1
0
4
@tim_tyler
Tim Tyler
10 months
@togelius @dioscuri @davidchalmers42 That is silly. Fire is "self amplifying". Life is "self amplifying". Life is not a simple system.
0
0
5
@tim_tyler
Tim Tyler
5 months
@TheStudyofWar @nataliabugayova The US is widely considered to have lost the Vietnam War. The US military is impressive - but distant international conflicts are always on a budget.
5
0
8
@tim_tyler
Tim Tyler
1 year
A meta-learning perspective suggests that back propagation is probably a dumb strategy. If you had an intelligent machine that understood your network to help update your weights then it could do much better. This may sound circular - but really it's more like a wheel.
0
0
3
@tim_tyler
Tim Tyler
13 years
Internet killed the copyright law.
0
1
0
@tim_tyler
Tim Tyler
9 months
@IPCC_CH That's a funny-looking projection.
0
0
0
@tim_tyler
Tim Tyler
7 months
@sashachapin What do you have against AI waifus?
0
0
0
@tim_tyler
Tim Tyler
1 year
@RokoMijic Unemployment rate in the USA is 3.4 percent - a 50 year low.
1
0
5
@tim_tyler
Tim Tyler
1 year
Banning things that many people want creates black markets run by criminals that are difficult to regulate. Look at what happened with prohibition, prostitution and drugs. The Prison Industrial Complex is the main beneficiary.
0
0
4
@tim_tyler
Tim Tyler
7 months
@RokoMijic So: maybe all the stuff about "misaligned" machines is an irrelevant distraction - and most of the problem is with what we already have: corporations, governments, humans and technology.
3
0
5
@tim_tyler
Tim Tyler
1 year
@ravisparikh Note that "now regrets his life’s work" is a quote from Cade Metz, not Geoffrey Hinton. It is not terribly clear what Hinton said on the topic.
1
0
5
@tim_tyler
Tim Tyler
7 months
@romanyam Assuming that superintelligence is uncontrollable, what are the options for preservation of our values? Here is what I can see: relinquishment, value transfer, merger and preservation via instrumental goals.
1
0
0
@tim_tyler
Tim Tyler
7 months
@RokoMijic Backpropagation is a dumb strategy. I mean: your network would learn faster and better if there was an expert system adjusting its weights in response to an error function - instead of a mindless "gradient descent" process.
1
0
5
@tim_tyler
Tim Tyler
1 year
@bengoertzel @SingularityNET Re: "If you believe, as we do, that at some point, AI - AGI - is going to be extremely, unbelievably potent, then it just does not make sense to open-source. It is a bad idea..." - so: let's keep it to the mega-corps and make sure no one else has the tech to defend against them.
0
0
5
@tim_tyler
Tim Tyler
7 months
@sashachapin Why do you think AI has been bad so far? I mean apart from your examples - which could be cherry picked or mistaken - did you attempt any sort of cost-benefit analysis?
0
0
0
@tim_tyler
Tim Tyler
9 months
@IPCC_CH Projecting out to 2100 without huge error bars means you don't think machine superintelligence is going to be a big deal.
0
0
0
@tim_tyler
Tim Tyler
1 year
Absurd beliefs actually signal group membership better. They are costly (see costly signalling theory) and difficult for outsiders to accurately fake.
@ExiledInfoHaz
ExiledInfoHaz
1 year
In other words, irrationality isn't a series of random mistakes and glitches. It's actually an adaptation where people bend their factual beliefs to be more like those of the group, even if they are fairly obviously wrong.
4
1
37
0
0
3
@tim_tyler
Tim Tyler
6 months
@ylecun The reason LLaMa 2 isn't open is that it doesn't have an open source license and instead places restrictions on commercial use.
0
0
3
@tim_tyler
Tim Tyler
5 months
@CFGeek @MatthewJBar It is not out of the door yet.
0
0
4
@tim_tyler
Tim Tyler
5 months
@keith_dorschner @mattwridley @DrJBhattacharya In science there is always doubt. Absence of doubt is faith. That means you can't update on evidence.
0
0
3
@tim_tyler
Tim Tyler
6 months
@GaryMarcus @NLeseul @sebkrier Usually, the people keeping the secrets *are* the bad actors. The reason they are keeping things secret is that they have something to hide that they don't want others to know about.
0
1
3
@tim_tyler
Tim Tyler
1 year
@RokoMijic "Eugenics" is conflated with "negative eugenics" in most people's minds. It thus suffers from guilt by association. As a result, few men explicitly endorse it either.
1
1
4
@tim_tyler
Tim Tyler
6 months
@liron @gdb @sama 1 is GPT-4 while 2 is GPT-5.
1
0
3
@tim_tyler
Tim Tyler
6 months
@DrNikkiTeran Why did they not control using a search engine? Is that because doing so would invalidate their conclusions?
0
0
4
@tim_tyler
Tim Tyler
11 months
@Roko__eth It only works if both sides can agree on what the outcome would be. In the example given, both sides thought they could win.
1
0
4
@tim_tyler
Tim Tyler
6 months
@AlecStapp The cop explained that this is conventionally called:
0
0
4
@tim_tyler
Tim Tyler
9 months
@primalpoly @liron @janleike @OpenAI The premise seems dubious, but working for OpenAI would make sense for a moral angel if they believed that OpenAI had a better chance of success than its rivals - and that by working for them there would be a chance to positively influence the outcome.
2
0
3
@tim_tyler
Tim Tyler
5 months
@MikePFrank @satyanadella Then everyone else will want a piece of the action.
0
0
4
@tim_tyler
Tim Tyler
1 year
2
0
3
@tim_tyler
Tim Tyler
6 months
@tegmark It seems odd to imagine machine superintelligence using very much solar power.
2
0
4
@tim_tyler
Tim Tyler
7 months
@octonion @leecronin @RosieRedfield @Nature Why you are talking about "closed systems" is not clear. That's not mentioned. Perhaps you are projecting.
1
0
4
@tim_tyler
Tim Tyler
6 months
@liron @gdb @sama Greg also says: "I think GPT-5 will just be different in some way that’s hard to describe now." I think this disagreement is mostly fabricated.
0
0
3
@tim_tyler
Tim Tyler
7 months
@leecronin @Nature Universal Darwinism unites physics and selection without "assemblies".
1
0
2
@tim_tyler
Tim Tyler
1 year
@jesswhittles Different timescales too. We will likely see the benefits before we get to any of the more serious risks.
1
1
4
@tim_tyler
Tim Tyler
1 year
@sama If they are the same, then why are there two names?
12
1
5
@tim_tyler
Tim Tyler
7 months
@ATabarrok In the beginning there were vi and emacs...
0
0
3
@tim_tyler
Tim Tyler
9 months
@Plinz Re: "Let states create their own FDAs" - competition might help, but this also seems like duplicated effort.
1
0
3
@tim_tyler
Tim Tyler
1 year
OpenAI seem to have fixed a good many of the security exploits that led to their product giving advice about dangerous or illegal activities. Hopefully no bunnies got hurt in the interim.
1
0
2
@tim_tyler
Tim Tyler
1 year
@RokoMijic I'm pretty sure that that contradicts the facts. Plenty of smart people are not terrified.
0
0
2
@tim_tyler
Tim Tyler
8 months
@Aella_Girl I've tried the "no-poo" approach in the past. My testimony is that "it works" - but then so does the "poo" approach. I wouldn't mind seeing some stats on split ends, length, dandruff, baldness, greyness, etc.
0
0
1
@tim_tyler
Tim Tyler
6 months
@RokoMijic @primalpoly @BogdanIonutCir2 @robertskmiles I think we can agree on that then. 2,000 years is very small compared to the likely timescale of alien contact. It's much more important to avoid making a mess of things right now than to go pedal-to-the-metal to keep up with hypothetical future encounters with advanced aliens.
0
0
3
@tim_tyler
Tim Tyler
7 months
@MikePFrank 3 billion years to make a cave man. 3 million more years to make Albert Einstein and John von Neumann. The last bit could come in a rush.
1
0
3
@tim_tyler
Tim Tyler
1 year
OK, doomer: you can put those missiles down now.
0
0
2
@tim_tyler
Tim Tyler
9 months
@liron The size of the human brain has been gradually decreasing for the past 28,000 to 34,000 years. Humans have been domesticated by their own organizations and institutions. Peak encephalization is in the past.
0
0
3
@tim_tyler
Tim Tyler
5 months
@jpsenescence @EleanorSheekey I read the article. It says: "Clearly, there is a software program, encoded in the DNA, that is far more advanced, with much greater algorithmic complexity, than any computer program." What? 3 gigabytes?
2
0
2
@tim_tyler
Tim Tyler
11 months
1
0
3