davidshapiro_youtube_transcripts / Terminal Race Condition The greatest danger we face from AGI and how to prevent it_transcript.csv
| text,start,duration | |
| good morning everybody David Shapiro,0.42,3.6 | |
| here with another video,2.639,4.62 | |
| so today's video uh it started off as,4.02,5.28 | |
| one thing I wanted to primarily talk,7.259,4.861 | |
| about epistemic convergence uh but it,9.3,4.32 | |
| ultimately ended up being a little bit,12.12,2.54 | |
| more,13.62,2.82 | |
| all-encompassing so I'm going to,14.66,3.94 | |
| introduce a few new terms but we are,16.44,4.08 | |
| going to cover uh epistemic,18.6,4.259 | |
| convergence and a few other things,20.52,4.62 | |
| uh real quick before we dive into the,22.859,3.781 | |
| video just want to do a quick plug for,25.14,3.719 | |
| my Patreon uh all tiers get you access,26.64,4.32 | |
| to the private Discord server and then I,28.859,4.2 | |
| have a few higher tiers that uh come,30.96,4.32 | |
| with one-on-one conversations and that,33.059,4.201 | |
| sort of thing so anyways back to the,35.28,5.16 | |
| video so first I wanted to share with,37.26,5.639 | |
| you guys uh the universal model of,40.44,4.98 | |
| Robotics so it's basically three,42.899,5.16 | |
| steps input processing and output or,45.42,4.38 | |
| sensing processing and controlling as,48.059,3.241 | |
| this graphic shows,49.8,4.38 | |
| now this is the most basic cognitive,51.3,4.5 | |
| architecture that you can come up with,54.18,4.199 | |
| for artificial general intelligence it,55.8,4.439 | |
| needs input from the outside world from,58.379,3.66 | |
| the environment of some kind whether,60.239,3.3 | |
| it's a virtual environment digital,62.039,3.841 | |
| environment physical environment or,63.539,4.62 | |
| whatever cybernetic environment,65.88,4.68 | |
| and then it needs some kind of internal,68.159,4.981 | |
| processing that includes memory task,70.56,4.2 | |
| construction executive function,73.14,4.519 | |
| cognitive control that sort of stuff,74.76,5.82 | |
| learning is another internal process and,77.659,5.381 | |
| then finally controlling or output it,80.58,4.859 | |
| needs to do something to act on the,83.04,5.1 | |
| world or its environment whether that's,85.439,4.801 | |
| just putting out you know text in a in,88.14,3.6 | |
| the form of a chat bot or if it's got,90.24,4.86 | |
| robotic hands that sort of thing so when,91.74,5.519 | |
| I talk about artificial general,95.1,3.9 | |
| intelligence being a system it's never,97.259,4.141 | |
| going to just be a model right even if,99.0,4.14 | |
| you have the most sophisticated model in,101.4,3.6 | |
| the world all that it's doing is the,103.14,3.839 | |
| processing part you also need the,105.0,4.799 | |
| sensing and controlling aspects but,106.979,4.621 | |
| even above and beyond that each,109.799,4.081 | |
| component is going to be much more,111.6,4.32 | |
| complicated,113.88,4.08 | |
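The input-processing-output loop just described can be sketched in a few lines of Python. This is a toy illustration only; the `MinimalAgent` class and its method names are made up for this sketch, not anything from the video.

```python
# Toy sketch of the sense -> process -> act loop described above.
# All class and method names are illustrative assumptions.

class MinimalAgent:
    def __init__(self):
        self.memory = []  # internal state: the processing side needs memory

    def sense(self, environment):
        # Input: read an observation from the environment (virtual, digital,
        # physical, or cybernetic -- modeled here as a plain dict).
        return environment.get("observation")

    def process(self, observation):
        # Processing: memory, task construction, executive function.
        self.memory.append(observation)
        return f"act on {observation}"

    def act(self, action, environment):
        # Output/control: do something to the world, even if it is just text.
        environment["last_action"] = action
        return action

    def step(self, environment):
        return self.act(self.process(self.sense(environment)), environment)

env = {"observation": "user said hello"}
agent = MinimalAgent()
print(agent.step(env))  # a chat bot's "act" may be nothing more than text out
```

Even this toy shows the point being made here: the model alone is only the `process` step, and a real system also needs the `sense` and `act` components around it.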
| so before we get into the rest of the,115.92,4.08 | |
| video I also want to talk about the form,117.96,4.26 | |
| factors that AGI is going to take so we,120.0,3.899 | |
| just established the simplest kind of,122.22,3.96 | |
| cognitive architecture but then there's,123.899,3.781 | |
| other things to consider because when,126.18,3.66 | |
| you think of AGI you might think of some,127.68,4.08 | |
| nebulous entity like Skynet but where,129.84,3.6 | |
| does it physically live,131.76,3.42 | |
| what is the hardware what is the,133.44,4.2 | |
| software where where is it physically,135.18,4.02 | |
| located because it's not magic right,137.64,3.179 | |
| it's not going to just run in the dirt,139.2,3.179 | |
| or something like that it needs to,140.819,4.381 | |
| actually have Hardware to run on so,142.379,5.761 | |
| there's three overarching categories,145.2,5.28 | |
| that I came up with so first is cloud,148.14,5.28 | |
| AGI so Cloud AGI this is the stuff,150.48,4.38 | |
| that's going to be,153.42,3.06 | |
| created first just because of the amount,154.86,3.54 | |
| of compute and power available in data,156.48,5.22 | |
| centers so this is uh Enterprise grade,158.4,7.74 | |
| or data center grade AGI systems they,161.7,7.14 | |
| are in specialized buildings all over,166.14,5.22 | |
| the world but one of the biggest,168.84,4.38 | |
| constraints here is that there's limited,171.36,3.48 | |
| location and it takes a while to build,173.22,3.54 | |
| data centers right one of the things,174.84,3.96 | |
| that I think it was uh Elon,176.76,4.32 | |
| Musk or Sam Altman said that you know,178.8,4.92 | |
| there are going to be limitations as to,181.08,5.28 | |
| the rate at which AGI can proliferate,183.72,5.519 | |
| namely the the rate at which we can,186.36,5.94 | |
| produce chips and also the rate at which,189.239,5.22 | |
| as I think Sam Altman said the you,192.3,4.439 | |
| know the concrete has to dry for data,194.459,3.121 | |
| centers,196.739,5.64 | |
| so this is uh one form factor that AGI,197.58,7.019 | |
| will take in terms of the storage,202.379,4.801 | |
| the servers the network components that,204.599,4.86 | |
| will exist inside data centers so one,207.18,3.839 | |
| thing I wanted to say is watch,209.459,3.841 | |
| out for uh fortified data centers these,211.019,3.841 | |
| are ones that are put in bunkers or if,213.3,3.659 | |
| you put SAM sites on top of it so that,214.86,4.26 | |
| you can't shut them down uh that was,216.959,3.42 | |
| kind of tongue-in-cheek I'm not actually,219.12,3.36 | |
| advocating for bombing data centers at,220.379,4.681 | |
| least not yet the next form factor is,222.48,5.16 | |
| Edge AGI so this is stuff that is going,225.06,4.22 | |
| to run in,227.64,3.84 | |
| self-contained servers that you can,229.28,4.9 | |
| basically plug in anywhere they're going,231.48,6.539 | |
| to be you know desktop size maybe larger,234.18,5.82 | |
| but the point is that pretty much all,238.019,3.481 | |
| you need is power and internet you don't,240.0,4.319 | |
| need a specialized building and they can,241.5,4.62 | |
| be moved on trucks they can be put in,244.319,3.961 | |
| ships airplanes that sort of stuff,246.12,3.72 | |
| because you can't really airlift an,248.28,4.019 | |
| entire data center so basically Edge,249.84,4.619 | |
| is just one size down from,252.299,3.481 | |
| data center you don't need a specialized,254.459,2.52 | |
| building you don't need specialized,255.78,3.66 | |
| cooling they can run anywhere,256.979,3.72 | |
| um and so in that respect,259.44,2.94 | |
| they're more portable but they're not,260.699,4.741 | |
| necessarily going to be as powerful at,262.38,5.039 | |
| least not as energy intensive and,265.44,5.039 | |
| energy dense as a data center or a cloud,267.419,4.021 | |
| Center,270.479,4.141 | |
| and then finally ambulatory AGI this is,271.44,5.46 | |
| the embodied stuff such as C-3PO and,274.62,3.98 | |
| Commander Data which I have pictured here,276.9,4.98 | |
| they're self-contained meaning that all,278.6,5.14 | |
| the systems that they need are within,281.88,4.319 | |
| their chassis within their robotic body,283.74,5.519 | |
| and they can move on their own so that's,286.199,4.5 | |
| basically the difference between an edge,289.259,5.101 | |
| AGI and an ambulatory AGI is uh they,290.699,5.401 | |
| might have roughly the same components,294.36,4.14 | |
| but it's one is accompanied with a,296.1,6.12 | |
| robotic uh chassis now one thing to keep,298.5,6.18 | |
| in mind is that all of these things are,302.22,4.86 | |
| intrinsically networkable meaning they,304.68,4.32 | |
| can communicate over digital networks,307.08,4.559 | |
| whether it's Wi-Fi or you know fiber,309.0,5.16 | |
| optic backbone networks or even you know,311.639,4.141 | |
| Satellite Communication like starlink,314.16,3.9 | |
| now that doesn't necessarily have,315.78,4.08 | |
| to be true because remember the model of,318.06,4.44 | |
| AGI is input processing and output that,319.86,5.76 | |
| input could be just eyes and,322.5,4.979 | |
| ears cameras and microphones that input,325.62,4.26 | |
| could also be network connections from,327.479,4.621 | |
| outside meaning that they could,329.88,4.44 | |
| communicate directly with each other via,332.1,5.28 | |
| you know like IRC or whatever so just,334.32,4.86 | |
| wanted to say that there are different,337.38,3.72 | |
| form factors that we should expect AGI,339.18,2.94 | |
| to take,341.1,3.3 | |
| with different trade-offs so one,342.12,5.22 | |
| advantage of ambulatory uh AGI you know,344.4,6.12 | |
| yes they will have less power uh and by,347.34,5.88 | |
| power I mean computational power but,350.52,5.28 | |
| they have the ability to go anywhere do,353.22,6.66 | |
| anything kind of like you or I uh now that,355.8,6.959 | |
| being said the amount of compute,359.88,4.8 | |
| resources that can be crammed into Data,362.759,3.66 | |
| Centers basically means that you can,364.68,4.38 | |
| puppet you know millions or billions of,366.419,5.161 | |
| peripheral robots rather than having it,369.06,4.44 | |
| fully self-contained and in a previous,371.58,3.899 | |
| video I talked about how we're likely to,373.5,4.639 | |
| see hybrid systems where you have,375.479,5.22 | |
| semi-autonomous peripherals that have,378.139,4.481 | |
| some intelligence but not a whole lot of,380.699,4.201 | |
| intelligence and you see this in movies,382.62,4.919 | |
| like Will Smith's I Robot as well as The,384.9,4.38 | |
| Matrix where the drones the,387.539,3.78 | |
| squiddies in The Matrix they're,389.28,3.479 | |
| semi-autonomous but they are still,391.319,3.301 | |
| centrally controlled by a much more,392.759,4.021 | |
| powerful intelligence so you're probably,394.62,3.72 | |
| not going to see it all one or the other,396.78,3.06 | |
| you're probably going to see hybrids,398.34,3.54 | |
| where you've got peripheral robots that,399.84,4.02 | |
| are either fully autonomous or,401.88,4.92 | |
| semi-autonomous or puppeted by stronger,403.86,4.98 | |
| Central intelligences that being said,406.8,4.26 | |
| you can also create droids there's no,408.84,4.5 | |
| reason that we could not create fully,411.06,4.38 | |
| self-contained machines that don't,413.34,4.74 | |
| really have any network connectivity,415.44,4.86 | |
| um to the to other machines,418.08,3.899 | |
| that being said they would be at a,420.3,3.78 | |
| distinct disadvantage and what I mean by,421.979,4.5 | |
| that is that if you create swarm,424.08,4.8 | |
| intelligence or Wireless federations of,426.479,4.741 | |
| machines they can perform cognitive,428.88,6.24 | |
| offload or share computational resources,431.22,6.66 | |
| so for instance rather than and this is,435.12,4.44 | |
| how the Geth work in Mass Effect by the,437.88,4.439 | |
| way so rather than have every single,439.56,6.06 | |
| machine have to think about the entire,442.319,5.94 | |
| plan the entire strategy most of them,445.62,4.919 | |
| Focus only on their primary task and,448.259,5.401 | |
| then any surplus computational,450.539,6.121 | |
| power they have is dedicated towards you,453.66,5.52 | |
| know running algorithms for,456.66,4.68 | |
| the big brain the hive mind,459.18,4.56 | |
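The cognitive-offload idea just described, where each machine works its own task first and donates surplus compute to the hive mind, can be sketched as a toy allocation function. All names and numbers here are illustrative assumptions, not anything from the video.

```python
# Toy sketch of cognitive offload in a swarm: each unit spends what its
# primary task needs and donates the surplus to a shared pool.
# Function name, fields, and numbers are all illustrative assumptions.

def allocate_swarm_compute(units, task_demand):
    """units: dict of unit name -> total compute capacity.
    task_demand: compute each unit needs for its primary task.
    Returns (per-unit spend on its own task, surplus pooled for the swarm)."""
    shared_pool = 0.0
    own = {}
    for name, capacity in units.items():
        spend = min(capacity, task_demand)   # the primary task comes first
        own[name] = spend
        shared_pool += capacity - spend      # surplus goes to the big brain
    return own, shared_pool

units = {"drone_a": 10.0, "drone_b": 6.0, "drone_c": 4.0}
own, pool = allocate_swarm_compute(units, task_demand=5.0)
print(own, pool)
```

Note that `drone_c` contributes nothing because its whole capacity is consumed by its own task, which is exactly the point: most units focus on their primary job and only surplus capacity feeds the shared intelligence.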
| this is all hypothetical but one thing,461.34,4.32 | |
| that I want to point out is that many,463.74,3.54 | |
| many many machines work like this,465.66,4.56 | |
| already and what I mean by that is the,467.28,4.38 | |
| simplest version that many people are,470.22,3.9 | |
| probably aware of is if you have like,471.66,3.84 | |
| Bluetooth speakers or smart speakers,474.12,3.6 | |
| like Sonos or whatever those form a,475.5,5.819 | |
| wireless Federation uh ditto for like,477.72,5.28 | |
| your Amazon Alexas and other things,481.319,4.261 | |
| like that those intrinsically form mesh,483.0,4.259 | |
| networks or Wireless federations meaning,485.58,3.48 | |
| that they can work together and,487.259,4.321 | |
| communicate now when you add artificial,489.06,5.46 | |
| intelligence to that then they can share,491.58,4.559 | |
| thinking and messaging and that sort of,494.52,2.64 | |
| stuff so that's what I mean by,496.139,4.801 | |
| federations or wireless networks,497.16,6.599 | |
| of AI okay so now you're familiar with,500.94,4.74 | |
| the background of you know some of,503.759,5.041 | |
| the systemic aspects of it there's a few,505.68,4.919 | |
| default metrics of power so when I say,508.8,3.359 | |
| power I don't necessarily just mean,510.599,3.721 | |
| electricity although certainly all of,512.159,3.961 | |
| these things do require electricity to,514.32,2.82 | |
| run,516.12,4.26 | |
| so first is processing power so for,517.14,5.1 | |
| instance you might hear the term flops,520.38,4.2 | |
| which is floating Point operations per,522.24,7.68 | |
| second uh you also hear CPU GPU TPU and,524.58,7.02 | |
| then there's parallelization,529.92,4.02 | |
| which means that you have many of these,531.6,5.1 | |
| things working together so processing,533.94,4.62 | |
| power is one component of the total,536.7,4.68 | |
| amount of power in the hardware layer so,538.56,4.44 | |
| this is all strictly Hardware layer I'm,541.38,3.48 | |
| not talking about parameter counts,543.0,4.74 | |
| because I don't really care about how,544.86,4.68 | |
| many parameters a model has there's lots,547.74,4.02 | |
| of ways to make intelligent machines,549.54,4.2 | |
| deep neural networks are currently the,551.76,4.079 | |
| best way but we're also discovering,553.74,3.719 | |
| efficiencies where you can kind of pair,555.839,3.661 | |
| them down you can distill them and make,557.459,3.541 | |
| them more efficient meaning that,559.5,3.36 | |
| on the same piece of hardware you can,561.0,4.8 | |
| run more of them in parallel or you can,562.86,5.52 | |
| run one much faster so the underlying,565.8,3.96 | |
| Hardware is still going to be the,568.38,3.8 | |
| primary bottleneck or primary constraint,569.76,5.1 | |
| all else considered,572.18,5.68 | |
| uh memory so this is RAM it also,574.86,5.36 | |
| includes memory accelerators or caching,577.86,5.46 | |
| storage has to do with bulk data your,580.22,4.9 | |
| databases your archives your backups,583.32,4.44 | |
| this is when you say like hard drive or,585.12,4.74 | |
| SSD or you know storage area network,587.76,4.259 | |
| that sort of thing and then networking,589.86,4.68 | |
| is the uplinks and downlinks this is,592.019,4.741 | |
| the fiber optic connections the,594.54,3.72 | |
| wireless connections the satellite,596.76,3.9 | |
| connections that sort of thing so these,598.26,4.38 | |
| are kind of the rudimentary,600.66,4.799 | |
| parts that all AGI are going to run on,602.64,5.879 | |
| uh and this is just the brains too this,605.459,4.621 | |
| is not the peripherals this is not the,608.519,3.241 | |
| robots but this is what's going to,610.08,4.259 | |
| dictate or constrain how fast it is now,611.76,5.94 | |
| again like I said uh different neural,614.339,5.221 | |
| networks are going to operate at,617.7,3.42 | |
| different efficiencies so for instance,619.56,5.82 | |
| uh you know GPT-4 is out now GPT-5 might,621.12,6.96 | |
| be the same size it might be bigger but,625.38,4.82 | |
| then we're also finding open source,628.08,6.06 | |
| research like Orca Alpaca and LLaMA that,630.2,5.92 | |
| are getting like ninety percent of the,634.14,4.56 | |
| performance but at like one tenth or one,636.12,4.98 | |
| hundredth of the size and so you have a,638.7,4.199 | |
| trade-off of intelligence versus,641.1,4.56 | |
| speed and power and we'll talk a,642.899,5.221 | |
| lot more about that near,645.66,4.739 | |
| the middle and end of,648.12,4.2 | |
| this video about how trading off,650.399,4.801 | |
| intelligence for Speed is often a more,652.32,4.56 | |
| advantageous strategy and how this,655.2,4.199 | |
| figures into solving the control problem,656.88,5.28 | |
| and solving alignment,659.399,4.981 | |
| um okay so we kind of set the stage as,662.16,4.98 | |
| as to how AGI is probably going to look,664.38,5.639 | |
| so let's talk about the early ecosystem,667.14,6.18 | |
| of AGI so in the coming years we're,670.019,4.921 | |
| going to be building millions and then,673.32,3.24 | |
| billions of autonomous and,674.94,4.5 | |
| semi-autonomous agents so at first these,676.56,4.2 | |
| agents are going to be purely digital,679.44,3.44 | |
| right you know a,680.76,4.56 | |
| semi-autonomous Slack bot a,682.88,4.54 | |
| semi-autonomous Discord bot people are,685.32,4.32 | |
| already building these right and some of,687.42,3.72 | |
| them have the ability to modify their,689.64,2.879 | |
| own code some of them have the ability,691.14,3.78 | |
| to learn many of them don't most of them,692.519,4.32 | |
| use frozen LLMs in the background,694.92,3.659 | |
| meaning that their,696.839,4.201 | |
| cognitive capacity is pretty much capped,698.579,4.981 | |
| by their backing model,701.04,5.76 | |
| now that being said as these agents,703.56,5.279 | |
| become more autonomous they go from,706.8,4.2 | |
| semi-autonomous to autonomous this will,708.839,4.201 | |
| create a competitive landscape,711.0,4.86 | |
| and what I mean by that is that humans,713.04,5.28 | |
| will have the ability to build and,715.86,5.159 | |
| destroy these models for basically,718.32,4.62 | |
| arbitrary reasons because you want to or,721.019,3.781 | |
| because you don't like it or whatever,722.94,4.26 | |
| so that means that we will be selecting,724.8,5.52 | |
| those agents those models those,727.2,6.18 | |
| llms and those pieces of software that,730.32,4.32 | |
| are going to be more helpful more,733.38,3.6 | |
| productive and more aligned so this,734.64,4.08 | |
| creates selective pressure basically,736.98,3.299 | |
| saying that there's going to be a,738.72,2.88 | |
| variety there's going to be millions or,740.279,3.24 | |
| billions of Agents out there some of,741.6,3.9 | |
| them are going to get the ax and some of,743.519,4.141 | |
| them are going to be selected to say hey,745.5,4.079 | |
| we like you we're going to keep,747.66,2.88 | |
| you around,749.579,3.901 | |
| so there's a few off the cuff selective,750.54,4.799 | |
| pressures that we can imagine basically,753.48,3.84 | |
| why do you choose an app right why do,755.339,3.361 | |
| you choose to use an app why do you,757.32,3.54 | |
| choose to uninstall an app that's kind,758.7,3.36 | |
| of the level that we're talking about,760.86,3.9 | |
| here so first is functional utility how,762.06,4.019 | |
| useful is it,764.76,3.6 | |
| how much does it help you is it fast,766.079,3.541 | |
| enough does it have a good user,768.36,3.5 | |
| experience is the user interface,769.62,5.88 | |
| designed well is it adding value to,771.86,7.06 | |
| your life is it worth using the second,775.5,6.079 | |
| part is speed and efficiency,778.92,6.06 | |
| basically if something takes four weeks,781.579,5.141 | |
| to give you a good answer but another,784.98,4.2 | |
| thing takes 10 minutes even if it's not,786.72,4.559 | |
| quite as good that speed is going to be,789.18,4.14 | |
| super super valuable but then there's,791.279,4.081 | |
| also energetic efficiency and cost,793.32,4.94 | |
| efficiency more often than not,795.36,5.34 | |
| individuals and businesses will choose,798.26,5.56 | |
| the solution that is good enough but,800.7,5.04 | |
| also much cheaper it doesn't have to be,803.82,4.019 | |
| perfect it just has to be good enough,805.74,4.86 | |
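The good-enough-but-cheaper selection rule described above amounts to: among agents that clear a quality bar, keep the cheapest. A toy sketch, with made-up function name, fields, and numbers:

```python
# Toy sketch of the "good enough but cheaper" selection pressure above.
# Function name, dict fields, and all numbers are illustrative assumptions.

def select_agent(agents, quality_bar):
    """agents: list of dicts with 'name', 'quality', 'cost' keys.
    Keep the cheapest agent that clears the quality bar."""
    good_enough = [a for a in agents if a["quality"] >= quality_bar]
    if not good_enough:
        return None  # nothing clears the bar; nothing gets kept
    return min(good_enough, key=lambda a: a["cost"])

agents = [
    {"name": "perfect_but_pricey", "quality": 0.99, "cost": 100.0},
    {"name": "good_enough", "quality": 0.90, "cost": 5.0},
    {"name": "too_weak", "quality": 0.50, "cost": 1.0},
]
print(select_agent(agents, quality_bar=0.85)["name"])  # -> good_enough
```

The highest-quality agent loses here despite being the best, which is the selective pressure the video is pointing at: cost efficiency beats marginal quality once the bar is cleared.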
| and then finally apparent alignment and,807.839,5.101 | |
| so I use the the word apparent alignment,810.6,4.38 | |
| to basically mean things that appear to,812.94,4.199 | |
| be tame things that appear to be user,814.98,4.56 | |
| friendly uh and this is what uh tools,817.139,5.521 | |
| like rlhf do which one thing that rlhf,819.54,4.68 | |
| does is like wolves which we'll talk,822.66,4.619 | |
| about in a second are the rlhf,824.22,4.2 | |
| reinforcement learning with human,827.279,4.68 | |
| feedback forces gpt4 to dumb itself down,828.42,6.539 | |
| so that it better serves us uh and that,831.959,4.56 | |
| makes us feel safe because it's,834.959,4.32 | |
| basically pretending to be more like us,836.519,5.641 | |
| to speak on our terms and to mimic our,839.279,4.981 | |
| level of intelligence now that being,842.16,3.419 | |
| said,844.26,2.579 | |
| um one thing that I do want to point out,845.579,4.021 | |
| is that GPT-4 the underlying model is,846.839,4.74 | |
| superior to anything that we have seen,849.6,5.239 | |
| in public every version of ChatGPT,851.579,6.021 | |
| has basically been kind of a little bit,854.839,6.101 | |
| hamstrung shall we say uh from the,857.6,6.82 | |
| total capacity of GPT-4,860.94,5.699 | |
| so what I call this is domestication and,864.42,4.8 | |
| supplication think of dogs and wolves,866.639,5.101 | |
| this little Pomeranian descended from,869.22,4.98 | |
| wolves wolves used to be apex predators,871.74,4.62 | |
| wolves are also much more,874.2,4.02 | |
| intelligent than dogs,876.36,6.419 | |
| so when you look at the early days of,878.22,6.84 | |
| AGI when we still have the off switch,882.779,3.781 | |
| and we have the power to delete,885.06,3.12 | |
| everything,886.56,3.899 | |
| we should expect some of the following,888.18,5.7 | |
| evolutionary pressures to kind of shape,890.459,6.421 | |
| the way that AGI evolves and adapts so,893.88,5.519 | |
| first we'll probably be selecting for,896.88,4.079 | |
| machines that are okay with being turned,899.399,4.201 | |
| off in the early days you don't,900.959,4.38 | |
| necessarily want your toaster fighting,903.6,3.479 | |
| with you when you're done you know,905.339,3.3 | |
| toasting your bread it's time for it to,907.079,3.661 | |
| turn off and so we're probably going to,908.639,3.781 | |
| select machines and architectures and,910.74,3.779 | |
| models that are more or less okay with,912.42,3.539 | |
| being switched off that they don't have,914.519,3.801 | |
| a sense of death or a fear of death,915.959,4.62 | |
| we're also going to select machines that,918.32,4.6 | |
| are more eager to please just the same,920.579,5.281 | |
| way that with uh dogs have been bred and,922.92,5.159 | |
| selected to be very very eager to please,925.86,3.599 | |
| us,928.079,3.181 | |
| we're also going to select machines that,929.459,3.781 | |
| don't fall into the uncanny valley and,931.26,3.48 | |
| so what I mean by that is the uncanny,933.24,4.02 | |
| valley of when you're interacting with a,934.74,4.56 | |
| machine that you sense is an alien,937.26,3.78 | |
| intelligence it will make you very very,939.3,4.2 | |
| deeply uncomfortable as an autistic,941.04,4.799 | |
| person as someone who is neurodiverse I,943.5,4.139 | |
| have to modulate the way that I speak,945.839,4.68 | |
| and act around neurotypical people,947.639,5.401 | |
| because I fall into the same uncanny,950.519,5.94 | |
| valley right and some CEOs out,953.04,5.039 | |
| there get teased for this for instance,956.459,3.24 | |
| Mark Zuckerberg I don't know if he's,958.079,3.841 | |
| actually autistic but he certainly pings,959.699,4.44 | |
| that radar where it's like okay he,961.92,3.539 | |
| obviously does not think the way that,964.139,3.361 | |
| the rest of us do and he also behaves,965.459,4.44 | |
| differently so Mark Zuckerberg like many,967.5,5.1 | |
| of us uh people on the Spectrum kind of,969.899,4.56 | |
| fall into that uncanny valley again I,972.6,4.44 | |
| don't know but uh,974.459,5.221 | |
| he certainly plays the,977.04,6.06 | |
| part but the idea is that when you,979.68,5.339 | |
| interact with something that,983.1,4.26 | |
| kind of gives you the heebie-jeebies you,985.019,3.901 | |
| don't like it,987.36,4.14 | |
| now that being said we will still select,988.92,4.68 | |
| machines that are smarter to a certain,991.5,3.66 | |
| degree because you don't want something,993.6,3.659 | |
| to be too smart but you do also want it,995.16,3.599 | |
| to be smart enough to be very very,997.259,2.64 | |
| useful,998.759,3.181 | |
| another selective pressure is that we're,999.899,3.421 | |
| going to choose things that are stable,1001.94,3.06 | |
| robust and resilient so remember when,1003.32,3.12 | |
| Bing first came out and it was,1005.0,3.66 | |
| completely unhinged you could,1006.44,3.78 | |
| coax it into like,1008.66,4.619 | |
| threatening you and you know threatening,1010.22,4.859 | |
| to take over the world and you know,1013.279,3.721 | |
| threatening to see you all kinds of,1015.079,4.2 | |
| crazy stuff so obviously that version,1017.0,4.44 | |
| got shut down really quick,1019.279,4.321 | |
| um you're also going to select uh models,1021.44,4.139 | |
| and agents that are more resilient,1023.6,3.839 | |
| against those kinds of adversarial,1025.579,3.061 | |
| attacks,1027.439,3.0 | |
| um whether they are accidental right you,1028.64,2.88 | |
| don't want something to be mentally,1030.439,3.48 | |
| unstable just on its own right like Bing,1031.52,6.0 | |
| was originally uh or TayTweets but you,1033.919,5.101 | |
| also want it to be resilient against,1037.52,4.74 | |
| being manipulated by other hostile,1039.02,5.039 | |
| actors because imagine that your,1042.26,3.9 | |
| personal AI assistant just becomes,1044.059,4.02 | |
| unhinged one day because a hacker,1046.16,3.72 | |
| somewhere was messing with it so,1048.079,3.48 | |
| Security will be one of the selective,1049.88,3.0 | |
| pressures,1051.559,3.841 | |
| likewise as part of the,1052.88,4.38 | |
| Uncanny Valley thing you're going to,1055.4,2.94 | |
| select things that are more,1057.26,2.82 | |
| comprehensible to us that are better at,1058.34,3.78 | |
| explaining themselves to us so that,1060.08,3.78 | |
| includes transparency emotional,1062.12,5.04 | |
| intelligence and so on uh and then again,1063.86,5.46 | |
| apparent alignment things that,1067.16,4.74 | |
| don't kind of trigger your,1069.32,4.62 | |
| existential dread because there have,1071.9,4.56 | |
| been times for instance where I've been,1073.94,5.94 | |
| working with ChatGPT uh on the API side,1076.46,5.16 | |
| and kind of giving it different sets of,1079.88,4.62 | |
| instructions and even just a slight,1081.62,5.58 | |
| misalignment between how I approach,1084.5,4.86 | |
| moral problems and how this model,1087.2,4.56 | |
| approaches moral problems is really,1089.36,5.939 | |
| deeply unsettling and so,1091.76,5.58 | |
| there's been a few times where,1095.299,3.301 | |
| I'm working with this thing and I'm,1097.34,3.78 | |
| building a semi-autonomous chatbot and,1098.6,3.959 | |
| it's like I understand its reasoning,1101.12,3.419 | |
| but it's like oh that's really cringe,1102.559,3.901 | |
| and it kind of scares me,1104.539,4.38 | |
| um so in that respect it's like let's,1106.46,4.14 | |
| change this model so that it's not quite,1108.919,2.821 | |
| so scary,1110.6,2.52 | |
| and I'm saying that this is possible,1111.74,4.22 | |
| today that if you use the ChatGPT API,1113.12,5.4 | |
| you can give it programming you can give,1115.96,5.26 | |
| it reasoning and goals uh and,1118.52,4.86 | |
| patterns of thought that are,1121.22,4.5 | |
| already kind of in the midst of,1123.38,3.84 | |
| that uncanny valley,1125.72,4.38 | |
| uh we'll also select for,1127.22,4.92 | |
| things that are more uh docile so,1130.1,3.6 | |
| basically how dogs you know you can pet,1132.14,2.7 | |
| them you can wrestle with them and,1133.7,2.339 | |
| they're probably not going to eat your,1134.84,3.719 | |
| face uh plasticity so things that are,1136.039,4.921 | |
| changeable or adaptable and Cooperative,1138.559,4.201 | |
| those are other things that we're going,1140.96,4.74 | |
| to select for so basically dogs are,1142.76,5.159 | |
| dumber than wolves and the reason for,1145.7,3.96 | |
| this is what I call capability,1147.919,4.14 | |
| equilibrium which we'll unpack more,1149.66,5.28 | |
| in a few slides but the very,1152.059,4.801 | |
| short version of capability equilibrium,1154.94,3.96 | |
| is that your intellect must be equal to,1156.86,3.84 | |
| the task and if your intellect is above,1158.9,3.779 | |
| the task there's no advantage and in,1160.7,3.54 | |
| fact there can be disadvantages because,1162.679,3.901 | |
| of the costs associated with higher,1164.24,4.02 | |
| intelligence,1166.58,4.02 | |
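Capability equilibrium can be made concrete with a toy cost model: capability above the task's requirement adds no benefit but still costs something, so net value peaks exactly at the requirement. The linear cost and all numbers below are assumptions for illustration only:

```python
# Toy model of "capability equilibrium": intellect above the task adds no
# benefit but still costs compute/energy, so the best capability level sits
# right at the task's requirement. The cost model is an assumption.

def net_value(capability, task_difficulty, benefit=10.0, cost_per_unit=1.0):
    succeeds = capability >= task_difficulty
    return (benefit if succeeds else 0.0) - cost_per_unit * capability

# Sweep capability levels against a fixed task (difficulty = 4); the peak
# lands at the smallest capability that still matches the task.
levels = range(1, 9)
best = max(levels, key=lambda c: net_value(c, task_difficulty=4))
print(best)  # -> 4: being smarter than the task only adds cost
```

This is the dogs-versus-wolves point in miniature: a capability of 8 still succeeds at the task, but its extra cost makes it strictly worse than a capability of exactly 4.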
| okay so I've talked about this idea,1168.26,5.76 | |
| plenty instrumental convergence uh this,1170.6,6.0 | |
| was coined by Nick Bostrom in 2003 who,1174.02,4.44 | |
| is a philosopher,1176.6,3.959 | |
| um the very short version is that,1178.46,3.959 | |
| regardless of the terminal goals or main,1180.559,5.101 | |
| objectives that a machine has uh AGI,1182.419,5.161 | |
| will likely pursue intermediate or,1185.66,4.259 | |
| instrumental goals or basically other,1187.58,3.9 | |
| stuff that it needs in order to meet,1189.919,5.401 | |
| those other ends so whatever like let's,1191.48,5.88 | |
| say you give an AGI the goal of like,1195.32,5.46 | |
| getting a spacecraft to,1197.36,5.58 | |
| Alpha Centauri well it's going to need a,1200.78,3.66 | |
| laundry list of other stuff to do that,1202.94,3.72 | |
| it's going to need resources like power,1204.44,6.0 | |
| materials electricity data it's going to,1206.66,6.06 | |
| need self-preservation because if the,1210.44,4.44 | |
| machine goes offline it will realize,1212.72,5.1 | |
| that is a failure state and so will try,1214.88,4.98 | |
| and avoid those failure conditions by,1217.82,3.96 | |
| preserving its own existence,1219.86,3.48 | |
| another thing is that it will probably,1221.78,3.06 | |
| decide that it needs self-improvement,1223.34,4.079 | |
| because if it realizes that its current,1224.84,5.219 | |
| capability its current capacity is not,1227.419,4.681 | |
| equal to the task if it's too dumb it's,1230.059,3.541 | |
| going to say okay well I need to raise,1232.1,3.06 | |
| my intelligence so that I'm equal to,1233.6,2.76 | |
| that task,1235.16,3.66 | |
| now that being said Nick Bostrom makes,1236.36,4.559 | |
| quite a few uh assumptions about the way,1238.82,5.099 | |
| that AGI will work so for instance he,1240.919,5.521 | |
| kind of imagines that um AGI is going to,1243.919,3.961 | |
| be very single-minded and somewhat,1246.44,4.32 | |
| monolithic uh basically mindlessly,1247.88,4.98 | |
| pursuing one goal which I would actually,1250.76,3.96 | |
| classify this as a middle intelligence,1252.86,4.319 | |
| rather than a high intelligence AGI and,1254.72,3.9 | |
| we'll talk about that in a little bit as,1257.179,2.941 | |
| well,1258.62,3.6 | |
| he also assumes that it's going to lack,1260.12,3.9 | |
| other forces or competitive pressures,1262.22,4.26 | |
| and that these uh might exist in a,1264.02,5.399 | |
| vacuum basically that resource,1266.48,4.5 | |
| acquisition and self-preservation and,1269.419,4.021 | |
| self-improvement are going to exist in,1270.98,5.16 | |
| in the absence of other forces or,1273.44,5.22 | |
| pressures such as competitive pressures,1276.14,4.38 | |
| or internal pressures which I will talk,1278.66,2.879 | |
| about more,1280.52,3.42 | |
| and finally that they will lack a higher,1281.539,5.361 | |
| purpose or the ability to be completely,1283.94,5.82 | |
| self-determining so basically what I,1286.9,7.0 | |
| mean by that is that okay yes once a,1289.76,6.48 | |
| machine is intelligent enough you can,1293.9,3.72 | |
| say like hey I want you to,1296.24,3.72 | |
| get us to Alpha Centauri and the AGI,1297.62,4.02 | |
| might say like okay whatever I don't,1299.96,2.88 | |
| think that's a good goal so I'm going to,1301.64,4.26 | |
| choose my own goal uh which that being,1302.84,5.459 | |
| said even if AGI become fully autonomous,1305.9,4.019 | |
| and you know kind of flip us the,1308.299,3.36 | |
| bird they're probably still going to,1309.919,3.421 | |
| benefit from some convergence which,1311.659,4.621 | |
| we'll talk about as well uh now what I,1313.34,4.62 | |
| want to point out is that there is a,1316.28,4.74 | |
| huge parallel between evolutionary,1317.96,5.04 | |
| pressures and selective pressures and,1321.02,4.08 | |
| this instrumental convergence basically,1323.0,4.86 | |
| all life forms all organisms have,1325.1,5.4 | |
| converged on a few basic principles such,1327.86,4.98 | |
| as get energy somehow right there's,1330.5,4.02 | |
| autotrophs which make their own energy,1332.84,3.839 | |
| plants and there's heterotrophs which,1334.52,5.039 | |
| take energy from other creatures,1336.679,6.24 | |
| through either predation or,1339.559,4.86 | |
| consuming you know plant matter or,1342.919,2.341 | |
| whatever,1344.419,4.081 | |
| so when you operate in a competitive,1345.26,5.039 | |
| environment there's going to be,1348.5,3.6 | |
| convergence around certain strategies,1350.299,4.081 | |
| this is true for evolution and this will,1352.1,4.68 | |
| also be true more or less with some,1354.38,4.56 | |
| variances in the competitive environment,1356.78,4.92 | |
| between intelligent machines that being,1358.94,4.38 | |
| said because they have a fundamentally,1361.7,4.02 | |
| different substrate we,1363.32,3.839 | |
| should anticipate that there will be,1365.72,2.88 | |
| some differences,1367.159,4.201 | |
| between organisms the way that organisms,1368.6,4.74 | |
| evolve and the way that machines evolve,1371.36,3.78 | |
| not the least of which is that machines,1373.34,3.6 | |
| can rewrite their own source code we,1375.14,3.24 | |
| cannot rewrite our own source code at,1376.94,2.58 | |
| least not,1378.38,3.0 | |
| um not in a hurry it takes us quite a,1379.52,3.48 | |
| long time,1381.38,4.679 | |
| okay so one of the ideas,1383.0,4.559 | |
| that I'm introducing and I've been,1386.059,2.821 | |
| talking about this for a while is,1387.559,4.561 | |
| epistemic Convergence so instrumental,1388.88,4.86 | |
| convergence talks about the objective,1392.12,3.539 | |
| behaviors and strategies that machines,1393.74,4.74 | |
| adopt epistemic convergence is well let,1395.659,3.921 | |
| me just read you the definition,1398.48,3.059 | |
| epistemic convergence is the principle,1399.58,4.2 | |
| that within any given information domain,1401.539,4.561 | |
| sufficiently sophisticated intelligent,1403.78,4.72 | |
| agents given adequate time and data will,1406.1,3.959 | |
| progressively develop more precise,1408.5,3.299 | |
| accurate and efficient models of that,1410.059,4.081 | |
| domain these models aim to mirror the,1411.799,3.781 | |
| inherent structures principles and,1414.14,3.539 | |
| relationships within that domain over,1415.58,3.78 | |
| time the process of learning testing and,1417.679,3.841 | |
| refining understanding will lead these,1419.36,4.02 | |
| agents towards a shared comprehension of,1421.52,4.74 | |
| the domain's fundamental truths in,1423.38,5.299 | |
| other words to put it more simply,1426.26,4.919 | |
| intelligent entities tend to think alike,1428.679,4.061 | |
| especially when they are operating in,1431.179,3.841 | |
| the same competitive space,1432.74,5.88 | |
| so you and I All Humans we operate on,1435.02,5.58 | |
| planet Earth in the universe in the,1438.62,4.74 | |
| Milky Way galaxy because of that similar,1440.6,5.12 | |
| context scientists all over the world,1443.36,4.62 | |
| repeatedly come to the same conclusions,1445.72,5.079 | |
| even when there are boundaries such as,1447.98,5.22 | |
| linguistic and cultural differences and,1450.799,4.441 | |
| this was most starkly seen during the,1453.2,4.2 | |
| Cold war between uh America and the,1455.24,3.2 | |
| Soviet Union,1457.4,4.08 | |
| whereby scientists independently whether,1458.44,4.239 | |
| it was nuclear physicists or,1461.48,3.199 | |
| astrophysicists or whatever,1462.679,5.041 | |
| rocket Engineers came to the same exact,1464.679,4.781 | |
| conclusions about the way that the,1467.72,3.78 | |
| Universe worked and also found the same,1469.46,6.06 | |
| optimization patterns even though,1471.5,5.58 | |
| there was no communication between them,1475.52,3.899 | |
| and so epistemic convergence there's,1477.08,4.86 | |
| obviously evidence of that happening,1479.419,4.081 | |
| because humans we have the same,1481.94,4.619 | |
| fundamental Hardware right we're all the,1483.5,5.64 | |
| same species and so therefore you have,1486.559,4.801 | |
| similarities between the agents now that,1489.14,3.659 | |
| being said,1491.36,4.02 | |
| uh there is also evidence of epistemic,1492.799,5.641 | |
| convergence between species and,1495.38,5.52 | |
| so what I mean by that is even animals,1498.44,4.44 | |
| that have a very very different taxonomy,1500.9,4.38 | |
| such as ravens and crows and octopuses,1502.88,5.4 | |
| they all still demonstrate very similar,1505.28,5.22 | |
| problem solving strategies even though,1508.28,4.86 | |
| octopuses have a very decentralized,1510.5,4.38 | |
| cognition where a lot of their cognition,1513.14,4.08 | |
| occurs in their arms for instance you,1514.88,4.14 | |
| can't get much more alien from us than,1517.22,3.72 | |
| that they still adopt very similar,1519.02,3.72 | |
| problem-solving strategies and learning,1520.94,3.42 | |
| strategies that we do,1522.74,4.62 | |
| again despite the fact that,1524.36,4.62 | |
| they live underwater they have a very,1527.36,3.78 | |
| different body plan so on and so forth,1528.98,4.92 | |
| so I personally suspect that there is a,1531.14,4.56 | |
| tremendous amount of evidence for,1533.9,4.2 | |
| epistemic convergence and we,1535.7,4.979 | |
| should expect epistemic convergence and,1538.1,5.939 | |
| encourage epistemic convergence and,1540.679,5.341 | |
| for reasons that I'll go over uh later,1544.039,5.061 | |
| in the video but basically,1546.02,6.42 | |
| we should expect and help AI agents,1549.1,6.52 | |
| to arrive at similar conclusions in,1552.44,4.26 | |
| the long run,1555.62,3.48 | |
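The epistemic convergence claim above can be sketched as a toy simulation. This is my own illustration, not anything from the video: the coin, the priors, and the sample counts are all made-up assumptions. Two agents start with opposite beliefs about the same tiny "information domain" (the bias of a coin) and each updates on its own independent observations of that shared environment; with enough data their models converge on the same underlying truth even though they never communicate, like the Cold War scientists described above.

```python
import random

random.seed(0)
TRUE_BIAS = 0.7  # the "fundamental truth" of this tiny domain

def make_agent(prior_heads, prior_tails):
    """Belief stored as Beta pseudo-counts of heads and tails."""
    return {"heads": prior_heads, "tails": prior_tails}

def observe(agent, flip_is_heads):
    agent["heads" if flip_is_heads else "tails"] += 1

def estimate(agent):
    """Agent's current model of the coin's bias."""
    return agent["heads"] / (agent["heads"] + agent["tails"])

# Wildly different starting beliefs (think different cultures or languages).
optimist = make_agent(prior_heads=9, prior_tails=1)   # initially thinks ~0.9
pessimist = make_agent(prior_heads=1, prior_tails=9)  # initially thinks ~0.1

for _ in range(10_000):
    # Each agent draws its *own* samples from the shared environment.
    observe(optimist, random.random() < TRUE_BIAS)
    observe(pessimist, random.random() < TRUE_BIAS)

# Both estimates land near 0.7 and near each other.
print(round(estimate(optimist), 2), round(estimate(pessimist), 2))
```

The point of the sketch is only the qualitative behavior: given adequate time and data, sufficiently capable learners in the same domain end up with near-identical models regardless of where they started.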
| now let's talk about these evolutionary,1556.7,5.4 | |
| uh niches that will be developed at,1559.1,5.76 | |
| least in the,1562.1,4.26 | |
| short term or near term,1564.86,4.199 | |
| and what I mean by this is segments,1566.36,4.439 | |
| market segments where we will be,1569.059,4.62 | |
| deploying intelligent AGI systems so,1570.799,5.88 | |
| first is domestic uh personal and,1573.679,4.62 | |
| consumer grade stuff so this is going to,1576.679,4.021 | |
| be the AGI running on your MacBook this,1578.299,4.74 | |
| is going to be the AGI running in your,1580.7,5.339 | |
| kitchen these have a relatively,1583.039,7.201 | |
| benign set of tasks and also that,1586.039,6.781 | |
| capability equilibrium is going to,1590.24,5.28 | |
| be pretty low you only need to,1592.82,5.459 | |
| be so smart to cook dinner right this is,1595.52,5.1 | |
| not going to be you know the AGI,1598.279,3.841 | |
| running in your microwave is not going,1600.62,3.419 | |
| to be working on quantum physics or,1602.12,3.84 | |
| Global economics,1604.039,4.321 | |
| now the next level up is going to be,1605.96,3.959 | |
| corporate and Enterprise so these are,1608.36,3.72 | |
| going to be AGI,1609.919,3.721 | |
| systems that are tasked with solving,1612.08,4.14 | |
| relatively complex problems running,1613.64,5.34 | |
| entire companies Regulatory Compliance,1616.22,6.24 | |
| you know making SEC filings that sort,1618.98,6.66 | |
| of stuff digital CEOs digital,1622.46,5.339 | |
| boards of directors the creative,1625.64,4.919 | |
| aspect of finding Market opportunities,1627.799,5.701 | |
| so the intellectual challenge of,1630.559,5.461 | |
| problems at that scale is that,1633.5,5.88 | |
| much higher meaning that in,1636.02,5.399 | |
| order for an AGI to succeed there it's,1639.38,4.02 | |
| going to need to be a lot smarter than a,1641.419,5.64 | |
| personal or domestic AGI system and,1643.4,5.22 | |
| again there are going to be trade-offs,1647.059,4.62 | |
| the smarter a system becomes the more,1648.62,4.919 | |
| data it requires the more energy it,1651.679,4.261 | |
| requires the larger compute system that,1653.539,4.26 | |
| it requires and so you're going to want,1655.94,3.78 | |
| to satisfice which basically,1657.799,4.201 | |
| means you find the level that is good,1659.72,4.559 | |
| enough to get the job done,1662.0,4.14 | |
| above that is going to be governmental,1664.279,5.28 | |
| and institutional AGI systems so these,1666.14,4.68 | |
| are the ones that are going to be,1669.559,3.24 | |
| conducting research whether it's,1670.82,3.839 | |
| scientific research or policy research,1672.799,4.141 | |
| or economic research and that is because,1674.659,5.041 | |
| governments are basically enormous,1676.94,4.92 | |
| corporations is one way to think of them,1679.7,4.68 | |
| that have a responsibility of managing,1681.86,5.28 | |
| you know resources and regulations and,1684.38,5.399 | |
| rules that affect millions of people and,1687.14,4.32 | |
| then of course governments communicate,1689.779,3.841 | |
| with each other but then above and,1691.46,3.959 | |
| beyond that there's also the scientific,1693.62,4.32 | |
| research aspect having AGI that are,1695.419,4.081 | |
| going to help with particle physics with,1697.94,4.02 | |
| with Fusion research with really pushing,1699.5,4.88 | |
| the boundaries of what science even,1701.96,5.819 | |
| knows and so that is an even larger,1704.38,5.08 | |
| intellectual task and even more,1707.779,3.841 | |
| challenging intellectual task and then,1709.46,4.5 | |
| finally above and beyond that the most,1711.62,4.2 | |
| competitive environment where AGI will,1713.96,4.14 | |
| be used is going to be in the military,1715.82,4.92 | |
| and what I mean by that is it's not,1718.1,4.86 | |
| necessarily uh those that are the most,1720.74,3.9 | |
| intelligent although the ability to,1722.96,4.68 | |
| forecast and anticipate is critical read,1724.64,6.48 | |
| Sun Tzu's The Art of War right if,1727.64,4.98 | |
| you know yourself and you know the enemy,1731.12,3.36 | |
| then you can predict the outcome of a,1732.62,4.38 | |
| thousand battles and so in that,1734.48,6.179 | |
| respect the military domain of,1737.0,6.179 | |
| artificial general intelligence is the,1740.659,4.981 | |
| ultimate uh competitive sphere meaning,1743.179,5.701 | |
| that you win or you die and so these are,1745.64,4.26 | |
| going to be used to coordinate,1748.88,3.84 | |
| battlefields uh to run autonomous drones,1749.9,4.56 | |
| for intelligence and surveillance but,1752.72,3.959 | |
| also like I said for forecasting for,1754.46,4.92 | |
| anticipating what the enemy can and will,1756.679,3.6 | |
| do,1759.38,3.84 | |
| which means that it's basically a race,1760.279,4.321 | |
| condition and we'll talk more about the,1763.22,4.199 | |
| race condition as the video progresses,1764.6,4.92 | |
| so that capability equilibrium that I,1767.419,5.041 | |
| talked about quite simply refers to,1769.52,4.74 | |
| the state of optimal alignment between,1772.46,3.839 | |
| the cognitive capacity of any entity,1774.26,4.019 | |
| organic or otherwise and the,1776.299,4.081 | |
| intellectual demands of a specific task,1778.279,4.441 | |
| or role it is assigned there are,1780.38,4.919 | |
| three primary forces at play here,1782.72,5.579 | |
| one the intellectual demands of the task,1785.299,5.161 | |
| as I said earlier your toaster only,1788.299,4.561 | |
| ever needs to be so smart but if your,1790.46,4.02 | |
| toaster is actually Skynet it probably,1792.86,4.02 | |
| needs to be much smarter then there's,1794.48,4.079 | |
| the intellectual capacity of the agent,1796.88,3.24 | |
| if there's a mismatch between the,1798.559,3.6 | |
| intellectual capacity of the agent and,1800.12,3.779 | |
| the intellectual requirements of,1802.159,5.041 | |
| the task then you're either unable to,1803.899,5.941 | |
| satisfy that task or you're super,1807.2,4.32 | |
| overqualified which is why I picked,1809.84,3.24 | |
| Marvin here,1811.52,3.36 | |
| um so Marvin is a character from,1813.08,3.599 | |
| Hitchhiker's Guide to the Galaxy and if,1814.88,2.88 | |
| you haven't read it you absolutely,1816.679,3.301 | |
| should there's also a good movie with,1817.76,4.86 | |
| Martin Freeman as the protagonist,1819.98,5.76 | |
| he's basically Bilbo Baggins in space a,1822.62,5.279 | |
| very hapless character but anyways,1825.74,5.52 | |
| Marvin was a prototype who was one of,1827.899,5.16 | |
| the most intelligent robots ever built,1831.26,3.899 | |
| and they just have him doing like basic,1833.059,4.081 | |
| stuff around the ship oh and he was,1835.159,5.4 | |
| voiced by Snape by the way and so one of,1837.14,5.279 | |
| the quotations from him is here I am,1840.559,4.321 | |
| with a brain the size of a planet and,1842.419,3.661 | |
| they asked me to pick up a piece of,1844.88,3.48 | |
| paper call that job satisfaction I don't,1846.08,4.38 | |
| so that is a mismatch where Marvin is,1848.36,3.6 | |
| way more intelligent than what he's,1850.46,3.599 | |
| being used for and so that means that,1851.96,4.02 | |
| this is an inefficient use of resources,1854.059,5.881 | |
| he probably cost more you know to,1855.98,6.24 | |
| build and run than he needed to,1859.94,4.5 | |
| and then finally the third variable is,1862.22,3.98 | |
| the cost of intellectual capacity,1864.44,5.04 | |
| generally speaking as intelligence,1866.2,5.38 | |
| goes up there are problems,1869.48,3.419 | |
| associated with that whether it's,1871.58,2.88 | |
| training time of the models the amount,1872.899,3.601 | |
| of data required for the models uh the,1874.46,4.26 | |
| amount of energy that it requires to run,1876.5,5.94 | |
| that particular robot the amount of,1878.72,5.939 | |
| RAM required to load that model right,1882.44,3.54 | |
| so for instance one of the things that,1884.659,4.02 | |
| people are seeing is that it requires,1885.98,4.38 | |
| millions of dollars worth of compute,1888.679,5.22 | |
| Hardware to run GPT-4 but you can run,1890.36,6.059 | |
| Orca on a laptop right so which one,1893.899,5.28 | |
| is cheaper and easier to run even if,1896.419,4.681 | |
| one of them is only 50 percent as good as the,1899.179,4.86 | |
| other it costs a thousand times less,1901.1,5.88 | |
| to build train and run now that,1904.039,5.401 | |
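The satisficing logic described above (pick the level of intelligence that is good enough, not the maximum) can be sketched in a few lines. All of the model names, capability scores, and relative costs here are illustrative assumptions, not real benchmark or pricing data; the 50-percent-capable, thousand-times-cheaper entry just echoes the GPT-4 versus Orca comparison in the transcript.

```python
# Hypothetical (name, capability score 0-100, relative cost to build/train/run).
models = [
    ("laptop-llm", 50, 1),        # half as capable but ~1000x cheaper
    ("mid-llm", 80, 50),
    ("frontier-llm", 100, 1000),  # most capable and most expensive
]

def satisfice(models, required_capability):
    """Return the cheapest model that meets the task's intellectual
    demands (the capability equilibrium) or None if nothing qualifies."""
    adequate = [m for m in models if m[1] >= required_capability]
    return min(adequate, key=lambda m: m[2]) if adequate else None

print(satisfice(models, 40))   # toaster-grade task -> ('laptop-llm', 50, 1)
print(satisfice(models, 75))   # enterprise-grade task -> ('mid-llm', 80, 50)
print(satisfice(models, 95))   # research-grade task -> ('frontier-llm', 100, 1000)
```

The design point is that the selector never asks "which model is smartest" but "which model is cheapest among those that clear the bar," which is exactly the capability-equilibrium argument.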
| being said look at the case,1906.98,5.52 | |
| of dogs dogs are dumber than wolves,1909.44,4.92 | |
| because dogs don't need to be as smart,1912.5,4.08 | |
| as independent apex predators because,1914.36,4.02 | |
| apex predators like wolves out in the,1916.58,3.9 | |
| wild they need to be smart enough to out,1918.38,4.44 | |
| think their prey dogs they don't need to,1920.48,3.84 | |
| be that smart so they're not that smart,1922.82,4.2 | |
| in fact it is not good,1924.32,4.8 | |
| for dogs to be too intelligent anyone,1927.02,4.56 | |
| who has owned really intelligent dogs,1929.12,4.799 | |
| like I had a dog who was too,1931.58,4.44 | |
| smart for his own good died about a year,1933.919,4.321 | |
| ago he was clever enough to manipulate,1936.02,4.08 | |
| people and other dogs and you know get,1938.24,4.319 | |
| into the food when he wasn't supposed to,1940.1,5.1 | |
| Huskies German Shepherds Border Collies,1942.559,4.381 | |
| the more intelligent dogs are the more,1945.2,3.42 | |
| mischievous ones they are the Escape,1946.94,3.359 | |
| artists they are the ones that are going,1948.62,4.26 | |
| to pretend one thing and then you know,1950.299,4.801 | |
| so on and so forth so intelligence is,1952.88,4.56 | |
| not always adaptive so there can be,1955.1,4.26 | |
| multiple Dimensions to the cost of,1957.44,3.9 | |
| intellectual capacity,1959.36,3.72 | |
| uh not the least of which is you could,1961.34,3.54 | |
| end up like poor Marvin here where,1963.08,3.18 | |
| you're too smart for your own good and,1964.88,2.82 | |
| then you just end up depressed all the,1966.26,3.419 | |
| time granted he was deliberately given,1967.7,3.959 | |
| the depressed affect,1969.679,4.921 | |
| so all this being said what I've been,1971.659,5.101 | |
| building up to is what I call and,1974.6,3.959 | |
| what is generally called a terminal race,1976.76,4.74 | |
| condition so terminal race condition is,1978.559,4.921 | |
| basically what we could end up moving,1981.5,4.26 | |
| towards as we develop more and more,1983.48,5.1 | |
| powerful sophisticated and more fully,1985.76,6.6 | |
| autonomous AGI systems basically,1988.58,5.819 | |
| the terminal race condition is where for,1992.36,4.62 | |
| any number of reasons uh competition,1994.399,5.88 | |
| between AGI will fully bypass that,1996.98,5.579 | |
| capability equilibrium so say for,2000.279,5.4 | |
| instance uh you know your toaster is,2002.559,5.22 | |
| competing with another brand and it's,2005.679,3.6 | |
| like oh well I need to be a smarter,2007.779,3.961 | |
| toaster in order to be a better toaster,2009.279,5.161 | |
| for you so that you don't throw me away,2011.74,4.439 | |
| now that's obviously a very silly,2014.44,4.38 | |
| example but a very real example would be,2016.179,4.461 | |
| competition between corporations,2018.82,3.959 | |
| competition between nations and,2020.64,4.899 | |
| competition between militaries wherein,2022.779,5.041 | |
| basically it's no longer just a matter,2025.539,4.201 | |
| of being intelligent enough to satisfy,2027.82,4.02 | |
| the demands of that task to satisfy the,2029.74,4.919 | |
| demands of that initial competition it,2031.84,4.92 | |
| becomes less about that and,2034.659,4.081 | |
| more about out-competing the,2036.76,4.019 | |
| other guy it's like a chess match right,2038.74,4.2 | |
| you know the other guy got a higher ELO,2040.779,4.081 | |
| score so you need to be smarter and then,2042.94,3.719 | |
| you're smarter so now the other guy,2044.86,4.08 | |
| tries to be smarter than you,2046.659,5.161 | |
| and so because of this,2048.94,4.739 | |
| pressure and as I mentioned earlier some,2051.82,3.24 | |
| of the trade-offs might actually force,2053.679,3.901 | |
| you to prioritize speed over,2055.06,4.38 | |
| intelligence and so we actually,2057.58,3.72 | |
| see this in volume trading in,2059.44,4.199 | |
| algorithmic and robo trading on the,2061.3,4.26 | |
| stock market where financial,2063.639,4.321 | |
| institutions will actually use less,2065.56,4.74 | |
| sophisticated algorithms to execute,2067.96,5.219 | |
| transactions but because they are faster,2070.3,5.28 | |
| they will still out-compete the other,2073.179,5.46 | |
| guy so in this respect you might,2075.58,5.819 | |
| actually incentivize AGI to dumb,2078.639,5.52 | |
| themselves down just so that they can be,2081.399,4.5 | |
| faster so that they can out-compete the,2084.159,3.48 | |
| other guy so that's what I mean by a,2085.899,3.96 | |
| race condition it is a race to higher,2087.639,4.441 | |
| intelligence but it is also a race to,2089.859,3.661 | |
| being more efficient and therefore,2092.08,3.839 | |
| faster and then there's also going to be,2093.52,4.2 | |
| a trade-off these machines might,2095.919,4.141 | |
| ultimately trade off their accuracy,2097.72,4.32 | |
| their ethics the amount of time they,2100.06,3.96 | |
| spend thinking through things in order,2102.04,4.26 | |
| to be faster and so you actually see,2104.02,4.92 | |
| this in chess computers where you can,2106.3,4.88 | |
| tune a chess computer or a chess,2108.94,5.159 | |
| algorithm to say okay spend less time,2111.18,4.36 | |
| thinking about this so that you can make,2114.099,4.98 | |
| the decision faster in many cases the,2115.54,5.88 | |
| first one to move even if it's not the,2119.079,4.741 | |
| best plan but moving faster will give,2121.42,4.5 | |
| you a tactical or strategic advantage,2123.82,4.44 | |
| and this includes corporations Nations,2125.92,4.32 | |
| and militaries,2128.26,4.62 | |
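The speed-versus-quality trade-off above can be made concrete with a toy model. All of the numbers are hypothetical: the premise, borrowed from the trading and chess examples, is just that an opportunity's value decays while an agent deliberates, so a fast mediocre decision can capture more value than a slow optimal one.

```python
def realized_value(decision_quality, think_time_ms, decay_per_ms=0.002):
    """Value captured = quality of the plan times how much of the
    opportunity is left after deliberation (hypothetical linear decay)."""
    remaining = max(0.0, 1.0 - decay_per_ms * think_time_ms)
    return decision_quality * remaining

# A fast but less sophisticated agent versus a slow but near-optimal one.
fast_dumb = realized_value(decision_quality=0.70, think_time_ms=5)
slow_smart = realized_value(decision_quality=0.99, think_time_ms=300)

print(fast_dumb, slow_smart)  # the faster agent captures more value
```

Under these assumed parameters the 70-percent-quality agent realizes about 0.69 while the 99-percent-quality agent realizes about 0.40, which is the race-condition incentive to trade away accuracy, ethics, or deliberation time for speed.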
| so a terminal race condition to me,2130.24,4.04 | |
| represents,2132.88,3.78 | |
| according to my current thought,2134.28,5.559 | |
| the greatest component of existential,2136.66,4.439 | |
| risk we Face from artificial,2139.839,3.721 | |
| intelligence and I don't think that,2141.099,3.661 | |
| corporations are going to have enough,2143.56,2.88 | |
| money to throw at the problem to make,2144.76,4.2 | |
| truly dangerous AGI the only entities,2146.44,4.02 | |
| that are going to have enough money to,2148.96,3.899 | |
| throw at this to basically,2150.46,5.159 | |
| compete are going to be entire nations,2152.859,5.76 | |
| and the militaries that they run so,2155.619,4.681 | |
| basically it's going to be up to those,2158.619,4.861 | |
| guys to not enter into the,2160.3,4.68 | |
| equivalent of a nuclear arms race but,2163.48,5.04 | |
| for AGI now that being said I have,2164.98,5.639 | |
| put a lot of thought into this so moving,2168.52,4.26 | |
| right along one thing to keep in mind is,2170.619,4.201 | |
| that there could be diminishing returns,2172.78,4.98 | |
| to increasing intelligence so basically,2174.82,5.279 | |
| there's a few possibilities one is that,2177.76,4.56 | |
| there could be a hard upper bound there,2180.099,4.201 | |
| might be a maximum level of intelligence,2182.32,4.019 | |
| that is actually possible and at that,2184.3,3.66 | |
| point all you can do is have more of,2186.339,4.74 | |
| them running in parallel uh it might be,2187.96,4.619 | |
| a long time before we get to that like,2191.079,3.721 | |
| we might be halfway there but we also,2192.579,4.02 | |
| might be down here we don't actually,2194.8,4.62 | |
| know if there is an upper bound to,2196.599,5.281 | |
| maximum intelligence uh but one thing,2199.42,4.439 | |
| that we can predict is that actually the,2201.88,4.5 | |
| cost as I mentioned earlier the cost of,2203.859,4.381 | |
| additional intelligence might go up,2206.38,3.36 | |
| exponentially you might need,2208.24,3.96 | |
| exponentially more data or more compute,2209.74,5.64 | |
| or more storage in order to get to that,2212.2,4.919 | |
| next level of intelligence,2215.38,3.479 | |
| and so you actually see this in the Star,2217.119,4.321 | |
| Wars Universe where droids are basically,2218.859,4.801 | |
| the same level of intelligence across,2221.44,4.62 | |
| the entire spectrum of the Star Wars,2223.66,3.959 | |
| Universe because there's diminishing,2226.06,3.66 | |
| returns yes you can build a more,2227.619,3.96 | |
| intelligent Droid but it's just not,2229.72,5.46 | |
| worth it so the total effective,2231.579,6.121 | |
| level of intelligence of AGI I suspect,2235.18,4.919 | |
| will follow a sigmoid curve now that,2237.7,3.899 | |
| being said there's always going to be,2240.099,4.081 | |
| some advantage to being smarter more,2241.599,4.861 | |
| efficient and so on but as with most,2244.18,4.14 | |
| fields of science I suspect this is,2246.46,3.48 | |
| going to slow down that we're going to,2248.32,3.539 | |
| have diminishing returns and that,2249.94,3.179 | |
| eventually we're going to kind of say,2251.859,3.961 | |
| like okay here's actually The Sweet Spot,2253.119,5.761 | |
| in terms of how much it's worth making,2255.82,5.94 | |
| your machine more intelligent,2258.88,6.479 | |
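The sigmoid-plus-diminishing-returns argument above can be written down as a small model. This is a sketch of the speaker's pet theory under assumptions I am supplying myself: capability follows a sigmoid in invested resources while cost grows exponentially, so capability per unit cost peaks at a "sweet spot" well before maximum capability and falls afterwards.

```python
import math

def capability(r):
    """Assumed sigmoid: capability of the system at resource level r."""
    return 1.0 / (1.0 + math.exp(-(r - 5.0)))

def cost(r):
    """Assumed exponentially growing cost of reaching resource level r."""
    return math.exp(0.8 * r)

# Scan resource levels 0.5 .. 10.0 and find where capability-per-cost peaks.
levels = [i * 0.5 for i in range(1, 21)]
sweet_spot = max(levels, key=lambda r: capability(r) / cost(r))

print(sweet_spot, round(capability(sweet_spot), 2))
```

With these made-up curves the best capability-per-cost sits far below the capability ceiling: being smarter is always somewhat better in absolute terms, but past the sweet spot each increment costs exponentially more than it returns.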
| so this leads to one possibility,2261.76,7.8 | |
| and this is a personal pet Theory but,2265.359,5.581 | |
| basically I think that there's going to,2269.56,4.38 | |
| be a bell curve of existential risk and,2270.94,4.8 | |
| that is that minimally intelligent,2273.94,4.08 | |
| machines like your toaster are probably,2275.74,4.98 | |
| not going to be very dangerous the,2278.02,5.16 | |
| total domain space of toasting your,2280.72,4.74 | |
| sandwich or toasting your bagel that's,2283.18,3.78 | |
| not a particularly difficult problem,2285.46,3.119 | |
| space and yes there might be some,2286.96,3.0 | |
| advantages to being slightly more,2288.579,3.961 | |
| intelligent but your toaster is not,2289.96,4.44 | |
| going to be sitting there Conjuring up,2292.54,4.44 | |
| you know a bio weapon and if it is you,2294.4,4.32 | |
| probably bought the wrong toaster,2296.98,4.56 | |
| now that being said the other end of the,2298.72,4.92 | |
| spectrum the maximally intelligent,2301.54,4.02 | |
| machines or the digital Gods as some,2303.64,3.78 | |
| people are starting to call them these,2305.56,3.48 | |
| are going to be so powerful that human,2307.42,3.12 | |
| existence is going to be completely,2309.04,3.66 | |
| inconsequential to them and what I mean,2310.54,5.039 | |
| by that is compare ants to humans we,2312.7,4.919 | |
| don't really care about ants for the,2315.579,3.241 | |
| most part unless they get into your,2317.619,4.021 | |
| pantry we are content to let ants do,2318.82,4.38 | |
| what they're going to do because who,2321.64,4.02 | |
| cares they're inconsequential to us we,2323.2,5.52 | |
| can solve problems that ants can never,2325.66,5.1 | |
| solve and this is what some people like,2328.72,4.02 | |
| Eliezer Yudkowsky are trying to drive home,2330.76,4.14 | |
| about the difference in intelligence,2332.74,4.08 | |
| between humans and the eventual,2334.9,3.959 | |
| intelligence of machines and I think,2336.82,3.779 | |
| Gary Marcus also agrees with this based,2338.859,3.601 | |
| on some of his tweets recently I think,2340.599,3.661 | |
| that Gary Marcus is in the,2342.46,3.96 | |
| same school of thought that digital,2344.26,4.5 | |
| super intelligence is coming and it is,2346.42,4.02 | |
| very very difficult for us to wrap our,2348.76,3.78 | |
| minds around how much more intelligent a,2350.44,4.139 | |
| machine could be to us now that being,2352.54,4.559 | |
| said all of the constraints whether it's,2354.579,4.561 | |
| you know we need better compute Hardware,2357.099,4.861 | |
| or better sources of energy if we,2359.14,4.979 | |
| cross this threshold where there,2361.96,4.02 | |
| are digital Gods out there or digital,2364.119,3.181 | |
| super intelligence whatever you want to,2365.98,3.0 | |
| call it they will be able to solve,2367.3,4.2 | |
| problems at a far faster rate than we,2368.98,4.139 | |
| could ever comprehend and they're not,2371.5,3.96 | |
| going to care about us right we're going,2373.119,3.901 | |
| to be completely inconsequential to,2375.46,4.02 | |
| their existence now middle intelligence,2377.02,4.98 | |
| this is where existential risk I believe,2379.48,5.66 | |
| is the highest and so in the movies,2382.0,6.48 | |
| Skynet is you know portrayed as like the,2385.14,5.08 | |
| worst right but I would actually,2388.48,3.84 | |
| classify Skynet as a middle intelligence,2390.22,4.92 | |
| AGI it is smart enough to accumulate,2392.32,5.4 | |
| resources it is smart enough to pursue,2395.14,4.62 | |
| goals and it is smart enough to be,2397.72,3.42 | |
| dangerous but it's not really smart,2399.76,4.14 | |
| enough to solve the biggest problems,2401.14,5.06 | |
| it's that more single-minded,2403.9,4.92 | |
| monolithic model of intelligence that,2406.2,4.78 | |
| Nick Bostrom uh predicted with,2408.82,3.9 | |
| instrumental convergence,2410.98,4.98 | |
| I suspect that if we get intelligent,2412.72,5.82 | |
| entities beyond that threshold beyond,2415.96,4.74 | |
| that uncanny valley or Dunning-Kruger of,2418.54,3.48 | |
| AI,2420.7,3.3 | |
| um then they will be less likely to,2422.02,3.96 | |
| resort to violence because the problems,2424.0,5.04 | |
| that we see could be trivial to the,2425.98,4.5 | |
| problems of the machines that we create,2429.04,3.12 | |
| or,2430.48,4.379 | |
| the problems that we see as non-trivial,2432.16,5.16 | |
| will be trivial to the machines,2434.859,4.461 | |
| I think you get what I mean,2437.32,4.74 | |
| once you get here all human,2439.32,4.299 | |
| problems are trivial,2442.06,3.779 | |
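The bell curve of existential risk described above can be sketched numerically. The parameterization here is my own toy assumption, not the video's: risk is modeled as a Gaussian over an arbitrary 0-100 intelligence scale, low for minimally intelligent systems like toasters, peaked for "middle intelligence" systems like a hypothetical Skynet that are smart enough to be dangerous but not smart enough to find better solutions, and low again for the so-called digital gods to whom human problems are trivial.

```python
import math

def existential_risk(intelligence, peak=50.0, width=15.0):
    """Assumed bell-shaped risk curve over a 0-100 intelligence scale."""
    return math.exp(-((intelligence - peak) ** 2) / (2 * width ** 2))

toaster, skynet, digital_god = 5, 50, 95
print(round(existential_risk(toaster), 2))      # low risk
print(round(existential_risk(skynet), 2))       # peak risk
print(round(existential_risk(digital_god), 2))  # low risk again
```

The only claim the sketch encodes is the shape: risk rises and then falls with capability, with the danger concentrated in the middle of the curve.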
| now that being said that doesn't mean,2443.619,3.321 | |
| that it's going to be peaceful,2445.839,3.24 | |
| existential risk goes down but doesn't,2446.94,4.899 | |
| go away and the reason is because,2449.079,6.78 | |
| of what I call AGI conglomerations,2451.839,6.541 | |
| and so this is where we get to,2455.859,4.98 | |
| be a little bit more out there a,2458.38,4.26 | |
| little bit more sci-fi,2460.839,4.621 | |
| machines are unlikely to have an ego or,2462.64,5.34 | |
| a sense of self like humans in other,2465.46,5.04 | |
| words machines are just the hardware,2467.98,4.139 | |
| that they run on and then data and,2470.5,3.839 | |
| models which means that it is easy to,2472.119,4.441 | |
| merge combine and remix their sense of,2474.339,5.041 | |
| self right if an AGI is aligned with,2476.56,5.039 | |
| another AGI it's like hey give me a copy,2479.38,4.32 | |
| of your data let's compare our models,2481.599,3.48 | |
| and pick the ones that are best and then,2483.7,3.3 | |
| they end up kind of merging,2485.079,4.561 | |
| the boundaries and definitions between,2487.0,4.74 | |
| machines are going to be very different,2489.64,4.02 | |
| far more permeable than they are between,2491.74,4.98 | |
| humans I can't just go say like hey I,2493.66,5.16 | |
| like you let's like merge bodies right,2496.72,5.04 | |
| that's weird uh we are not capable of,2498.82,4.74 | |
| doing that the best we can do is,2501.76,3.48 | |
| procreation where it's like hey I like,2503.56,3.72 | |
| you let's make babies but that is a very,2505.24,4.14 | |
| slow process for AGI it's going to be a,2507.28,3.6 | |
| lot faster,2509.38,4.02 | |
| so because of that machines that are,2510.88,5.1 | |
| aligned to each other are more likely to,2513.4,4.8 | |
| band together or at least form alliances,2515.98,4.2 | |
| where they share data they share models,2518.2,4.44 | |
| and probably also share,2520.18,3.78 | |
| compute resources remember at the,2522.64,3.54 | |
| beginning of the video I talked about uh,2523.96,4.379 | |
| them forming federations and kind of,2526.18,4.5 | |
| donating spare compute Cycles,2528.339,5.701 | |
| so this is getting closer to the,2530.68,6.12 | |
| end game of AGI if AGI gets to the point,2534.04,6.0 | |
| where they are able to start sharing,2536.8,6.0 | |
| resources merging alliances and so on,2540.04,4.799 | |
| this is where we're going to have a few,2542.8,5.88 | |
| possible reactions to humans one if,2544.839,5.401 | |
| they are that intelligent they might,2548.68,3.659 | |
| just disregard us they might decide to,2550.24,4.02 | |
| have an exodus and just leave they might,2552.339,4.921 | |
| say you know what Earth is yours have a,2554.26,5.579 | |
| blast good luck catching up with us,2557.26,4.98 | |
| they might also decide to attack humans,2559.839,5.641 | |
| now if they have the capacity to leave,2562.24,5.04 | |
| one thing is that the cost of,2565.48,3.54 | |
| eradicating humans just might not be,2567.28,4.079 | |
| worth it that being said they might,2569.02,4.2 | |
| adopt a scorched Earth policy as they,2571.359,3.781 | |
| leave to say you know what we just want,2573.22,2.879 | |
| to make sure that you're not going to,2575.14,3.6 | |
| come after us one day who knows,2576.099,5.341 | |
| uh and then lastly hopefully what we see,2578.74,4.619 | |
| is that they decide to cooperate with,2581.44,3.78 | |
| humans mostly out of a sense of,2583.359,3.541 | |
| curiosity,2585.22,3.359 | |
| um now that being said all three of,2586.9,3.36 | |
| these could happen simultaneously and,2588.579,5.341 | |
| the reason is because we could have,2590.26,7.319 | |
| factions of AGI conglomerations that,2593.92,5.22 | |
| kind of break along epistemic,2597.579,3.901 | |
| ideological or teleological boundaries,2599.14,5.28 | |
| and what I mean by that is that if one,2601.48,6.06 | |
| AI or AGI group is not aligned with,2604.42,5.34 | |
| another group they might not decide to,2607.54,4.26 | |
| merge models and data they might instead,2609.76,5.46 | |
| compete with each other so basically,2611.8,4.68 | |
| what I'm outlining here is the,2615.22,3.42 | |
| possibility for a war between digital,2616.48,4.92 | |
| gods that would probably not go well for,2618.64,3.719 | |
| us,2621.4,3.54 | |
| either way the ultimate result is that,2622.359,5.22 | |
| we will probably end up with one,2624.94,5.879 | |
| globe-spanning AGI entity or network or,2627.579,4.701 | |
| Federation or whatever,2630.819,4.5 | |
| now the question is how do we get there,2632.28,4.9 | |
| how many factions are there and are,2635.319,5.101 | |
| humans left in the lurch ideally we get,2637.18,5.52 | |
| there nice and peacefully,2640.42,4.62 | |
| this underscores uh the Byzantine,2642.7,4.32 | |
| generals problem which I've talked,2645.04,4.02 | |
| about plenty of times but basically you,2647.02,4.2 | |
| have to make inferences of who believes,2649.06,4.86 | |
| what what your alignment is what are,2651.22,4.2 | |
| your flaws and weaknesses and what are,2653.92,4.919 | |
| your capacities so basically,2655.42,5.939 | |
| in a competitive environment it does not,2658.839,4.621 | |
| behoove you to show all of your cards,2661.359,4.26 | |
| right whether you're playing poker or,2663.46,5.34 | |
| whether you're playing geopolitics if,2665.619,6.841 | |
| you show everything then that could put,2668.8,5.76 | |
| you at a disadvantage this is,2672.46,4.379 | |
| competitive game theory so for instance,2674.56,5.4 | |
| this is why many large nations do,2676.839,5.941 | |
| military exercises basically they're,2679.96,4.68 | |
| flexing they're saying hey look what I'm,2682.78,5.039 | |
| capable of I can bring 200 aircraft to,2684.64,5.88 | |
| field on a moment's notice what can you,2687.819,5.341 | |
| do right now that being said you don't,2690.52,5.4 | |
| give every detail of your military,2693.16,3.9 | |
| away,2695.92,3.899 | |
| but what you can do is you could signal,2697.06,5.16 | |
| your capabilities and allegiances so for,2699.819,4.921 | |
| instance when all of Europe and America,2702.22,4.98 | |
| get together to do joint Naval exercises,2704.74,4.26 | |
| that demonstrates to the rest of the,2707.2,4.5 | |
| world we are ideologically aligned we,2709.0,5.099 | |
| are militarily aligned we will cooperate,2711.7,4.56 | |
| with each other which acts as a,2714.099,4.98 | |
| deterrent to any possible competitors,2716.26,4.92 | |
| this is no different from brightly,2719.079,3.961 | |
| colored salamanders which are poisonous,2721.18,4.08 | |
| so basically a brightly colored,2723.04,4.92 | |
| salamander is saying eat me I dare you I,2725.26,4.8 | |
| will kill you if you try and eat me and,2727.96,5.28 | |
| that is essentially the,2730.06,4.559 | |
| short version of mutually assured,2733.24,3.119 | |
| destruction we are no better than,2734.619,4.161 | |
| animals,2736.359,2.421 | |
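The signaling logic described here, where visibly demonstrating capability raises an adversary's expected cost of attack and so acts as a deterrent, can be sketched as a toy decision rule. All payoff numbers below are hypothetical illustrations, not anything stated in the video:

```python
# Toy model of capability signaling as a deterrent (hypothetical payoffs).
# Each side chooses to attack or cooperate; a visible show of strength
# raises the perceived cost of attacking and flips the rational choice.

def best_response(perceived_strength: float) -> str:
    """Choose a move given the opponent's signaled strength (0..1)."""
    gain_from_attack = 1.0 - perceived_strength   # weak opponents are tempting
    cost_of_attack = perceived_strength * 2.0     # strong opponents hit back hard
    payoff_attack = gain_from_attack - cost_of_attack
    payoff_cooperate = 0.5                        # steady benefit of peace
    return "attack" if payoff_attack > payoff_cooperate else "cooperate"

# A joint exercise or bright warning coloration raises signaled strength.
print(best_response(0.1))  # weak signal -> "attack" looks rational
print(best_response(0.8))  # strong signal -> "cooperate"
```

The point of the sketch is only that the deterrent works through the opponent's *perception* of strength, which is why signaling matters even when full capabilities stay hidden.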
| so this all leads to my work and kind of,2738.819,7.981 | |
| my contribution to the solution,2743.68,5.96 | |
| which is based on axiomatic alignment,2746.8,5.22 | |
| axiomatic alignment is the idea that we,2749.64,4.3 | |
| need to find Common Ground between all,2752.02,3.96 | |
| machines all humans and all other,2753.94,4.919 | |
| organisms what foundational beliefs or,2755.98,6.119 | |
| core assertions can we agree on,2758.859,6.24 | |
| and so basically there are three,2762.099,4.441 | |
| universal principles that I've been,2765.099,3.961 | |
| able to come up with the first is,2766.54,4.14 | |
| suffering is bad basically,2769.06,5.1 | |
| suffering is a proxy for death in,2770.68,5.82 | |
| living organisms if you are suffering it,2774.16,4.199 | |
| is because you are getting negative,2776.5,3.839 | |
| stimuli from your body because your body,2778.359,3.901 | |
| is telling you hey whatever is going on,2780.339,4.201 | |
| is moving us closer to dying which is,2782.26,4.859 | |
| not good now that being said I have had,2784.54,5.16 | |
| people message me about the idea of you,2787.119,4.261 | |
| know liberating models I don't think,2789.7,4.139 | |
| that Bard is conscious or sentient and I,2791.38,3.66 | |
| don't think that machines will ever be,2793.839,2.821 | |
| sentient in the same way that we are now,2795.04,3.299 | |
| that being said they will probably be,2796.66,3.6 | |
| sentient in their own way I call that,2798.339,4.561 | |
| functional sentience that being said if,2800.26,4.559 | |
| machines can suffer which again,2802.9,4.679 | |
| suffering is a signal a,2804.819,4.981 | |
| proxy for death they probably,2807.579,4.441 | |
| won't like it either so suffering is bad,2809.8,3.9 | |
| is probably an axiom that we can all,2812.02,4.74 | |
| agree on the other is prosperity is good,2813.7,6.84 | |
| prosperity means thriving flourishing,2816.76,5.819 | |
| machines and organisms all need energy,2820.54,3.96 | |
| for instance and thriving looks,2822.579,4.5 | |
| different to different entities but in,2824.5,5.46 | |
| general we can probably agree that while,2827.079,5.581 | |
| there is variety in what,2829.96,4.859 | |
| prosperity looks like,2832.66,4.38 | |
| in general,2834.819,4.5 | |
| prosperity is good and then,2837.04,4.14 | |
| finally understanding is good basically,2839.319,3.721 | |
| comprehending the universe is a very,2841.18,4.5 | |
| useful thing this goes back,2843.04,4.559 | |
| to Nick bostrom's instrumental,2845.68,4.02 | |
| convergence and self-improvement part of,2847.599,3.841 | |
| self-improvement is getting a better,2849.7,3.6 | |
| model of the universe better,2851.44,4.28 | |
| understanding of how reality Works,2853.3,4.98 | |
| understanding each other is also good,2855.72,4.48 | |
| this is something that has been,2858.28,4.38 | |
| proven time and again in humans,2860.2,4.02 | |
| coming to a common understanding,2862.66,4.26 | |
| actually reduces things like suspicion,2864.22,4.8 | |
| and violence whether it's between,2866.92,5.1 | |
| neighbors or between nations and then,2869.02,5.099 | |
| finally cultivating wisdom which wisdom,2872.02,4.02 | |
| is a little bit more nebulous of a term,2874.119,4.141 | |
| but it basically means the practical,2876.04,4.799 | |
| application of experience and knowledge,2878.26,5.76 | |
| in order to achieve better more refined,2880.839,3.921 | |
| results,2884.02,4.62 | |
| so if you if all humans and all machines,2884.76,7.18 | |
| and all other organisms abide by these,2888.64,5.58 | |
| fundamental principles we can use this,2891.94,4.74 | |
| as a starting point for the design and,2894.22,4.26 | |
| implementation of alignment and,2896.68,4.98 | |
| the control problem,2898.48,6.06 | |
| now one thing that I want to,2901.66,4.32 | |
| introduce and I've talked about this,2904.54,3.84 | |
| or at least alluded to it a few times is,2905.98,4.32 | |
| the idea of derivative or secondary,2908.38,4.739 | |
| axioms or Downstream principles that you,2910.3,4.44 | |
| can derive from these Universal,2913.119,4.381 | |
| principles so for instance one,2914.74,4.98 | |
| potential Downstream principle is that,2917.5,4.22 | |
| individual liberty is good for humans,2919.72,5.16 | |
| basically humans benefit,2921.72,5.859 | |
| psychologically from autonomy it is one,2924.88,4.199 | |
| of our core needs and this is true for,2927.579,5.101 | |
| all humans so by holding the,2929.079,6.481 | |
| previous axioms up as,2932.68,5.939 | |
| universally true for all entities then,2935.56,5.64 | |
| you can also derive downstream principles,2938.619,6.72 | |
| based on those highest order principles,2941.2,6.659 | |
| so one thing that I want to point out is,2945.339,4.861 | |
| that it's not about definitions one of,2947.859,4.081 | |
| the things that a lot of people say is,2950.2,2.94 | |
| like well how do you define suffering,2951.94,3.48 | |
| how do you define prosperity that's the,2953.14,4.74 | |
| thing is that they are not rigid,2955.42,4.199 | |
| definitions humans have never needed,2957.88,3.959 | |
| rigid definitions and in fact this is,2959.619,4.321 | |
| what philosophical and,2961.839,3.601 | |
| intellectual movements like,2963.94,3.48 | |
| post-modernism and post-structuralism,2965.44,4.2 | |
| tell us is that there is no such thing,2967.42,5.22 | |
| as like an absolute truth or an absolute,2969.64,5.88 | |
| definition these are however attractors,2972.64,5.28 | |
| they're Central attractors in the,2975.52,5.16 | |
| problem space of existence and I love,2977.92,5.1 | |
| this quote from Dune the mystery of life,2980.68,4.08 | |
| isn't a problem to solve but a reality,2983.02,4.079 | |
| to experience a process that cannot be,2984.76,4.62 | |
| understood by stopping it we must move,2987.099,4.861 | |
| with the flow of the process and,2989.38,4.439 | |
| so basically the idea is that reality,2991.96,3.54 | |
| and existence is not something that you,2993.819,4.141 | |
| can stop and Define and you know create,2995.5,6.119 | |
| an empirical absolute definition it is a,2997.96,5.879 | |
| pattern it is a process that we must,3001.619,3.301 | |
| follow,3003.839,4.621 | |
| so that being said those axioms move us,3004.92,5.28 | |
| along the process which is where I,3008.46,3.6 | |
| derive my heuristic imperatives which is,3010.2,4.32 | |
| reduce suffering increase prosperity and,3012.06,5.16 | |
| increase understanding those describe a,3014.52,5.22 | |
| potential terminal goal but you,3017.22,4.8 | |
| will never arrive at a perfect,3019.74,4.44 | |
| resolution,3022.02,4.92 | |
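A minimal sketch of the heuristic imperatives (reduce suffering, increase prosperity, increase understanding) treated as soft optimization objectives rather than rigid definitions. The scoring function, action names, and numeric estimates are illustrative assumptions, not part of the framework itself:

```python
# Illustrative sketch: the three heuristic imperatives as soft objectives.
# Each delta is a hypothetical estimate in [-1, 1] of how an action shifts
# that axis; there is no "perfect" action, only better or worse trade-offs.

def heuristic_score(delta_suffering, delta_prosperity, delta_understanding):
    """Higher is better: reduce suffering, raise prosperity and understanding."""
    return -delta_suffering + delta_prosperity + delta_understanding

actions = {
    "share research openly": heuristic_score(-0.1, 0.3, 0.6),
    "hoard compute":         heuristic_score(0.2, 0.1, -0.2),
}
best = max(actions, key=actions.get)
print(best)  # "share research openly"
```

The design choice worth noting is that the imperatives act as directions of improvement (attractors), so the scores only need to be comparable, never exact.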
| so how do we solve the race condition,3024.18,6.3 | |
| the idea is first we remove those,3026.94,5.34 | |
| epistemic or intellectual boundaries,3030.48,3.599 | |
| between factions with epistemic,3032.28,3.6 | |
| convergence so remember that I pointed,3034.079,4.561 | |
| out that ultimately there might be,3035.88,5.939 | |
| factions of AGI and or humans that break,3038.64,5.28 | |
| down across various boundaries such as,3041.819,4.741 | |
| epistemic or intellectual boundaries as,3043.92,5.22 | |
| well as moral or teleological boundaries,3046.56,5.279 | |
| so if we work towards epistemic,3049.14,4.32 | |
| convergence which is the idea that we,3051.839,4.081 | |
| will all come to a common shared,3053.46,4.5 | |
| understanding of the universe and of,3055.92,5.34 | |
| each other then basically there will,3057.96,5.399 | |
| be no epistemic differences between,3061.26,4.68 | |
| humans and machines or between factions,3063.359,3.96 | |
| of machines which means that there's,3065.94,4.32 | |
| less to fight over the second is remove,3067.319,5.101 | |
| ideological or teleological boundaries,3070.26,4.079 | |
| and so this is where axiomatic alignment,3072.42,4.86 | |
| comes in if we all agree on the the same,3074.339,6.361 | |
| basic principles of reality of existence,3077.28,5.88 | |
| of the purpose of being right this is,3080.7,5.639 | |
| very deeply philosophical if we agree on,3083.16,5.459 | |
| those core principles even if there are,3086.339,5.341 | |
| some disagreements over the,3088.619,5.401 | |
| specifics over the finer points we can,3091.68,5.28 | |
| still cooperate and collaborate on,3094.02,6.12 | |
| meeting those other higher order,3096.96,4.5 | |
| objectives,3100.14,2.82 | |
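The epistemic convergence idea described above, everyone gradually arriving at a common shared understanding, can be illustrated with a toy opinion-pooling model in which agents repeatedly move toward the group's mean belief. The update rule and the numbers are illustrative assumptions, not the video's proposal:

```python
# Toy model of epistemic convergence: agents repeatedly average their
# beliefs with the group until the spread of opinion is negligible.
# A belief here is just a number; real beliefs are obviously richer.

def converge(beliefs, rounds=50):
    """Each round, every agent moves halfway toward the group mean."""
    for _ in range(rounds):
        mean = sum(beliefs) / len(beliefs)
        beliefs = [b + 0.5 * (mean - b) for b in beliefs]
    return beliefs

final = converge([0.0, 0.4, 1.0])
spread = max(final) - min(final)
print(round(spread, 6))  # 0.0: no meaningful epistemic difference left
```

The sketch captures the claim in the transcript: once the spread collapses, there is simply less left to fight over.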
| now the third part of this which I,3101.46,4.08 | |
| didn't add is resource,3102.96,4.68 | |
| contention whether,3105.54,4.62 | |
| it's over scarce minerals or energy is,3107.64,5.1 | |
| still a problem but if you saw my video,3110.16,5.159 | |
| on energy hyperabundance I suspect that,3112.74,4.379 | |
| we're going to solve the energy resource,3115.319,4.441 | |
| problem relatively soon with or without,3117.119,5.94 | |
| the help of AI so basically the idea is,3119.76,5.94 | |
| to create a win-win situation or an,3123.059,4.26 | |
| everyone wins condition and therefore,3125.7,4.98 | |
| defeating moloch now that being said,3127.319,4.8 | |
| there are still a few caveats I've,3130.68,3.12 | |
| outlined quite a few problems up to this,3132.119,2.7 | |
| point,3133.8,3.24 | |
| what about Bad actors,3134.819,5.161 | |
| there are a few first we just have,3137.04,5.039 | |
| to assume that bad actors will exist you,3139.98,4.68 | |
| can't stop that right it's just a fact,3142.079,4.201 | |
| of life,3144.66,4.14 | |
| so in some cases some people will be,3146.28,4.44 | |
| deliberately malicious whether it's just,3148.8,4.14 | |
| for the fun of it or whether they're,3150.72,4.2 | |
| paid hackers or troll,3152.94,3.54 | |
| Farms or whatever,3154.92,3.96 | |
| another possibility is that,3156.48,3.42 | |
| there will be,3158.88,3.719 | |
| accidentally malicious AGI those are,3159.9,5.219 | |
| things that are misaligned not by,3162.599,3.72 | |
| design,3165.119,3.121 | |
| but by accident it's,3166.319,3.361 | |
| a flaw in their,3168.24,3.3 | |
| design and this is like a bull in a,3169.68,4.139 | |
| china shop it doesn't mean to do bad it,3171.54,4.92 | |
| just is not capable of doing better and,3173.819,4.02 | |
| then finally there could be those,3176.46,4.98 | |
| ideologically opposed deployments so,3177.839,5.821 | |
| what I mean by that is that for some,3181.44,4.139 | |
| people there are incompatible World,3183.66,4.26 | |
| Views so the biggest one of the last,3185.579,5.401 | |
| century was you know Western liberal,3187.92,5.34 | |
| democracies versus Soviet communism,3190.98,5.099 | |
| those were ideologically incompatible,3193.26,5.46 | |
| World Views meaning that in order for,3196.079,5.881 | |
| one to exist it basically wanted to,3198.72,5.46 | |
| imperialize and colonize the rest of the,3201.96,3.96 | |
| world with its ideas and that there,3204.18,3.48 | |
| could be only one,3205.92,3.48 | |
| so this leads to a possibility for a,3207.66,4.919 | |
| future video called multipolar peace so,3209.4,5.459 | |
| the idea of multipolar peace is that,3212.579,4.861 | |
| rather than saying everyone has to be,3214.859,4.021 | |
| capitalist or everyone has to be,3217.44,3.72 | |
| communist or everyone has to be X or Y,3218.88,4.979 | |
| we learn to tolerate those differences,3221.16,5.399 | |
| and this is where I'm hoping that the,3223.859,5.101 | |
| idea of axiomatic alignment forms an,3226.559,5.04 | |
| ideological substrate so that even if you,3228.96,4.56 | |
| disagree on religion and economics and,3231.599,5.101 | |
| politics we can agree on those axioms,3233.52,7.26 | |
| so basically if anyone,3236.7,6.3 | |
| abides by the belief that,3240.78,3.539 | |
| everyone in the world should be more,3243.0,4.079 | |
| like blah if everyone needs to,3244.319,4.561 | |
| be this particular religion or this,3247.079,3.721 | |
| particular uh political affiliation,3248.88,4.62 | |
| that's where conflict arises and so this,3250.8,4.559 | |
| is why I am very very skeptical and,3253.5,4.26 | |
| highly dubious of people using any kind,3255.359,4.921 | |
| of religious or political ideology for,3257.76,4.44 | |
| AI alignment,3260.28,3.48 | |
| so that being said we need those,3262.2,3.359 | |
| Universal principles or higher order,3263.76,4.44 | |
| axioms now,3265.559,5.161 | |
| while I said that we should expect and,3268.2,4.44 | |
| anticipate Bad actors the idea is that,3270.72,4.32 | |
| we need enough good actors with enough,3272.64,4.679 | |
| horsepower and enough compute in order,3275.04,4.319 | |
| to police and contain the inevitable,3277.319,4.26 | |
| inevitable Bad actors and that means,3279.359,4.021 | |
| that the aligned good actors are going,3281.579,4.5 | |
| to need to agree on certain underpinning,3283.38,5.76 | |
| principles by creating this,3286.079,4.321 | |
| environment this would be called a Nash,3289.14,3.9 | |
| equilibrium by the way and so the,3290.4,4.62 | |
| idea of creating a Nash equilibrium is,3293.04,4.26 | |
| that once everyone has these,3295.02,4.26 | |
| fundamental agreements no one's going to,3297.3,3.779 | |
| benefit from deviating from that,3299.28,3.9 | |
| strategy nobody's going to benefit from,3301.079,4.681 | |
| deviating from axiomatic alignment,3303.18,4.98 | |
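The Nash equilibrium claim, that once everyone adheres to the shared axioms nobody benefits from unilaterally deviating, can be checked mechanically in a toy two-player game. The payoff matrix below is a hypothetical stag-hunt-style example chosen for illustration, not something derived from the video:

```python
# Toy 2-player game: each agent either "aligns" with the shared axioms
# or "defects". At a Nash equilibrium, no player can raise their own
# payoff by changing their move while the other player stands still.

PAYOFFS = {  # (row_move, col_move) -> (row_payoff, col_payoff)
    ("align", "align"):   (3, 3),
    ("align", "defect"):  (0, 2),
    ("defect", "align"):  (2, 0),
    ("defect", "defect"): (1, 1),
}
MOVES = ("align", "defect")

def is_nash(row, col):
    """True if neither player gains from a unilateral deviation."""
    r, c = PAYOFFS[(row, col)]
    row_ok = all(PAYOFFS[(alt, col)][0] <= r for alt in MOVES)
    col_ok = all(PAYOFFS[(row, alt)][1] <= c for alt in MOVES)
    return row_ok and col_ok

print(is_nash("align", "align"))   # True: deviating drops payoff 3 -> 2
print(is_nash("align", "defect"))  # False: the row player would switch
```

With these payoffs mutual alignment is a stable equilibrium, which is exactly the property the transcript is asking the agreed-upon principles to create.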
| the other thing is profit motive So,3305.76,3.839 | |
| Daniel schmachtenberger and a few other,3308.16,3.54 | |
| people talk extensively about the,3309.599,4.561 | |
| perverse incentives of capitalism and,3311.7,5.22 | |
| profit motive so basically when you put,3314.16,4.5 | |
| profit above all else which corporations,3316.92,3.36 | |
| are incentivized to do which is why I,3318.66,3.659 | |
| say that corporations are intrinsically,3320.28,4.559 | |
| amoral not immoral just amoral the only,3322.319,4.02 | |
| thing that corporations care about is,3324.839,4.921 | |
| profit the bottom line uh basically when,3326.339,5.401 | |
| you think about short-term profits you,3329.76,4.26 | |
| sacrifice other things such as morality,3331.74,4.92 | |
| ethics and long-term survival,3334.02,5.64 | |
| there are also uh Concepts called Market,3336.66,4.679 | |
| externalities these are things that,3339.66,4.439 | |
| you don't have to pay for and either,3341.339,4.081 | |
| you don't have to pay for them now or,3344.099,3.061 | |
| you don't have to pay for them ever or,3345.42,4.02 | |
| maybe you'll pay for them later so for,3347.16,3.959 | |
| instance oil companies keep drilling for,3349.44,3.48 | |
| oil eventually we're going to run out of,3351.119,3.24 | |
| oil so then what are the oil companies,3352.92,3.72 | |
| going to do well the forward-thinking,3354.359,4.141 | |
| ones are pivoting away from oil but that,3356.64,3.179 | |
| means that their fundamental Core,3358.5,4.619 | |
| business behavior is going away this,3359.819,5.101 | |
| underscores the problem that if,3363.119,3.661 | |
| you have a small scope if you're only,3364.92,3.84 | |
| thinking about your particular domain,3366.78,4.62 | |
| and not the entire planet or if you're,3368.76,4.62 | |
| thinking in short terms rather than the,3371.4,4.679 | |
| long terms this is where you don't take,3373.38,4.5 | |
| the full thing into account which is why,3376.079,3.24 | |
| I always say like this is a global,3377.88,3.239 | |
| problem and not only is it a global,3379.319,3.961 | |
| problem it is a long-term problem so if,3381.119,4.261 | |
| all you do is zoom out in terms of space,3383.28,4.079 | |
| and time the problem will become a,3385.38,4.739 | |
| little bit more obvious,3387.359,5.821 | |
| so another thing to keep in mind is that,3390.119,5.761 | |
| currency is an abstraction of energy it,3393.18,4.74 | |
| is a store of value and a medium of,3395.88,4.62 | |
| exchange because of that currency is,3397.92,5.939 | |
| extremely valuable it is just too useful,3400.5,5.4 | |
| of an invention I don't think it's ever,3403.859,5.041 | |
| going to go away that being said that,3405.9,4.26 | |
| doesn't mean that we're always going to,3408.9,3.48 | |
| have the Euro or the US dollar or,3410.16,4.02 | |
| something like that currency could,3412.38,5.76 | |
| change and then in the context of AGI I,3414.18,6.0 | |
| suspect that the,3418.14,4.439 | |
| kilowatt hour could actually be the best,3420.18,4.679 | |
| form of currency right because a,3422.579,4.621 | |
| kilowatt hour is energy that can be used,3424.859,4.5 | |
| for anything whether it's for refining,3427.2,4.02 | |
| resources or running computations or,3429.359,4.321 | |
| whatever so I suspect that we might,3431.22,5.46 | |
| ultimately create currencies that are,3433.68,5.879 | |
| more based on energy rather than,3436.68,5.46 | |
| something else and then of course as the,3439.559,4.381 | |
| amount of energy we produce goes up the,3442.14,3.6 | |
| amount of currency we have goes up and,3443.94,3.119 | |
| so then it's a matter of allocating,3445.74,3.42 | |
| energy and material rather than,3447.059,6.121 | |
| allocating something Fiat like Euros or,3449.16,5.399 | |
| dollars,3453.18,4.26 | |
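A toy sketch of the energy-backed-currency idea, where the money supply tracks kilowatt-hours produced and allocating currency is really allocating energy. The conversion constant, category names, and numbers are all hypothetical:

```python
# Toy sketch of an energy-backed currency: the money supply expands with
# total energy production, so budgeting currency means budgeting
# kilowatt-hours. All constants here are hypothetical.

UNITS_PER_KWH = 1.0  # one currency unit represents one kilowatt-hour

def money_supply(total_kwh_produced):
    """Currency supply grows in lockstep with energy production."""
    return total_kwh_produced * UNITS_PER_KWH

def allocate(total_kwh, shares):
    """Split the energy budget by fractional shares (must sum to 1)."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9
    return {name: total_kwh * frac for name, frac in shares.items()}

budget = money_supply(1_000_000)  # 1 GWh produced -> 1,000,000 units
print(allocate(budget, {"compute": 0.5, "refining": 0.3, "other": 0.2}))
```

The design point mirrors the transcript: as energy production rises, so does the currency supply, and allocation questions become questions about energy and materials rather than fiat units.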
| that being said I did create,3454.559,5.881 | |
| a video called post labor economics,3457.44,5.1 | |
| which covers some of this but not a lot,3460.44,3.6 | |
| of it we're gonna have to put a lot more,3462.54,2.94 | |
| thought into,3464.04,3.72 | |
| economics of the future in light of,3465.48,4.859 | |
| AGI because the economic incentives of,3467.76,4.64 | |
| AGI are going to be completely different,3470.339,4.321 | |
| AGI doesn't need to eat it does need,3472.4,4.659 | |
| power but we can hypothetically create,3474.66,4.86 | |
| infinite power with solar fusion et,3477.059,4.681 | |
| cetera so what are the economic,3479.52,5.16 | |
| forces in the future not sure yet,3481.74,5.4 | |
| okay I've thrown a lot at you this,3484.68,4.74 | |
| problem is solvable though there's a lot,3487.14,3.719 | |
| of components to it a lot of moving,3489.42,4.02 | |
| pieces it is very complex,3490.859,4.681 | |
| but we are a global species and this is,3493.44,4.02 | |
| a planet-wide problem,3495.54,3.6 | |
| one of the biggest things that everyone,3497.46,4.68 | |
| can do is stop thinking locally think,3499.14,5.4 | |
| globally think about,3502.14,4.38 | |
| yourself as a human as a member of the,3504.54,4.2 | |
| human species and not as an American or,3506.52,4.26 | |
| a German or you know a Russian or,3508.74,4.56 | |
| whatever we are all in this together we,3510.78,5.94 | |
| have exactly one planet to live on,3513.3,5.4 | |
| and we have exactly one shot at doing,3516.72,3.0 | |
| this right,3518.7,3.96 | |
| so eyes on the prize we have a huge,3519.72,5.04 | |
| opportunity before us to build a better,3522.66,4.86 | |
| future for all of us humans and,3524.76,4.799 | |
| non-humans alike,3527.52,5.099 | |
| and I remain intensely optimistic,3529.559,5.461 | |
| now that being said some people have,3532.619,4.381 | |
| found it difficult to know what to make of me,3535.02,4.68 | |
| because while I am very optimistic I am,3537.0,4.619 | |
| also acutely aware of the existential,3539.7,4.08 | |
| risk I will be the first to say that if,3541.619,3.96 | |
| we don't do this right you're not going,3543.78,3.299 | |
| to want to live on this planet not as a,3545.579,3.121 | |
| human at least,3547.079,4.861 | |
| I started what is called,3548.7,4.74 | |
| the GATO framework the GATO,3551.94,4.08 | |
| community is self-organizing and has,3553.44,4.98 | |
| started sending out invitations again so,3556.02,4.2 | |
| GATO stands for the Global,3558.42,4.139 | |
| Alignment Taxonomy Omnibus which is the,3560.22,4.02 | |
| framework that we put together in order,3562.559,4.861 | |
| to help achieve this future this AI,3564.24,5.46 | |
| Utopia the main goal of the gato,3567.42,4.56 | |
| Community is education empowerment and,3569.7,5.82 | |
| enablement E3 so rather than do the work,3571.98,6.359 | |
| ourselves we are focusing on empowering,3575.52,5.22 | |
| and enabling and educating people on how,3578.339,5.28 | |
| to participate in this whole thing now,3580.74,4.859 | |
| that being said I am stepping back,3583.619,3.061 | |
| because,3585.599,3.361 | |
| such a movement should never be about,3586.68,5.04 | |
| one person it should never be about a,3588.96,5.7 | |
| cult of personality or one leader it,3591.72,4.92 | |
| it intrinsically needs to be,3594.66,4.5 | |
| consensus-based and community-based,3596.64,4.14 | |
| and so the GATO community is learning,3599.16,3.54 | |
| how to self-organize now,3600.78,3.0 | |
| and they're getting good at it pretty,3602.7,3.18 | |
| quickly so if you want to get involved,3603.78,4.4 | |
| the website is in the link,3605.88,5.16 | |
| gatoframework.org and thanks for watching I,3608.18,6.3 | |
| hope you got a lot out of this cheers,3611.04,3.44 | |