Welcome to Newsletter 5. I am mid-edit on ToKCast episode 115, the next instalment in my breakdown of The Fabric of Reality. It covers chapter 5 of that book, called “Virtual Reality”, and as I describe it there, it is the “synecdoche” chapter because it contains so much of the rest of the book, and of the rest of the worldview, right there in that single chapter. Hopefully that comes out in the next few days - it’s a long one.
I listened to Sam Harris’s latest podcast, number 180 of “Making Sense”, called “The Future of Artificial Intelligence”. It is a conversation with Eric Schmidt, the former CEO of Google, a businessman who has written about technology and, of course, AI and its risks.
So although Sam’s interview with Eric tried to focus on the upsides of AI and technology, we were of course drawn, as often happens in these conversations, into the deep well of catastrophic musings. The AI apocalypse - or just concerns about runaway AI and so on - captures us, as does the climate catastrophe and, well, name your fear about the future, for the same reason blockbuster disaster movies do. It’s fun to be thrilled. But the reality we tend to occupy is rather often less catastrophe and more a steady climb towards better days. Yes: there is some hell on Earth, like Ukraine right now - but these are becoming the exception for us, in our Enlightened times, rather than the rule as they are for static societies.
Prophecy is biased towards pessimism.
Prophecy is guessing at a future which will be impacted by knowledge creation and the choices people make. In other words: guessing at what is impossible to know.
There are two ways of speaking about the future. We can predict or we can prophesy. The difference is that a prediction is a logical derivation from a scientific theory where we can explain why human choice will have no impact. For example, I can predict that adding NaOH and HCl in a beaker will produce some NaCl and water. That’s chemistry. I can predict that dropping a rock out my window, which is about 20m above the ground, will take roughly 2 seconds to hit the ground:

20 = 1/2 x 10 x t^2
4 = t^2
t = 2 seconds

That’s just physics (taking g to be about 10 m/s^2). I assume no one catches it mid-flight. Those are scientific predictions.
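For anyone who prefers to see that arithmetic executed rather than done by hand, here is a minimal sketch in Python of the same constant-acceleration prediction. It assumes no air resistance and rounds g to 10 m/s^2, just as the rough working above does; the function name fall_time is simply mine for this example.

```python
import math

def fall_time(height_m: float, g: float = 10.0) -> float:
    """Seconds for an object dropped from rest to fall height_m metres,
    using d = 1/2 * g * t^2 solved for t (no air resistance assumed)."""
    return math.sqrt(2 * height_m / g)

print(fall_time(20))  # -> 2.0, matching the back-of-the-envelope figure above
```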
However when people get involved they create knowledge and that knowledge can cause them to make choices not possible before the knowledge creation.
So if you try to guess the future years from now - how the climate might be or what AI might be like - that all depends on the content of knowledge yet to be created. And predicting the content of future knowledge - future explanations - is impossible because doing so would mean having the knowledge then - prior to its creation. A contradiction. Thus guessing at times when knowledge creation will have an impact is prophesying.
But almost all intellectuals do it and indeed it is why so many get the ear of podcasters, media, corporations and governments. The intellectuals are asked for their guesses about the future. They call those guesses “predictions” but actually they are prophecies.
And the thing is this: anyone can guess what problems there might be. How things will go wrong. What tragedy will befall us. AI? It will go bad and here is how. Climate? It will go bad and here is how. Population? It will go bad and here is how. Nuclear energy? It will go bad and here is how.
We know - it’s science - that the Earth will warm, icecaps will melt, sea levels will rise and cities will be inundated. It’s science. Don’t you care about the science?
But the thing is that those problems have potential solutions and by definition those are far more difficult to imagine. And the reason for that is that the solutions are the explanatory knowledge. Explanatory knowledge is created for a reason: to solve a problem. Sometimes many problems.
Imagining problems is rather like generating fictions. We can all do it because we all have imaginations and it’s fun to imagine - even the bad stuff. But solutions? Well, now we’re into science and other things. The tough stuff (well, it’s fun for some of us) - but many find physics and geology and chemistry and mathematics, unfortunately, boring. So they find it hard and THUS they find it harder to imagine the solutions. Well, in fact we all do. It’s difficult to come up with scientific solutions, which is to say scientific theories: hard-to-vary explanations of the physical world. And it’s just logic that imagining problems is easier than the conjunction of imagining the problems AND their solutions.
And that is why guessing the future is always biased towards bad news. To pessimism. To the world without the solution. Because your guess about the future - unsurprisingly - did not also guess the million, billion or trillion dollar technology that will solve the problem you guessed. It was harder to think of the new scientific theory needed to overcome the disaster your mind can readily conjure.
There is no unproblematic state (besides death). Every new solution creates new problems - but the problems are better problems. You’re hungry so you cook yourself a nice meal. The dirty dishes you are left with are a preferable problem to going hungry all night. We were cold and in the dark and without electricity, and we died early, hungry and sick. But we also found that burning fossil fuels in combustion engines and inside power stations kept us warm, enabled efficient cooking and got us from A to B faster. We became wealthier and lived longer, more happily and more healthily. But: we created some pollution along the way. The pollution from power stations is a preferable problem compared to dealing with all the problems of being without…power stations.
None of this is an argument for being cavalier about anything. I don’t want to be cavalier. Optimism is not cavalier. It’s just “not-pessimism”. We can consider the problems but we can also expect that problems are soluble and that things will continue to get better. Even with climate change - which we can and will solve. And even with AI and AGI.
And just on AGI: they will be people.
They might be made from silicon - perhaps - but there is only one way for them to learn. Like us: to create knowledge. And there is only one way to do that: conjecture and refutation: guessing at the truth about reality and checking it against that objective reality. If silicon people decide to do something we do not want, then we will treat them the way we treat people around us now who do what we do not want. We will communicate with them. In some cases - where the disagreement is with laws - we will have a legal system and a police force to take care of such matters.
The concern that they are a special danger is a concern that some people should be treated differently. And that is actually a worse problem. But, if that episode of “Making Sense” is anything to go by, one solution is to “switch off” the AGI. Whether that means the death penalty or imprisonment I don’t know. But I do know this: the people of the future who have to deal with these matters will be more enlightened than we are. They will know more and they will be more moral. They will understand personhood better and extend it to the AGI.
I know all this because all evils are due to a lack of knowledge. And the evil some fear now from the coming AGI is just due to a certain kind of ignorance. But when we have AGI we won’t be ignorant of them anymore. They’ll teach us as we teach them and then we’ll know the fears of 2022 were misplaced.