The American computer scientist and techno-optimist Ray Kurzweil is a long-serving authority on artificial intelligence (AI). His bestselling 2005 book, The Singularity Is Near, sparked imaginations with sci-fi-like predictions that computers would reach human-level intelligence by 2029 and that we would merge with computers and become superhuman around 2045, which he called “the Singularity”. Now, nearly 20 years on, Kurzweil, 76, has a sequel, The Singularity Is Nearer – and some of his predictions no longer seem so wacky. Kurzweil’s day job is principal researcher and AI visionary at Google. He spoke to the Observer in his personal capacity as an author, inventor and futurist.
Why write this book?
The Singularity Is Near talked about the future, but that was 20 years ago, when people didn’t know what AI was. It was clear to me what would happen, but it wasn’t clear to everybody. Now AI is dominating the conversation. It is time to take a look again both at the progress we’ve made – large language models (LLMs) are quite delightful to use – and the coming breakthroughs.
Your 2029 and 2045 projections haven’t changed…
I have stayed consistent. So 2029, both for human-level intelligence and for artificial general intelligence (AGI) – which is a little bit different. Human-level intelligence generally means AI that has reached the ability of the most skilled humans in a particular domain, and by 2029 that will be achieved in most respects. (There may be a few years of transition beyond 2029 where AI has not surpassed the top humans in a few key skills like writing Oscar-winning screenplays or generating deep new philosophical insights, though it will.) AGI means AI that can do everything that any human can do, but to a superior level. AGI sounds more difficult, but it’s coming at the same time. And my five-year-out estimate is actually conservative: Elon Musk recently said it is going to happen in two years.
Why should we believe your dates?
I’m really the only person who predicted the tremendous AI interest that we’re seeing today. In 1999 people thought that would take a century or more. I said 30 years and look what we have. The most important driver is the exponential growth in the amount of computing power for the price in constant dollars. We are doubling price-performance every 15 months. LLMs just began to work two years ago because of the increase in computation.
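Taken at face value, a 15-month doubling time compounds dramatically. A back-of-envelope sketch (illustrative arithmetic only, not from the interview) of what the claim implies:

```python
# Illustrative arithmetic for a fixed doubling time in price-performance.
def fold_increase(years: float, doubling_months: float = 15.0) -> float:
    """Multiplicative gain in compute per dollar after `years` of doublings."""
    return 2.0 ** (years * 12.0 / doubling_months)

# Ten years of 15-month doublings is eight doublings: a 256-fold gain.
print(fold_increase(10))         # 256.0
# The 30 years from 1999 to 2029 would be 24 doublings: about 16.8m-fold.
print(round(fold_increase(30)))  # 16777216
```

On these assumptions the curve compounds fast enough that even small changes to the doubling period shift the endpoint by years, which is why that 15-month figure does so much work in Kurzweil’s argument.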
What’s missing currently to bring AI to where you are predicting it will be in 2029?
One is more computing power – and that’s coming. That will enable improvements in contextual memory, common-sense reasoning and social interaction, which are all areas where deficiencies remain. Then we need better algorithms and more data to answer more questions. LLM hallucinations [where they create nonsensical or inaccurate outputs] will become much less of a problem, certainly by 2029 – they already happen much less than they did two years ago. The issue occurs because they don’t have the answer, and they don’t know that. They look for the best thing, which might be wrong or not appropriate. As AI gets smarter, it will be able to understand its own knowledge more precisely and accurately report to humans when it doesn’t know.
What exactly is the Singularity?
Today, we have one brain size, which we can’t go beyond to get smarter. But the cloud is getting smarter and it is growing really without bounds. The Singularity, which is a metaphor borrowed from physics, will occur when we merge our brain with the cloud. We’re going to be a combination of our natural intelligence and our cybernetic intelligence and it’s all going to be rolled into one. Making it possible will be brain-computer interfaces, which ultimately will be nanobots – robots the size of molecules – that will go noninvasively into our brains through the capillaries. We are going to expand intelligence a millionfold by 2045 and it is going to deepen our awareness and consciousness.
It is hard to imagine what this would be like, but it doesn’t sound very appealing…
Think of it like having your phone, but in your brain. If you ask a question, your brain will be able to go out to the cloud for an answer, similar to the way you do on your phone now – only it will be instant, there won’t be any input or output issues, and you won’t realise it has been done (the answer will just appear). People do say “I don’t want that”: they thought they didn’t want phones either!
Kurzweil in Cambridge, Massachusetts in 1977 with the Kurzweil Reading Machine, which converted the printed word into synthetic speech. Photograph: Bettmann Archive
What of the existential risk of advanced AI systems – that they could gain unanticipated powers and seriously harm humanity? AI “godfather” Geoffrey Hinton left Google last year, in part because of such concerns, while other high-profile tech leaders such as Elon Musk have also issued warnings. Earlier this month, OpenAI and Google DeepMind workers called for greater protections for whistleblowers who raise safety concerns.
I have a chapter on perils. I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development]. We do have to be aware of the potential here and monitor what AI is doing. But just being against it is not sensible: the advantages are so profound. All the major companies are putting more effort into making sure their systems are safe and align with human values than they are into creating new advances, which is positive.
Won’t there be physical limits to computing power that put the brakes on?
The computing that we have today is basically perfect: it will get better every year and continue in that realm. There are many ways we can continue to improve chips. We’ve only just begun to use the third dimension [create 3D chips], which will carry us for many years. I don’t see us needing quantum computing: we’ve never been able to demonstrate its value.
You argue that the Turing test, wherein an AI can communicate by text indistinguishably from a human, will be passed by 2029. But to pass it, AI will need to dumb down. How so?
Humans are not that accurate and they don’t know a lot of things! You can ask an LLM today very specifically about any theory in any field and it will answer you very intelligently. But who can possibly do that? If a human answered like that, you’d know it was a machine. So that’s the purpose of dumbing it down – because the test is trying to imitate a human. Some people are reporting that GPT-4 can pass a Turing test. I think we have a few more years until we settle this issue.
Not everyone is likely to be able to afford the technology of the future you envisage. Does technological inequality worry you?
Being wealthy allows you to afford these technologies at an early point, but also one where they don’t work very well. When [mobile] phones were new they were very expensive and also did a terrible job. They had access to very little information and didn’t talk to the cloud. Now they are very affordable and extremely useful. About three-quarters of people in the world have one. So it’s going to be the same thing here: this issue goes away over time.
The book looks in detail at AI’s job-killing potential. Should we be worried?
Yes, and no. Certain types of jobs will be automated and people will be affected. But new capabilities also create new jobs. A job like “social media influencer” didn’t make sense, even 10 years ago. Today we have more jobs than we’ve ever had, and US average personal income per hour worked is 10 times what it was 100 years ago, adjusted to today’s dollars. Universal basic income will start in the 2030s, which will help cushion the harms of job disruptions. It won’t be adequate at that point but over time it will become so.
There are other alarming ways, beyond job loss, that AI is promising to transform the world: spreading disinformation, causing harm through biased algorithms and supercharging surveillance. You don’t dwell much on those…
We do have to work through certain types of issues. We have an election coming and “deepfake” videos are a worry. I think we can actually figure out [what’s fake] but if it happens right before the election we won’t have time. On issues of bias, AI is learning from humans, and humans have bias. We’re making progress but we’re not where we want to be. There are also issues around fair data use by AI that need to be sorted out via the legal process.
What do you do at Google and did the book go through any pre-publication review?
I advise them on different ways they can improve their products and advance their technology, including LLMs. The book is written in a personal capacity. Google is happy for me to publish these things and there was no review.
Many people will be sceptical of your predictions about physical and digital immortality. You anticipate medical nanobots arriving in the 2030s that will be able to enter our bodies and carry out repairs, so we can remain alive indefinitely, as well as “afterlife” technology coming in the 2040s that will allow us to upload our minds so they can be restored – even put into convincing androids – if we experience biological death.
Everything is progressing exponentially: not only computing power but our understanding of biology and our ability to engineer at far smaller scales. In the early 2030s we can expect to reach longevity escape velocity, where every year of life we lose through ageing we get back from scientific progress. And as we move past that, we’ll actually get back more years. It isn’t a solid guarantee of living for ever – there are still accidents – but your probability of dying won’t increase year to year. The capability to bring back departed humans digitally will bring up some interesting societal and legal questions.
What is your own plan for immortality?
My first plan is to stay alive, thereby reaching longevity escape velocity. I take about 80 pills a day to help keep me healthy. Cryogenic freezing is the fallback. I’m also intending to create a replicant of myself [an afterlife AI avatar], which is an option I think we’ll all have in the late 2020s. I did something like that with my father, collecting everything that he had written in his life, and it was a little bit like talking to him. [My replicant] will be able to draw on more material and so represent my personality more faithfully.
What should we be doing now to best prepare for the future?
It is not going to be us versus AI: AI is going inside ourselves. It will allow us to create new things that weren’t feasible before. It’ll be a pretty fantastic future.