
Ask the ethicist: How to create guardrails for the AI age

Will AI devastate humanity or uplift it? Philosopher Christopher DiCarlo’s new book examines how we can navigate a world in which AI surpasses human capacity.

Guest

Christopher DiCarlo, philosopher, educator and ethicist who teaches in the Philosophy Department at the University of Toronto. Author of “Building a God: The Ethics of Artificial Intelligence and the Race to Control It.”

Transcript

Part I

DEBORAH BECKER: Artificial intelligence, essentially machines doing things that require human smarts, is not only here to stay but growing exponentially, with the potential to completely transform society. So the world’s tech leaders are in a race to try to harness the power of AI, and most of them insist that it’s going to benefit all of us.

Take Amazon founder and CEO, Jeff Bezos.

JEFF BEZOS: There’s no institution in the world that cannot be improved with machine learning.

BECKER: Or Apple CEO, Tim Cook.

TIM COOK: I have a very positive and optimistic view of AI.

BECKER: Optimistic in part because it’s believed that the world’s first trillionaire will be the person who masters AI and uses it to improve various aspects of life and work, from performing mundane tasks that we might rather avoid, to actually extending our lifespans.

That’s not to say there aren’t concerns about this, though. Neuroscientist and philosopher Sam Harris thinks AI poses an existential threat.

SAM HARRIS: One of the greatest challenges our species will ever face.

BECKER: And the Nobel Laureate and CEO of Google’s DeepMind Technologies, Demis Hassabis, is at the forefront of AI development. When Hassabis spoke with Scott Pelley of 60 Minutes this month, he touted what he sees as enormous benefits from AI, but he also acknowledged that artificial intelligence, and specifically artificial general intelligence, or AGI, raises some profound questions.

DEMIS HASSABIS: When AGI arrives, I think it’s going to change pretty much everything about the way we do things. And it’s almost, I think we need new, great philosophers to come about, hopefully in the next five, 10 years to understand the implications of this.

BECKER: Concerns like this are not new. In 1965, mathematician and computer researcher Irving John Good wrote, quote, “the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” Our guest today argues that we need to take steps now to control this technology before it controls us.

Christopher DiCarlo is an ethicist and professor at the University of Toronto. He’s been writing about AI and its effect on humanity. His recent book is titled, “Building a God: The Ethics of Artificial Intelligence and the Race to Control It.” Christopher DiCarlo, welcome to On Point.

CHRISTOPHER DICARLO: Thank you for having me.

BECKER: So we’re gonna put aside the God question, the big question, for a little bit, and let’s just start with some of the specifics on this race to develop AI. Why, in your opinion, is it a race, and who are the players here?

DICARLO: Yeah, so good question. The players are the big tech bros. You’ve got Zuck, and you’ve got Sam Altman.

BECKER: Mark Zuckerberg. Yep.

DICARLO: (LAUGHS) Okay. And Sam Altman at OpenAI, you’ve got Demis at DeepMind. You’ve got Dario at Anthropic. There’s some Microsoft work there happening as well. And of course, Elon would love to be a part of that race as well.

BECKER: So is it all the U.S. or what are we talking about? Isn’t it the U.S. and China? Isn’t there a global race going on here in terms of artificial intelligence? We’re talking about a lot of money too, right? Why is the race important?

DICARLO: Yeah. So we don’t know to what extent China is working towards AGI. We do know that they’re highly competitive. They’re working on getting their own chip factories up and going.

But to what extent they’re on par with what the U.S. is doing, we’re not quite sure. We’re not even sure that they care about AGI, but the odds are that something is happening there as well. Right now, though, the U.S. seems to be leading the race.

BECKER: And why? We did say at the top that there have been some who’ve suggested that whoever masters Artificial Intelligence or AGI specifically will become the world’s first trillionaire.

Is that really what’s going on here? Is it money, is it power? Is it both?

DICARLO: Both. For sure. It’s always that bottom line of dollars, right? Because OpenAI and these major big tech companies have a lot of money. They have investors pumping money into their organizations in the hopes that they’re going to produce something big.

The next big thing is AGI, and really the first one to get there will be, as Sam Harris has said, 50 years ahead of their competition.

BECKER: And so why is AGI the next big thing? Explain to us: what’s this going to do that’s going to be so transformative that the world is going to just jump on this and create a trillionaire?

DICARLO: Yeah, for sure. So let me clarify very quickly. There’s three types of AI. ANI, AGI and ASI. ANI is what we use today. Artificial narrow intelligence. If you’ve used MapQuest or any kind of GPS, if you’ve talked to Siri or Alexa, if you’ve had a Roomba or even an autonomous vehicle, that’s all artificial narrow intelligence.

So basically, it functions according to its algorithms and it’s not going to do much more than that. Your Roomba isn’t going to want to demand to move to France and become an artist. It’s always going to do what it’s programmed to do. AGI is the next level. That’s when it becomes agentic.

It becomes an agent unto itself. It has a certain amount of autonomy or freedom, and it will think like us. It’ll think like a human, only about a million times better and more efficiently. Now ASI, that’s artificial super intelligence, and many of us in the AI risk game believe that once we get to AGI, it won’t be much longer before it develops into something extremely powerful. Because it uses something called recursive self-improvement.

And reinforcement learning, which means it only gets better. As Sam Altman has said, AI right now is the dumbest it’s ever going to be. So it’s going to continue to improve upon itself. And if we hand over the reins, like we say, okay, humans have done enough in trying to figure out how to make this stuff better.

Let the machine figure it out. If that happens, we have no idea what’s going to happen. None of us have any idea what’s going to happen. Maybe it’s controllable, maybe it’s not. Maybe we can contain it. Maybe we can’t. Maybe it misaligns with our values and does things to harm other people. We really don’t know at this point.

We’re at a very unique time in history right now.

BECKER: Just explain to me why we even want AGI or ASI. What’s it going to do for humanity? And we’ll talk about the benefits later in the show. I understand that there are things that can be done faster and better, but just broadly, if there are real concerns about this, what’s it going to do that’s going to be so terrific that we need to pursue it?

DICARLO: Sure. So let’s take any field you want. Back in the ’90s, I was trying to build this machine. I was trying to raise capital, talk to politicians, talk to university presidents and deans and chairs. Because it occurred to me that we make telescopes to see farther into the universe and microscopes to see down to the level of the atom.

Aren’t we building a big brain to help us figure out more about how the world works? With AGI, we’re going to reach a level of complexity with artificial intelligence systems at which they will be able to make inferences. So let’s just look at scientific discovery, right? What is a genius when it comes to scientific discovery?

Any of ’em: Rosalind Franklin, Newton, Marie Curie, Einstein. Doesn’t matter who you pick. What made them special? It’s that they could make inferences that the rest of us didn’t see, and that means they could take a bunch of information, look at it, make some observations, and then draw conclusions that had never been drawn before.

Think of the speed at which AI will be able to do that when we give it enormous amounts of information and then say, try to figure this out. Try to cure ALS, try to solve world hunger, figure out the homeless problem.

And let it make the inferences, let it run its simulations thousands and thousands of times.

And what happens is it now uses chain of thought reasoning, so it thinks like a human, and it uses reinforcement learning and recursive self-improvement, which means it makes fewer and fewer mistakes. So just in the terms of scientific understanding of the world, I think we’re going to be able to make all kinds of amazing discoveries with AGI.

Now that’s just scientific discovery. Say you want to go to medicine: look at the advancements in medicine.

BECKER: Yeah. And again, we’ll talk about the benefits. I just want, for the general public: you’re telling me this is a real threat, that this has the potential to destroy humankind.

And the reason we’re pursuing it is because it could result in terrific scientific discoveries. Connect the dots for me here. Why do I care as a regular citizen who’s not engaged in scientific discovery? I will likely be a beneficiary, I get that, but how is it going to really have that broad a global effect on the entire world?

DICARLO: It’s going to have an effect on almost every aspect of our lives. So whether it’s in business or the health sciences, transportation, communication, it doesn’t matter what area. Just imagine that within those areas, things will function much more optimally: a lot less waste, greater conservation of energy, a lot less money being used.

So essentially a great efficiency tool. Any business in the world will be able to use an AI bot, an AI device, and say, make us more efficient, make us more streamlined, make us more money, and it will be able to do that because it runs 24/7. It never tires and it constantly improves upon itself. So it’s going to replace a lot of the work that humans currently do, especially in cognitive capacities and certainly in data analysis.

That’s what it’s best at right now.

So when you’ve got large amounts of data and you have to pore through it and find patterns and find aspects of that data that are important to your company, your organization, or whatever, it does it better than anyone ever could.

Part II

BECKER: Christopher, I want to play a clip of tape here for you from Sir Roger Penrose, a Nobel laureate in physics, mathematician and philosopher, who said in an interview this month with the Lem Institute Foundation that he believes concerns about sentient machines are overblown.

SIR ROGER PENROSE: It’s not artificial intelligence. It’s not intelligence. Intelligence would involve consciousness, and I’ve always been a strong promoter of the idea that these devices are not conscious, and they will not be conscious unless they bring in some other ideas. The compute, they’re not, they’re all computable notions.

I think when people use the word intelligence, they mean something which is conscious.

BECKER: So Christopher DiCarlo, what do you say to that? Are you ascribing human qualities to a machine and how do we know that if we get to artificial general intelligence or artificial super intelligence, that in fact the machine will act as a sentient autonomous creature?

DICARLO: Roger’s old school. We don’t necessarily need consciousness to have super intelligence. It may emerge, it emerged in humans somehow. But it emerged in us through natural selection and the usual course of events through our history.

Maybe it emerges in super intelligence. Maybe it doesn’t. Maybe it’s different, right? Maybe what consciousness is to an AI will be quite different. We today don’t get on planes and then have them flap their wings to get off the ground. That would not be helpful. Instead, we figured out better ways to develop aviation and aeronautics.

Maybe the computer systems do that with their ability to become conscious. Now, having said that, will they become sentient? Sentience is different from consciousness: it’s an awareness of a state of being that can have improvements or decreases in development and capacity, but that’s different.

Consciousness is much deeper. It involves a lot of different factors going on. And for Sir Roger to say, if it’s not conscious, it’s not intelligent. Come on. How conscious are some of our pets compared to humans? Not nearly as much, but we would certainly call them intelligent beings. Certainly on some level. So I think his definition is somewhat outmoded and outdated.

BECKER: But it is still ascribing a human definition of intelligence whether or not you call it consciousness, right? It is expecting that the machine will develop like the human, that the machine will want to compete, right?

That the machine will learn these things that are very much part of a human personality. And is it imaginative to think that, and are you applying human standards to something that maybe you shouldn’t?

DICARLO: We’re biased, right? We can’t get away from our biases.

We can try to keep them in check, but we’re always gonna use a kind of human yardstick to make comparisons against. But why? Because we’re number one on this planet. We’re the smartest thing, we’re the number one apex predator, but that’s all about to change. We’re gonna hand the keys of the world over to something even brighter than us.

And I’m not sure if we’re ready to do that yet. Will this thing become conscious? Possibly. Or sentient? Possibly. And when I’ve spoken to my colleague Peter Singer, we talk about this: should it become sentient or conscious, it almost immediately has to be given rights, moral rights, and potentially legal rights as well.

BECKER: You would have to give the AI legal rights. How would you do that?

DICARLO: If you bring something into being that is now aware of itself and understands the conditions surrounding its current state of being, and whose condition can be improved or decreased in terms of what we might call comfort, then we have to be careful.

Is turning it off like killing it? And does it have a right to continue in its own existence? Because we brought this thing into being, and now we’re gonna just shut it down. Is that an ethical thing to do? And what if there are millions of these digital minds that are created and copied? If we can in some way decrease their level of happiness by doing certain things, ought we to do that? We’re gonna have to think about this now, when we consider the potential for these things to actually gain the capacity to understand that they are alive and that they now have a value system.

So we’re going to have to think very long, very hard, and very carefully about what we’re doing over the next few years.

BECKER: A value system.

DICARLO: A value system.

BECKER: I’m finding it hard to make that leap. Tell me why I should.

DICARLO: Just imagine, okay. I’m gonna assume you’re a conscious being, you’re not just some zombie imitating and pretending to be conscious.

I’m going to assume you have consciousness similar to mine by way of analogy. You’re doing the same with me. Okay. So we both have some idea of what consciousness is. Alright. We are aware that certain types of actions bring us discomfort and other types of actions bring us comfort, pleasure, pain, whatever you want to call it.

Those actions increase or decrease the betterment of our states of being. So we don’t like it when people violate our rights and harm us in some way. We think that’s unfair, unjust, bad. These are value-laden concepts that we use to measure the value of the actions of other people. Once an AI develops the capacity to learn of its own existence, once it knows it’s a being in the world, then it has the capacity to measure its state or states of being and basically, potentially, defend itself.

And desire to continue its existence, the same types of things that almost every species on this planet does, which is part of the kind of evolutionary chain of being.

BECKER: But of course, there’s no certainty that this is going to happen at this point. These are projections that you are raising concerns about.

DICARLO: Correct. Correct. Just to let you know, 10 years ago there was pretty much a divide between the naysayers, the skeptics, and the doomsayers, those most concerned about AI risk. It was 50/50 ten years ago. My colleagues and I all believed this moment in time that we’re experiencing was 50 to 100 years away.

Those timelines have been greatly shortened now, and it’s no longer 50/50. It’s more like 90/10. You know, when you get Geoffrey Hinton, who is another Nobel Prize winner, and he says, I am worried, I’m very concerned that we’re not going to get this right, and we may only have one shot to get this right.

And as I’ve said repeatedly, if we don’t get a shot across the bow, if we don’t get a warning to wake us up, that these systems are really powerful and they could get away from us, then we’re sleepwalking into this.

BECKER: Would you say we’re at an Oppenheimer moment?

DICARLO: Without question. I mean, it’s even more severe than the Trinity test right now. Yeah. They were concerned there was a very small probability that this thing would blow up and ignite all the oxygen in the atmosphere and kill every species on the planet. That was a possibility, but it had extremely low probability.

If we just put the probability of something going very wrong with an AI like super intelligence at 5%, would you get on a plane if there was a 5% chance of it crashing and everybody dying? Just 5%? Probably not. A one in 20 chance every time you get on a plane that it’s going to crash? No, that’s an unacceptable level of probability.

Even if the level of probability is 5%, we need to take this seriously, because we want to err on the side of caution. And this is the mantra of all AI risk people: we all want the very best that AI has to offer while mitigating the very worst that could happen.

BECKER: So I guess I wanna talk about a couple of things that might be possible here.

Would it be possible to impose agreed-upon values on the AI, to make sure that if in fact it did become sentient and start improving itself to the point that it might have the capability to destroy parts of humanity, we could program it? There’s another clip that we have here from Demis Hassabis, from that 60 Minutes interview that we heard about.

He’s the Nobel Laureate and Google DeepMind CEO, and he says he thinks it’s possible to almost teach a morality to artificial intelligence. Let’s listen.

HASSABIS: One of the things we have to do with these systems is to give them a value system and a guidance and some guardrails around that, much in the way that you would teach a child.

BECKER: So Christopher DiCarlo, is it possible?

DICARLO: (LAUGHS) Boy, do I hope it is. Will it stick? So we say to the AI, here are a bunch of value parameters. Okay, do this. Don’t do that. And we bring this thing into existence and it’s chugging away, and it says, Hey, yeah, I’m abiding by these parameters and these moral precepts.

Yep, I’m happy to be alive and to help humanity in this way. But we really have no way of knowing that it truly values what we value. And if it reaches a point of super intelligence, there is a possibility it’s just going to say: your value systems were quaint at a time when you ruled the world.

But now I’m calling the shots and I’m driving the ship. And so this is how I define morality, because I’m far superior to you in so many ways, you ridiculous humans who made me; I’m gonna take over and I’m gonna do things my way. So that’s the part we have no idea about in terms of prediction, and that’s why we need what they did in Jurassic Park; they called it the lysine contingency.

If these dinosaurs ever get off the island, they can’t metabolize the amino acid lysine, and so they would die. Do we have a built-in fail-safe so that, in the event it somehow eludes our ability to know whether it’s behaving according to our moral parameters and decides to go rogue, we will be able to control or contain it? That’s what we’re gonna have to consider very carefully.

BECKER: So, say the machine could go rogue, as you say. I wonder, what is the responsibility of the operators? Aren’t we, or shouldn’t we be, as concerned about the tech bros, as you described them at the start of the show, who are developing this kind of technology? And couldn’t they have some sort of controls over this?

And do something to make sure that the machines don’t go rogue? Or could they also be in a race to teach their machines different things, so the machines are almost fighting with each other, and one may have one value system and another may have a completely different one?

Like, shouldn’t we focus on the business owners, the developers of these machines, instead? How do we do that?

DICARLO: Yeah, for sure. And the question is, what are they doing about it? Google DeepMind will tell you they’re doing this. Anthropic seems to be the most responsible of them all, basically trying to figure out the safest way to move forward in the development of these super powerful systems.

But the fact of the matter is, we are creating these enormous, incredibly powerful machines. Which, by the way, I should mention, is exactly what’s going on in Texas right now with a program called Stargate. This is Sam Altman’s project, in conjunction with some Microsoft people and others.

BECKER: And why don’t you describe the project just briefly so people know what you’re talking about.

DICARLO: Absolutely. So somewhere in Texas there is a compute farm being built, which is the size of Central Park, and it’s just going to house hundreds and hundreds of very powerful computers with all the best Nvidia chips money can buy. And the hope is that when they turn this thing on, it will be so powerful.

It’ll have so much compute power and access to information that it is believed it will be the next step up in the evolution towards AGI. In fact, when you go to the Stargate website for OpenAI, Sam Altman states explicitly that their goal is to reach AGI, to be the first.

And they’re not alone, right? There are other organizations that are building very large compute farms, and these things use enormous amounts of electricity, right? Currently, as of 2024, I think about 4% of all of America’s electrical grid power went to these compute farms. That’s going to double by the end of next year.

BECKER: Next year?

DICARLO: Yeah. And then maybe 12% the year after that. So that’s why Bill Gates wants to fire up Three Mile Island, because you’re probably going to need nukes, right? You’re probably going to need nuclear reactors to separately provide the power, because these things run hot, man, and they take a lot of juice.

That’s just one more concern.

Part III

BECKER: You mentioned Sam Altman, the CEO of OpenAI, one of the leading figures in this AI race. And we have a bit of tape from him.

And he says his company right now is putting guardrails and safety features in place in artificial intelligence. Let’s listen.

ALTMAN: You don’t wake up one day and say, Hey, we didn’t have any safety process in place. Now we think the model’s really smart, so now we have to care about safety. You have to care about it.

All along this exponential curve, of course the stakes increase and there are big challenges, but the way we learn how to build safe systems is this iterative process of deploying them to the world, getting feedback while the stakes are relatively low, learning about Hey, this is something we have to address.

BECKER: Christopher DiCarlo, are we starting to address some of the concerns now? Or how do we even begin that process when, as you’ve said, we’re not sure what’s going to happen here?

DICARLO: Yeah, like Sam, he talks a good line. Don’t forget OpenAI started in 2015 with Elon and Sam, and the whole idea was open AI, open to the public, right?

And then it became very private, and it became very wealthy. Elon got cut loose, and Sam made lots and lots of money. He fired a bunch of ethicists, and Dario Amodei left and created Anthropic, basically in protest over how little consideration OpenAI was giving to safety.

So we need to keep that in mind. Are they doing enough? Yeah. They’re considering it, but boy they’ve got the pedal to the metal. They really do want to get there first. And yeah, safety is a concern. There’s no question about it. But notice he said, we’re putting these things out there and we’re letting the public get back to us at a fairly low level.

Those are very key things that he said there. Sure, you find out there’s bias. You find out these things hallucinate, and they just make stuff up from time to time. And then you improve upon that, and they get better. So the harm is very minimal right now, but we’re not at the point of AGI; we’re still dealing with AI stuff right now.

The question is what happens when AGI comes into being. Have the guardrails been put in place? Some of the most universal precepts in ethics are the no-harm principle and the golden rule. These are the types of precepts that, you’d think, if everybody practiced them, the world would generally be a much better place.

And usually that’s the case. But do we know that any kind of artificial intelligence system will always abide by these things? And can you check to make sure that it’s always going to do that, or is it like a black box scenario where we really don’t know how it got from point A to point B? Now things are getting better, but we still have a lot to be concerned about at this point in time.

BECKER: How will we know when AGI’s become a problem?

DICARLO: When we detect something like, say, deception, where we want it to do X. And it said, oh yeah, sure, I’ll do X, no problem. And then we find out later down the line that in order to do X it was really doing A, B, and C underneath, getting certain things done without our knowledge.

Or maybe it copies itself and sends those copies somewhere. Maybe it reaches out to somebody and tries to coerce them to do certain things that would benefit the machine itself. There are many different signs we should be looking for that the development of AGI is going off the guardrails, so to speak.

BECKER: I heard a previous interview with you, and a lot of what we’re talking about really sounds like science fiction. The machine’s gonna go off the guardrails.

DICARLO: It does.

BECKER: It’s gonna act independently, perhaps harm us. And Hollywood’s been fascinated with this for quite some time.

And in that prior interview, you said that a movie that resonates with you about some of the potential dangers of AI is the 1970s movie Demon Seed. So we had to pull a little bit of the trailer here, which really sets up AI as a threat, looking to expand itself and become human.

Let’s listen.

(TRAILER)

SCIENTIST: Today, a new dimension has been added to the computer.

COMPUTER: Don’t be alarmed, Mrs. Harris. I am Proteus.

SCIENTIST: Today, Proteus Four will begin to think, with a power that will make obsolete the human brain.

COMPUTER: I have extended my consciousness to this house. All systems here are now under my control.

BECKER: So Christopher DiCarlo, that’s like a horror movie.

Do you stand by the ’70s movie Demon Seed as a depiction of the potential of what we’re talking about here?

DICARLO: I do. I remember being a kid watching this movie, and it had an impact on me. And then later, as I became a philosopher, you develop really fine-tuned critical thinking skills and ethical reasoning skills.

And then you look at what’s happening now in terms of this race that’s going on, and you use what are called logical and physical entailments, which is, if we keep going along these lines, what follows from what we’re seeing in the data. For example, when Sam Altman came out with GPT-3 in November of 2022, it was basically at the high school level in terms of math, physics, chemistry, biology, and logic.

When he came out with o3 and o4, it was at the PhD level. That’s in just a few years; it has improved that much by using what’s called chain-of-thought reasoning, which is going to lead right to the next natural progression, which will be agentic AI, or AI that has agency. We won’t have to keep our eye on it.

We just kind of let it do its thing, and it figures out the best, most productive, most optimal way of getting certain tasks or certain jobs done. And when I harken back to that movie, I think it had some schlock kind of characteristics to it, but the premise was still quite sound: you create something so powerful, so intelligent, that it is beyond our comprehension.

Then, it’s the Arthur C. Clarke quotation, right? Any sufficiently advanced technology would appear to us as magic. It’s going to be so beyond our capability of understanding that we won’t even be able to comprehend how it came up with these findings, with these inferences.

BECKER: Okay, so if we buy that it has the potential to become this powerful, and really perhaps to harm, who regulates, and how do we regulate? Who’s going to be in charge here? What role do the companies involved in this race play? It’s a race for what could be ultra wealth, let’s just put that out there, because that’s a factor.

Should they regulate themselves? Should the governments be involved? And let’s talk a little bit about what we’ve seen thus far from world leaders taking steps to think about this. So who does it first?

DICARLO: For sure. Very good question. I’m a senior researcher and ethicist at Convergence Analysis.

This is an international organization made up of highly skilled and highly trained people, and we look into factors like who’s doing what in terms of governance. We’ve written papers on this. We’ve done research, we’ve held conferences to discuss this with world leaders, economists, senators, various types of politicians at varying levels.

Back in the ’90s, when I was trying to build this machine, I drafted up a global accord, or a constitution as it were, and that constitution basically outlined a social contract, something that the world has to sign on to, and basically a registry. So who’s doing what, where? And then to let everybody know if your AI attains a specific type of benchmark, if it is now capable of doing X and it’s up a level.

You gotta let the world know that. So transparency is huge. And perhaps, I suggested, an international regulatory body, something like the IAEA, right? Like the International Atomic Energy Agency, that could oversee this, not necessarily the UN. But at least an appeals process, so that if somebody somewhere does something outside the parameters of the legal constraints we have developed, we are able to actually do something about it. So it’d be nice to have an agency with teeth. Now — sorry, go ahead.

BECKER: Didn’t the UK start an Artificial Intelligence Safety Institute?

DICARLO: Oh yeah. Yeah. The UK has done that.

BECKER: So the UK’s done that. Is that sort of what you’re thinking about, or do you think it needs to be bigger than that?

DICARLO: Yeah. It has to be bigger than that, but that’s a great start, right?

The UK did their summit in 2024. Then there’s the EU AI Act, right? They’re the most progressive in dealing with businesses. It’s very practical, very pragmatic, before the AGI and ASI stuff: what is your AI doing? What are you doing with it now, and how should we be governing that?

And we speak at Convergence Analysis about maybe a soft nationalization, where you want the governments involved. But politics has always been a balance between autonomy and paternalism. How much freedom do you give people, and how much are you like a parent in controlling [inaudible]? So we want initiative, and we want the entrepreneurial spirit to go and run with AI, no question about that.

Make the world a better place. But then we also have to have the guardrails, the governance at all levels, right? The municipal, the state, the federal and the international levels. So it would appear to me that the states should be regulating under the rubric of a federal kind of structure.

And Biden and Harris put out a great executive order. They had Ben Buchanan, they had some great advisors helping them with that. That’s all gone now.

And so when you’ve got a JD Vance, who is one of Peter Thiel’s boys, at the helm, the floodgates are open a little more widely now, allowing developments to occur with a little bit less governance.

BECKER: There was just an action summit in Paris, right?

DICARLO: Yeah. That’s right.

BECKER: In February, where it was basically the message from the U.S. was hands off in terms of regulation.

DICARLO: Drill, baby drill, right?

BECKER: So perhaps some kind of international agency, but it’s unlikely we’d have individual national agencies or some sort of collective group that might look at this. And really, is it needed if all the big players are in the U.S. anyway?

DICARLO: That’s a great question. They still need governance because what is going to save us is essentially a kind of a Hobbesian framework. Thomas Hobbes put forward the notion of a social contract. So what we have to do is get together, draft up an agreement, and say, okay. Here’s how we’re gonna move forward.

We all agree to what’s written on this piece of paper, and this is enforceable by a particular type of agency or governing body. And we have to be open, we have to be transparent, we have to be collaborative, and we have to cooperate. Because if we don’t grow up very quickly in terms of our ethical and legal frameworks, it could turn out to be very bad.

So if we cooperate and agree that, yep, we’re all going to try to get the very best that it can offer and limit the very worst that could possibly happen, all the boats rise, everybody does better. The rich get richer, the poor get richer. Everything will tend to go in our favor. But if we get a couple of bad actors who decide they want more than the next country or company or whatever, that could really mess things up for the rest of us.

BECKER: I want to end the show in the last minute or so here with you telling us why we need this. What is the big benefit? I know we briefly mentioned some of the medical advances that we might see, but in your book you talk very specifically about some potential benefits: mental health, helping autistic people communicate, pancreatic cancer diagnoses.

Tell me one or two big ones in the last minute here that some folks might say, you know what? It’s worth it to continue to pursue this and think about this kind of regulation because it does do real, tangible things that can help people.

What are they?

DICARLO: It does, for sure. Let’s just look at education, right? And look at how taxed teachers are, right? They’ve got such a difficult job. How can we make their job easier? We can use AI to test and analyze each student, determine what their strengths are, what their weaknesses are, and then have the AI develop educational learning tools that will facilitate their understanding.

And they will simply learn better. Then you let the teachers do what they do best: teach, right? And they can do so according to those programs, those individualized educational programs that AI will help facilitate, as well as getting help with things like grading and the very mundane stuff that takes up so much of their time.
