AI and Us: Computers, Lemonade, and God

A conversation with Ilyas Khan, the founder of the world’s largest and leading quantum computing company, about what AI is (and is not), why it has captured our imagination, and what AI means for humanity.

(Image of AI chip: Igor Omilaev/Unsplash; of lemonade: Laura Chouette/Unsplash; of the hand of God from Michelangelo's "Creation of Adam": Wikipedia)

Ilyas Khan is the founder of Quantinuum, the world’s largest and leading quantum computing company. Quantinuum is an Anglo-American company with joint headquarters in Colorado and Cambridge (Great Britain).


Ilyas was the Chairman of the Stephen Hawking Foundation and was also the founding Chairman of the Topos Institute, which is based in Berkeley, California. Both the Stephen Hawking Foundation and Topos—a research institute taking a uniquely mathematical approach to generating technology for the benefit of society as a whole—are not-for-profit organisations.

Ilyas grew up in Lancashire in the North of England. He was made a Knight of St Gregory by Pope Francis, and is a notable speaker and commentator on matters relating to AI and quantum computing. His account of converting from Islam to Catholicism was published by CWR in 2012.

Mark Brumley: Many people today love the idea of artificial intelligence; others fear it. Perhaps most people have only a very thin slice of understanding about what it is. Can you tell us what artificial intelligence is?

Ilyas: Artificial intelligence is an extension of all the technologies that we have ever invented, starting with fire, through to the wheel, to the spinning and weaving of cloth and clothes, and, more recently, steam-powered machines and locomotion, and then, lo and behold, heavier-than-air flight. The mark of all these things was that we could look at and touch them, and they did things that human beings could not. So, when we fly across the Atlantic, we fly in a machine that does something that we cannot.

Now, a computer is pretty much the same thing, and, you know, we’re not very good at remembering things when it comes to numbers. So, we’re certainly not very good at adding and multiplying and taking away.

Mark Brumley: Some of us are better than others. You mathematicians are pretty good!

Ilyas: But not on an industrial scale. And, of course, patterns and numbers—mathematics—are what these Turing machines are good at. (Turing machines are computers.) They’re better than we humans are. We’ve invented ways of storing things in memory and retrieving them … and we really weren’t threatened by that. We were quite amused by it. We liked the idea that we could have spreadsheets, send emails and then look them up again, and that they could be sent faster than anything we could have done by post.

Artificial intelligence represents an extension of all these things; it is looking at patterns that we humans find very hard. The reason AI has captured our imagination is that it is humanlike. Now, there’s nothing humanlike in an aeroplane or a motor car or a pneumatic machine that drills into the road. But a computer that we interact with, that we ask questions of using human language and get answers back from, is very humanlike. And so we use this terminology, artificial intelligence, which I think is actually quite good because it is artificial, and it is a form of intelligence—because, for instance, we think people who are good at maths are intelligent. They may not be, but we think they are. (Laugh.) There are many different kinds of intelligence. There’s emotional intelligence, musical intelligence—there are wonderful artists and painters who are all intelligent in ways that the majority of us aren’t. Artificial intelligence is an extension of the utilization of these tools, essentially to recognize patterns we humans find hard.

The most compelling example of artificial intelligence is the recent emergence of this thing called a large language model, where vast amounts of data act as a reservoir, and we ask a question of that reservoir through the intermediation of a machine, a computer. We might say, “What’s the capital city of Vietnam?” Or we might say: “I need a speech, because I’m going to a high school commencement. The commencement is in such-and-such a city, at such-and-such a time, and the speech has to be 10 minutes long,” and then we get the speech.

Mark Brumley: That’s a robust explanation of artificial intelligence. Many people would connect with it, at least insofar as what they hear and see in the media is concerned. Of course, some high school kids connect very well because they’re doing their term papers utilizing artificial intelligence exactly the way you described, much to the displeasure of their teachers. That’s AI. But another term people are hearing these days is AGI. What does that mean?

Ilyas: AGI: artificial general intelligence. Computer scientists and computer people who are responsible for coming up with these terminologies are no different from doctors, accountants, and lawyers. There’s a lot of jargon, and often jargon sounds familiar but has a very different meaning [in fact]. That’s my caution about these [computer] words. They’re just jargon; there’s nothing mysterious about them whatsoever. When we confront artificial intelligence, i.e., machine-generated answers, what we are really confronting is a very rapid exploitation of data, such that the answers can be at times accurate and at times creative. There’s a jump from things that are still machine-like and mechanical, where, no matter what, we can see that it’s still a machine, to something that can be generalized into all forms of intelligence. The word general means something that isn’t specific to a single use. It can be used again and again and again.

Well, guess what? That’s what human beings do. When it comes to language, when we’re little kids we don’t just sit in front of a book. We are listening to the world around us, and we’re saying our first words: mama, or dada, or whatever. We absorb, and we generalize, so that, for example, every cat we see is a cat.

Artificial General Intelligence is the shorthand some people use to say that this device is now sentient, or perhaps verging on being conscious and humanlike, and is abstracted away into the ether, where it exists and makes decisions. Some people would even say that you should not insult an AGI machine because it might feel bad! Some people say, “Well, they’re just like us. They’re a form of being.” I know, Mark, that you, like all Christians, are particularly interested in the concept of being. However offensive that might seem to us at first glance, that is what people mean when they refer to AGI.

Mark Brumley: Now we’re getting into some of the philosophical aspects of this. We are going to want to poke at this a bit more. Some people, both proponents and critics of AI or AGI, question whether the word intelligence used in artificial intelligence has the same meaning as when used with respect to human beings. There are different views of this, of course. What are your thoughts?

Ilyas: It is becoming the question, right? And it’s a question that preoccupies regulators, entrepreneurs, users, the woman and man on the street, and even children. I think it is a very good question to probe, and I do agree with you that there appear to be different flavors of interpretation.

My first response is something Wittgenstein said: you know the meaning of a word by the way you use it. I would encourage all of us not to focus too much on the word intelligence. When people say artificial intelligence, think more about how it’s being used. I could say, “There’s a wonderful blue sky.” But I could also say, “I listened to the blues the other day, an R&B record,” or, “I’m feeling a bit blue today.” It’s the same word with very different meanings. It’s not that blue has changed. It’s the way we’re using the word that has changed.

The meaning of intelligence, in artificial intelligence, is actually relatively straightforward, and there’s nothing that we need to get emotionally engaged with. It is a description of things that are humanlike, and which are impressive because of either the scope of the memory used to generate them, or their accuracy, or, quite often, both accuracy and scope. For example, in the old days, to be a software engineer I’d have to go off to school and learn how to code. Today, I don’t need to do that, because I can tell an AI that I need some code and the system will come back and give me the code. And that is intelligent.

Artificial intelligence refers to that sort of thing. Now, it is a real jump from there to equate AI with human beings. As far as I’m concerned, we’re created in the image of God. The intelligence we have, and the language that we use, is a gift and is unique. Our language is not the same as the language of birds and donkeys and squirrels, and yes, chimpanzees and orangutans. They can communicate, and they can display things. You might, with a bit of luck, get a chimpanzee who understands a few basic signs. But then so does my dog. That is very different from this conversation you and I are having. It’s very different from abstraction. I can remember fondly a date that I had with my wife, before she was my wife, and I can describe it. That level of intelligence is uniquely human, and I do not believe that there’s anything to be confused about in that context.

I find that people who want to promote the idea that there is an intelligence embedded in these machines that is the same as ours go through contrived loops of argument. If Thomas Aquinas, or Aristotle for that matter, were here, he would unpack them in about three seconds flat. None of what they say makes sense. And yet if we respond, they get very emotional.

Mark Brumley: You were a friend of the late, great cosmologist Stephen Hawking. In fact, you were the initial chairman of the Stephen Hawking Foundation and served in that capacity for a number of years. Stephen Hawking famously said that artificial intelligence could be the biggest event in human history, but that it could also be the last one, if we don’t avoid the risks. What do you think about that?

Ilyas: I have to be careful here because Stephen had an intelligence and a mind the size of a solar system and was rarely, if ever, wrong. [I have] a lot of respect for Stephen. On this, I do not [quite] agree with him, but I also don’t disagree with him. What do I mean by that?

We recently came through a pandemic. Did that pose an existential risk to humanity? Many people, at points along the way, thought it did. We’ve just watched the film Oppenheimer, and at the point in time when the two [nuclear] bombs were unleashed on Japan, many people felt that that technology posed an existential risk to humanity.

In fact, it does today. A fraction of the bombs out there, if used tomorrow, could annihilate humanity. This is not a new thing we’re confronting. I think what Stephen meant, and what others in that camp still mean, needs to be considered with respect. I do believe that there is a particular set of outcomes, a confluence of circumstances, that could see an artificial intelligence start to make decisions that would be contrary to our interests. And if our interest is survival—let’s forget for the moment about the quality of life or the quality of the people—then it is possible, with maybe a fraction of a percent chance, that within the next 100 years there will be some instruction set that annihilates us.

From a purely factual standpoint, one has to agree that there is something to what Stephen said. But he also said something else: that this is perhaps the most important technological revolution. And that’s where I do agree with him. Because I think that whatever we’re seeing today, machines using these GPU clusters and generative AI—generative meaning that it generates a response that isn’t just limited to a factual answer—are just the first generation. The next generation, and the one after that, particularly when we have quantum technologies, will absolutely be the single most important technological thing that’s happened, and of course it has dangers embedded in it.

Mark Brumley: You mentioned Generative AI. Ignatius Press is in the publishing business, so we often joke that in the future there will be AIs writing the books, AIs publishing the books, AIs buying the books, and AIs reading the books. We’ll just sit back and watch the whole show.

Ilyas: (Laugh.) Well, joking aside, I think that we only use a very small percentage of the capacity given to us in our brains. Maybe our true calling in the future is not to toil and sweat in the sun, but to do things that help the rest of us survive. Maybe, at that point in time, we’ll be able to enjoy (laugh) these things.

Mark Brumley: There’s Aristotle’s idea of the animated tripod, or autonomous tools. The idea was that if we had self-directing devices, people wouldn’t have a slave class—as ancient Greek culture (and others around the world) mistakenly supposed was necessary for a so-called “high culture”. What you describe is like the Aristotelian animated tripods. “Intelligent tools” would free human beings—not from work per se, because there would still be work of a more leisurely or self-improving quality—but from that form of work we call drudgery.

Ilyas: I think that’s right. If we think more carefully about all the technological advances we’ve made, each advance—whatever it was, say, fire, as I slightly tongue-in-cheek mentioned at the beginning—was designed to make life easier. And the tipping point—this at least is my view—is how human beings can live with dignity and grace to the end of their lives, not succumbing to diseases caused either by genetics or by aging, and, along with the majesty of this expanded humanity, have the wherewithal to contemplate the things that matter—love and compassion, our children, our neighbors, our families, art, music, all the things that bring us joy—because at the moment all that drudgery you rightly mentioned gives us, you know, four hours of freedom (laugh) that we go home for. So I’m very much in favor of that characterization. I fear that it’s a little bit further down the road, and you and I may still have to do a bit of drudgery.

Mark Brumley: I think so. (Laugh.) Science fiction often comes to mind when we talk about artificial intelligence. It brings to mind an endless supply of ideas and images. I think, for example, of the early George Lucas film THX 1138. There’s a famous scene where Robert Duvall’s character goes to confession in a State-sponsored confessional. He confesses to a sort of AI divinity, OMM 0000, and there’s the image of the AI taken from a medieval painting by Hans Memling, “Christ Giving His Blessing”. In that scene, we have an artificial intelligence being approached as the priest-confessor, using this image of Christ.

Of course, there are many other science fiction accounts of supercomputers. It’s hard to predict how a technological innovation is going to affect humanity, especially given social dynamics. Do you think it’s possible that artificial intelligence might become so commonplace, so widespread, that human beings would develop a kind of almost religious devotion to it, or have a kind of religious interpretation of artificial intelligence?

Ilyas: Well, the single-word answer is yes; there is that danger. We deify so many things already: we deify celebrity, we deify money, we deify—in some cases very sadly—these machines around us. We are reluctant to talk about it because there’s a large proportion of our society who either refuse to acknowledge, or vehemently deny, God.

We’re in the early stages of encountering artificial intelligence. In the early days, when you and I were going online to trade stocks or look up something on Google, we were overwhelmed and fascinated and scared. That is a very different position from that of the children who’ve known nothing else. But these are still the early days. One lesson we’ve learned from every technology is to be mindful of boundaries. When cars came around, there’s this wonderful story of somebody walking [ahead] with a flag saying, “Danger. There’s this thing coming. Get out of the way!”

We have elaborate rules that govern air travel because when we get into an airplane we don’t want to die. We don’t want to be in an accident. And yet we do not do anything about this new technology that has been unleashed on us. People are very quick to say, “Oh, it’s a free market, and it’s democracy, and we can’t get involved.” But I take a different view. I think we, as human beings, have an obligation—a moral obligation in the Kantian sense, where you know what right and wrong are. This is not right, and unless we turn around and do something about it as a society, the dangers of deification will overwhelm us. That’s the kind of future Stephen Hawking was talking about [involving an] existential threat. I almost think that this would be the worse outcome: a world where we are the slaves of a machine, where we’re not allowed to think against it, and where we’re always afraid.

In some ways, we’re already in that place. Think, Mark, about the number of times that you or your loved ones or your friends have been on a website. The website says, “Click here” before you can go on, and we don’t even read what we’re clicking. We have to get to the gratification. I use the example of my Labrador. I’ll tell my Labrador that I’m going to give him a treat, and he will do anything for the treat, even though it might kill him. That is what we are currently doing. We’re selling ourselves to the data gatherers, and we’re losing agency, or have already lost it in many respects.

Mark Brumley: The perennial problem with things we create is our own weakness as human beings. We have a technology that provides a kind of instant gratification, that becomes attuned to what our desires are and puts them in front of us. It’s a sort of self-inflicted deification of the technology. We could talk of the extreme example of the paperclip maximizer, where the machine takes all of our resources to fulfill its programming to maximize the production of paper clips. That’s an existential threat from the outside. But what you’re describing is an existential threat from within. We’ve created something; it’s an extension of our wills, and we give our wills over to it to gratify our desires.

Ilyas: Yes. There’s so much here that could be said. Maybe one parenthesis I would add is that, historically, technological revolutions have tended to be geographically isolated. This one is global. Gone are the days when the United States of America was the sole leader in a new global technology. And what that means is that we, culturally, socially, intellectually, are not equipped to recognize danger, because sometimes it’s not what we think of as danger. Now, I’m not suggesting for a second … well, maybe I should take that back. What I’m not talking about here is the topicality of something like TikTok and the view that maybe our data, and therefore access to controlling our lives, is being taken by a different sovereign power. That is an extension of something else. More subliminally, when we talk about just clicking through on things and sending our data, the problem is that the data collected doesn’t remain where we gave it. It very quickly becomes global. You and I, and most people who are reading this, will have recognized the experience of going onto a different platform and suddenly finding that the platform “remembers” and “recognizes” who you are.

Mark Brumley: You founded what’s become one of the world’s leading quantum computing companies. Some people lump quantum computing together with all the other “computer stuff”. What is quantum computing? And what’s the benefit? What are the risks to humanity?

Ilyas: A great way of answering is to say that, for the first time, nation-states, one after the other, have invested in a single technology. They’ve crafted programs and devoted legislative committees, and lots and lots of money—lots relative to the size of their economies. The only directly comparable thing I can think of is, historically, armies. All nation-states had armies. But that wasn’t a technology; that was the extension of force.

Why [are nation-states investing in a single technology]? It’s happening because a quantum computer breaches the limits of what can be done using a [classical] computer. What is a quantum computer? The best way of explaining that is to give an example. I’ll give you three examples of what a quantum computer will do, or can do, that [otherwise] cannot be done.

We have always wanted personalized medicine—something that treats me for what is me, not something generic. Penicillin is generic. The AIDS [treatments] were generic. There are dangers to anything generic—it has to be tested. We have a particular physiology controlled by these amazing things encoded in the DNA and RNA sequences, which exist in the chromosomes in our bodies and which have eluded us, and will always elude us, unless we have a quantum computer. That’s one example of why this matters.

And, by the way, if you’re thinking about money, there’s a company in Denmark that came up with this anti-fat drug, and it’s now taking the world by storm. This company is worth whatever it’s worth. But an anti-fat drug is not even a comma in the lexicon that will exist when we have quantum computers and all the things that can be done. The economic impact of this is enormous.

Another example is carbon sequestration. We desperately need to sequester carbon. It’s not enough for us to think about alternatives to fossil fuels. Some might say, “Well, it’s a conspiracy, and there’s no global warming.” I would say to them: “It doesn’t matter whether it’s a conspiracy. There’s too much carbon in our atmosphere. We need to sequester it. Finding a material that can do that is within our grasp.”

A third example is a type of problem that is computational, which is all about optimization and planning. These are just big words for making things better. We waste innumerable hours in every aspect of our lives because things don’t work properly. We’re very accepting, as human beings, of slippage and costs, but we all pay the price, whether we order something online or buy something at a restaurant. [The process] embeds all the inefficiencies in the supply chain, and a quantum computer will make things better. It won’t solve [all] combinatorial optimization problems, but if we start improving them a little bit, we will reap the benefits all around us. These are examples of how a quantum computer will do things that can’t be done classically.

Mark Brumley: The seeming benefits are mind-boggling. It almost becomes like watching an episode of Star Trek. Of course, there’s a lot of work. It’s not just a matter of developing quantum computers, and then tomorrow all the solutions come. There’s still a lot of work between the computer technology presenting solutions and actually developing the solutions. But the prospects are exciting.

Ilyas: Yes.

Mark Brumley: Let’s try to bring together these two elements of the conversation. We do hear a bit about quantum AI. I’m curious what you make of that idea.

Ilyas: You know, quantum is very sexy, and AI is even more so. So we lump them together. This is where I would encourage people to think of it as follows. Whenever we’ve come across new technologies, we think we know how we will use them, but we end up not actually knowing until they arrive. And then suddenly we have a myriad of applications that we didn’t even think of. Remember when the mobile phone was a method of calling our moms and telling them that we were okay? Look at everything that we do using that technology today. Quantum computers will be used in so many ways we can’t anticipate. The current implementation of AI is first generation—a bit like the big clunky phone of 1989, with limited networks and batteries that lasted about 4 seconds! It’s taken nearly 40 years to get to today’s phones. But the impact of quantum computing on AI, I think, will manifest itself much more quickly. The next generation of AI, and the generation after that—let’s just talk about 10 years, not 40 years—will be as different as today’s smartphones are from the early ones. That is what people mean when they talk about quantum AI. “Quantum” here refers to a broad spectrum of things: from quantum technologies to security, all the way through to encryption.

AI relies upon being able to process language and respond to human beings in ways that are more and more accurate, empathetic, and, shall we say, insightful. The big thing for me is interpretability. The current language models are not interpretable. They’re not accountable. We have to work very hard before we get to the point where we are comfortable using them in large, regulated industries where the stakes are high, like medicine or banking or law.

So that’s the way I would describe quantum AI. It’s still going to be AI, but it will be powered by quantum technologies in ways that enhance the immediacy of both accuracy and accountability, as well as making it more widespread.

Mark Brumley: Since we’re talking about enhancement, is it the case that quantum AI doesn’t pose any kind of radical amplification of whatever concerns or issues exist with artificial intelligence? Is quantum AI just a kind of extension of the technology of quantum computing to AI?

Ilyas: Well, when you phrase it like that, it makes me pause. Quite often, things that are enhanced can be unrecognizable. We look at photographs of people who suffer from being overweight, and then we see a photograph a year later and they’re unrecognizable. Enhancements can be transformational, and I do think the next generation of AI will be transformational, and some of the dangers that we’ve been speaking about, of course, will then become much more immediate. When I say transformational, I mean more powerful. So yes, I do think quantum technologies will make AI more powerful, if that’s the shorthand for what I said earlier.

Mark Brumley: Some scientists, philosophers, and others see artificial intelligence as posing an existential challenge, and not just in the sense that it can cause chaos in the economy, or that we might create a supercomputer that wants to rule the world. They speak of an “existential threat” in the sense that AI forces us to think deeply about humanity’s place in the universe. Some thinkers see AI as affirming human dignity, affirming human personhood in theological terms as reflecting the creativity God employed in making us. That creativity, on this view, is now reflected in man’s ability to create artificial intelligence. Such people may see risks, too, as there are in all kinds of technological innovations if they’re abused. But they’re not fundamentally concerned about what these technologies mean for how we regard human beings. On the other hand, there are those who say that AI will never be able to do all the things human beings can do. It will never have the deep understanding or profound creativity or the ability to exercise true freedom. Therefore, we shouldn’t talk about AI in glowing terms. When we do, according to this perspective, we risk undermining human dignity and the sense that human beings are special. The more AI gets pushed, they say, the more human beings will be considered simply biological machines rather than persons.

What do you think about all of that?

Ilyas: We all struggle, and I struggle just like everybody else. We have new information. We have new experiences that make us question what we may have thought yesterday or last year. So this is complicated and fast-changing, but I think it’s actually right for us to embrace the uncertainty. This is part of being human. My view is, at the moment, an optimistic one. I base it on two things. First, whenever I talk to anybody who is explaining why AI is humanlike, they end up describing the impact on humans, which is what you just did as well. So an android will do things better. It’ll answer questions. A robot will do this and that, and we will feel happy or we will feel sad. We will feel confident that we might even end up having relationships with these things.

But what about the thing itself? Is it having all of these feelings? Is it responding to emotion? No, it is being robotic. It’s what we want. I’d liken this to a really well-made lemonade. When I have a really well-made lemonade, it’s fantastic! I am very, very happy. I could endow that lemonade with many, many qualities, but it is lemonade. AI is a profound tool. It’s a puzzling tool. But it’s a tool. So that’s one half of my answer.

And the second half is that I don’t think there’s anything we ever confront which is too difficult for us. I believe that our God gave everybody the capacity to deal with anything given to us. There are things that are horrific. You and I have lived privileged lives, but there are human beings who suffer incredible adversity and trials, and we are equipped to deal with this. So the second part of my answer is: we are equipped also to deal with what is ahead of us. Right? Like you, I’m confused sometimes about these challenges, and I’m confused about exactly where, on the spectrum of intelligence, this AI thing might exist, but I always find a good home in those two things: lemonade and God.

Mark Brumley: Well, Ilyas Khan, thank you very much. I appreciate your taking the time to talk about artificial intelligence, lemonade, and God!

(Note: This interview has been edited for length and clarity.)




Mark Brumley is president and CEO of Ignatius Press.
