
287: Artificial Intelligence, the Robot Revolution and the New World Order!


Buck: Welcome back to the show, everyone. Today my guest on Wealth Formula Podcast is Martin Ford. Martin is a futurist, a New York Times bestselling author, a speaker, and a Silicon Valley entrepreneur. He is known as one of the leading experts on the robot revolution, artificial intelligence, job automation, and the impact of accelerating technology on workplaces, the economy, and society. He's also a prolific author: The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future (2009); Rise of the Robots: Technology and the Threat of a Jobless Future, for which he won the 2015 Financial Times and McKinsey Business Book of the Year award; and the latest book, Rule of the Robots: How Artificial Intelligence Will Transform Everything. Martin, thanks for coming on the show. 

Martin: Thanks for having me. 

Buck: So I think we talked a little bit before we started about the nature of our group. A lot of us tend to be sort of math/science people, but we've maybe gotten more into the biological sciences and things like that. But this is a really interesting concept that I think a lot of us are hearing about on the periphery, and you are sort of in the middle of it, which is AI. So why don't we start with a really basic question? How do you define artificial intelligence? 

Martin: Well, I simply define it as when a machine begins to replicate what we would think of as human intelligence, which is the ability to solve problems and make predictions and optimize things and so forth. These types of problems that previously have been really the province of the human mind are increasingly being taken over by machines. And in some ways, the machines are better, at least at many specific things than human beings are. And I think that that’s just going to be an enormously important tool for us going forward. That’s going to be really disruptive across the board. 

Buck: So to a certain degree, we already have some AI, but I would think that you and other people in your space would say that we're really just at the beginning. Can you talk a little bit about some of the applications of AI now and where you see it going? Maybe in the next five to ten years, or 20 years? 

Martin: Yes. The core thesis of my book is that artificial intelligence is really becoming a systemic technology, almost like a utility. It's going to evolve to be, in some ways, almost like electricity, in the sense that it's going to be everywhere, it's going to touch everything. It's going to be this kind of universal tool that brings machine intelligence to bear on virtually any problem. And we're already seeing that happen. So the short answer is that it's going to impact everything. I do think that it's going to be particularly important in science and in medicine, and that's one of my great hopes for artificial intelligence. I'm a proponent of the technology. This latest book does talk a lot about the dangers and the risks associated with it, but I think that it's going to be a technology that's absolutely indispensable to us if we're going to overcome the challenges that we are going to face in the coming decades. And the primary benefit, the most important thing, is that I think it will jumpstart innovation across the board, because it will, in essence, amplify our creativity, our ability to innovate, to generate new ideas. This is going to be enormously important. So in areas like science and medicine, you're already seeing early applications of this technology. AI is already being used as an important tool in drug discovery. For example, as your listeners know, a drug molecule has a geometric shape to it, right? And it's that geometric shape that fundamentally defines the function of a particular drug or chemical or molecule. And so it's possible to use a form of search to look for molecules based on shapes and other characteristics, and AI is particularly well suited to that. So there have already been examples of drugs, including a new antibiotic, for example, that were discovered using this kind of machine learning approach. 
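The kind of shape-and-characteristics search Martin describes can be sketched in miniature. This is a hypothetical illustration, not any real drug-discovery system: the "fingerprints" below are invented sets of structural feature bits, whereas real pipelines derive them from 2D/3D molecular structure, and the molecule names and data are made up.

```python
# Toy sketch of fingerprint-based molecular similarity search.
# Each molecule is represented as a set of feature-bit indices (invented data);
# candidates are ranked by Tanimoto similarity to a query profile.

def tanimoto(a: set, b: set) -> float:
    """Tanimoto similarity between two sets of structural feature bits."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical "library": molecule name -> set of feature bits
library = {
    "mol_A": {1, 4, 7, 9, 12},
    "mol_B": {1, 4, 8, 13},
    "mol_C": {2, 3, 5, 6},
}

# Hypothetical query profile describing the shape/features we want
query = {1, 4, 7, 13}

# Rank library molecules by similarity to the query
ranked = sorted(library.items(),
                key=lambda kv: tanimoto(query, kv[1]),
                reverse=True)
for name, bits in ranked:
    print(name, round(tanimoto(query, bits), 3))
```

In a real system the ranking step would typically be a learned model rather than a fixed similarity metric, but the structure of the search, encode each molecule, score against a target profile, rank, is the same.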

Buck: Did they use it much during the vaccine development? 

Martin: I think they may have used it to some extent. I don't think it played a critical role in that; certainly computer technology did. And they were, as you know, able to develop vaccine candidates literally within days, in some cases, of first getting the genetic sequence that was uploaded from China. So I think it's an important tool in many areas. It's also being used in areas like planning phase two and three trials, which also require a lot of data, so there are applications there. So I think that, certainly in terms of the next pandemic, there are going to be important applications for this technology. One of the most important breakthroughs that we've seen, maybe the most important application of advanced artificial intelligence so far, is what DeepMind recently announced with its AlphaFold application. Again, as your listeners know, the protein folding problem has been an enormous problem in science, right? For at least 50 years, scientists have been working on trying to figure out how you can take the genetic sequence for a protein molecule and then, based on that, predict the geometric configuration the protein will fold into, which happens within a tiny fraction of a second after the protein molecule is fabricated in the cell. Again, it's that shape that basically determines the function of the molecule. And so what DeepMind did was create this AlphaFold system that can do that with very high fidelity, in a way that essentially matches what is done with laboratory techniques that are expensive and much more time consuming. 

Buck: What problem does that solve, actually? I'm not entirely sure what issues figuring out protein folding addresses. How does that help? 

Martin: Well, again, it's basically understanding the shape of protein molecules, which determines their function, the ways they can be used, and their potential interactions with drugs. So this is also a tool for drug discovery. Again, there are techniques that already allow us to figure that out, but they're very expensive and they take a long time. So what that means is that only a relatively small number of the protein molecules that are important in biology have had their shapes determined. DeepMind is actually now in the process of using this new technology to essentially create a complete database, a library of all these protein molecules and their molecular shapes, which is the first time that's been possible. So this is a truly important breakthrough, which is going to have vastly important ramifications for science, medicine, biochemistry, and so forth. That's maybe the most concrete example so far of a really critical breakthrough that's going to be disruptive. But the same technologies are being used in other areas, not just in drug discovery but in the discovery of new materials. I give many examples of this in the book Rule of the Robots. I think that AI is also going to be a critically important tool for assimilating information, for accessing scientific information from different sources, for example, scientific papers and textbooks and clinical studies and experiments, bringing that all together and finding the connections that might not be obvious to a person, because a person would never be able to read all of this material. This is something, for example, IBM Watson has been working on for a while with somewhat mixed success. But there are many initiatives in this area, and I think this is one of the most important applications of AI to scientific research going forward. 
And then in the general area of medicine, we're already seeing many applications in diagnosis, especially in radiology, where you're looking at the ability to scan images. There have already been systems shown to at least match, and maybe in some cases exceed, the capability of radiologists in specific instances, when it comes to looking at a mammogram, for example, and trying to figure out: is there cancer there? So I think what we're on a path to is maybe not so much technology displacing doctors. I don't think that will likely happen for a very long time. But I do think we're very firmly on a course to where artificial intelligence is going to become an important second opinion, a resource that doctors will use to get a second take on their judgment. And that's going to, I think, ultimately result in a democratization of really the best medical expertise out there. It will be almost as though whatever doctor you happen to be seeing, and it may not be the most talented doctor available, that doctor will have access to a resource that brings the best available judgment and experience to bear on almost every problem. So that means that we'll all have access to, in a sense, the best world-class medical knowledge and expertise. 
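The "second opinion" workflow Martin describes can be sketched as a simple triage rule: compare the model's read against the clinician's, and flag disagreements for review. Everything here is an invented illustration; the probability scores, the threshold, and the cases are all made up, and a real system's model and calibration would be far more involved.

```python
# Toy sketch of an AI-second-opinion triage rule for an imaging read.
# The model emits a probability; if its call disagrees with the
# clinician's read, the case is escalated for human review.

def second_opinion(clinician_says_cancer: bool, model_prob: float,
                   threshold: float = 0.5) -> str:
    """Return 'agree' when model and clinician concur, else flag the case."""
    model_says_cancer = model_prob >= threshold
    if model_says_cancer == clinician_says_cancer:
        return "agree"
    return "flag for review"

# Hypothetical cases: (clinician's read, model probability of cancer)
cases = [
    (True, 0.92),   # both suspect cancer
    (False, 0.85),  # model disagrees with a benign read -> review
    (False, 0.10),  # both read as benign
]
for clinician, prob in cases:
    print(second_opinion(clinician, prob))
```

The point of the design is that the model never overrules the doctor; it only surfaces the cases where a second look is most valuable.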

Buck: I've been reading a little bit about this in the context of medicine, especially when it comes to various treatments and cancer and that sort of thing. And I know there are groups already looking into this concept of examining what kind of chemotherapeutics are treating an individual's cancer, as opposed to necessarily following what we're taught in medical school, which is that a certain type of cancer will react to a certain type of chemotherapeutic. But what that makes me think about is really a shift in medicine from what works for most people to what works for the individual person. And I'm just curious what you know about in that space when it comes to the application to personalized medicine. Are you seeing any technology or companies currently working on anything using AI there? 

Martin: Yeah, definitely. I couldn't give you names of companies, but I think that's clearly happening with many of the drug companies. I think that's generally considered to be the next frontier. From talking to some of the people running the startup companies doing drug discovery, what they say is that a lot of the low-hanging fruit, in terms of drugs that are really widely applicable and really effective, has probably already been harvested, right? So the next frontier in terms of drug discovery and new drugs is likely to be more specialized applications that can help a smaller number of people. But of course, that is going to require manipulating an enormous amount of data. And that's one of the truths of artificial intelligence across the board. You're seeing this in many different ways. You see it in agriculture, where you now have systems that, instead of spraying a whole field with fertilizer, can actually focus on individual plants, analyze each plant, and figure out the status of that plant. Does it need more water? Does it need more fertilizer? That's sort of a general theme with AI: it makes that possible because you've got machines that can look at incomprehensible amounts of data with infinite patience, in a way no human being could. So whether you're talking about customizing applications for medicine or in other areas, that's one of the primary benefits of the technology. There are a lot of smart people, Elon Musk, Bill Gates, who warn about AI and talk about its potential pitfalls and dangers. 
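The per-plant analysis Martin mentions can be illustrated with a toy decision rule that inspects each plant's readings and decides what it needs. The sensor readings and thresholds below are invented for the sketch; a real precision-agriculture system would derive its decisions from imaging and soil data with learned models.

```python
# Toy sketch of per-plant precision agriculture: instead of treating a
# whole field uniformly, each plant gets its own decision based on its
# own (here, invented) sensor readings.

def plant_actions(moisture: float, nitrogen: float) -> list[str]:
    """Decide what an individual plant needs from its readings."""
    actions = []
    if moisture < 0.3:    # soil moisture fraction below the assumed dry threshold
        actions.append("water")
    if nitrogen < 20.0:   # nitrogen (ppm) below the assumed minimum
        actions.append("fertilize")
    return actions or ["ok"]

# Hypothetical field: plant id -> (soil moisture fraction, nitrogen ppm)
field = {
    "plant_1": (0.25, 35.0),
    "plant_2": (0.50, 12.0),
    "plant_3": (0.45, 28.0),
}
for plant, (moisture, nitrogen) in field.items():
    print(plant, plant_actions(moisture, nitrogen))
```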

Buck: What's your perspective on that? Are we in store for a moment in technology where the disruption turns negative because of AI? Is the worry the intelligence itself, almost like out of a science fiction movie where the AI starts to destroy human beings? What is the reality in terms of what the real pitfalls and dangers are with the technology? 

Martin: Yeah. That's a really crucial thing that I talk a lot about in the book. The important thing to understand is that, first, we can acknowledge there definitely are real risks and dangers that are going to come coupled with artificial intelligence. However, it's also important to realize that there are really two categories of risk. The category that you just alluded to, which has gotten a lot of press with people like Elon Musk talking about it, is what's called existential risk. It's the concern that someday we might have a superintelligent machine, a machine that is smarter than the smartest human being. Perhaps this machine would be so smart that the gap would be like the difference between a person and a mouse in terms of levels of intelligence. And there is a legitimate concern that at that point we would lose control of this technology. Not so much that the machine would become overtly malicious, like in the Terminator movies, but rather that we might have this incredibly intelligent resource, set it on a path to accomplish some goal, and then it would act in ways that we don't anticipate, since, after all, it's much smarter than us. And some of the ramifications of that could cause harm to us. We would essentially not be able to stop this process.

Buck: In that situation with artificial intelligence, there is the concern that it would have decision making capacity, and therefore that’s what would make it dangerous. I’m just curious generally, to sort of drill down on that fear, especially when it’s coming from these guys who obviously are thinking about it pretty deeply?

Martin: Right. Yes. It definitely would have decision-making ability, and it would have the ability to impact the world, to do things, to take actions. It might have been set forth to accomplish some specific goal, but then it might take actions that we don't anticipate and didn't intend, which could potentially cause us harm. And that's what people are worrying about. If we truly lost control of a system like that, it could represent an existential threat. It could potentially wipe us out. This is a cartoonish example, but we could tell it to eliminate all cancer, and it could decide that one way to get rid of cancer is to kill everybody, right? Something like that. That's kind of a ridiculous example, but a much more subtle, sophisticated version of that kind of thing is what's known as the control problem or the alignment problem. And that's what Elon Musk is worrying about when he says that artificial intelligence is more dangerous than nuclear weapons or more dangerous than North Korea, and all this stuff. What I would say is that, yes, that is a legitimate concern, but it also is, at a minimum, decades away, maybe more than half a century away, maybe even 100 years away. So it is something to be concerned with, but it's certainly not an immediate concern. I put that in a separate category of risk. I think it's good that some people are worried about this, and there are some very smart people working on it. The most prominent is Nick Bostrom at Oxford University. He wrote a book called Superintelligence, and he's got a group there that's working on this, and there are some other small think tanks concerned with this problem. And I think that's a good thing. 
However, I wouldn't want to blow that up beyond that and have it consume our attention, because there is another category of risk and danger associated with artificial intelligence, and these are things that are absolutely on top of us. These are things that are beginning to manifest now and are definitely going to unfold over the coming years and the coming decade. Unlike the first category, this is not about the machines waking up, becoming intelligent of their own volition, and doing something that harms us. This is primarily about other people using artificial intelligence in ways that are going to be disruptive. Some of the categories here are threats to security: using artificial intelligence to attack our infrastructure, to attack computer systems, or to create what are known as deepfakes, for example, which are very high-fidelity fabrications of video or audio. And this could be used to create chaos. I give one fictional example in the book where it's used to attack a politician, to literally put words into the mouth of a politician to make them say something self-destructive. You could do that on election day, basically, or the day before an election, and create chaos that way. We've all seen the power of viral videos in terms of generating social protest and social unrest; someone could fabricate a video that caused that to happen. So these are all dangers. Another thing that people really worry about is the potential weaponization of artificial intelligence. I think it's very likely that we soon will have fully autonomous weapons. In other words, for example, drones that, rather than being controlled by a person, are fully autonomous and have the ability to target someone and attack. 
One of the people I actually interviewed in the course of this book is a guy named Stuart Russell, who's a professor at UC Berkeley, one of the top people working in AI. And he created a YouTube video, which you can watch, called Slaughterbots, and it lays out in a very graphic way the potential for fully autonomous weapons and how dangerous they could be. The issue there is that they might initially be developed by the military. You might have hundreds or thousands of swarming drones that could attack people, for example. But these are weapons that could easily fall into the hands of terrorists and could be deployed in a very terrifying way, and it'd be very hard to defend against that. So it's something to be really concerned with. There are other risks and dangers that are already developing. Of course, there have been known cases of bias in artificial intelligence systems: racial bias in things like facial recognition and some other algorithms, even algorithms used in the court system, for example, to determine whether someone should be released on bail and so forth. Some of these have been shown to be racially biased. There's been gender bias as well, for example, in resume screening systems that are used to determine whether someone should get an interview once a resume comes in. So that's another area of real concern, and companies are actively working on addressing those issues. And of course, the other issue that I've talked a lot about, and that my previous books were primarily focused on, is the potential impact on jobs: the fact that a lot of jobs, especially jobs that are more routine and repetitive, are going to be heavily impacted by this. I think that is going to be a driver of even more inequality in our society. A lot of jobs that now provide a solid income are likely to be impacted. 
And this will include not just blue collar jobs, not just working in an Amazon warehouse or a fast food restaurant. Those areas certainly will be impacted. But it will also include a great number of white collar jobs. If you’ve got a job sitting in front of a laptop, doing some relatively routine manipulation of information, cranking out the same report again and again or doing the same type of analysis, all of that is going to be heavily susceptible to software automation going forward. And I think that could potentially be very disruptive. So there are a number of concerns associated with the deployment of AI that we really need to focus on. We’re going to need, I think, some policies and regulation in order to address those. 

Buck: Yeah. And the challenge with that, it seems to me, is even with policies and regulations, we can only control what happens in this country. And so along those lines, I’m curious, where is the US compared to, say, China when it comes to AI technology? Are we sort of ahead of the pack right now? Are we not giving this enough attention in this country or how do you view that? 

Martin: I would say that at the moment, certainly, the United States and the West more generally are ahead. But absolutely, China is putting enormous resources into this, is catching up, and could well surpass us. And this is a real concern. Again, this is something that I really focus on in Rule of the Robots; I've got a whole chapter on it. This is definitely one of the greatest concerns, because China, first of all, is using this technology in a very dystopian way, both in their own country and in terms of how they're exporting it. They're building facial recognition systems; they've essentially created an Orwellian sort of Big Brother society. And of course, it's been especially focused on the Uighurs, the ethnic group in Western China that has really been subject to a lot of oppression. But they're deploying the technology across China, and they're also exporting it to other, more authoritarian regimes, especially in Middle East countries like Saudi Arabia and the UAE. They're already using these technologies, and it's beginning to encroach on the West as well. So the issue of privacy versus surveillance is something that we in the West are all going to have to make trade-offs on, in different countries; we really need to look at that. Definitely these kinds of technologies do provide benefits in terms of lowering crime and preventing terrorism and things like this, but there is a real trade-off in terms of privacy. So that's something we need to focus on and make sure we have a public discussion about. And the other thing, the potential for a race with China in this technology, is really something that we need to take very seriously. We should be putting more resources into it. It is not just a commercial race; these technologies clearly have applications in the military arena, in the security arena. 
So the extent to which China becomes a leader in AI and maybe surpasses us is a real concern in those areas, and they do have a number of advantages. China has a population that's roughly four times that of the United States. They've got an enormous number of very smart, talented, motivated engineers and computer scientists, young people that are extraordinarily motivated to learn about this technology and contribute to it. And partly as a result of that, you see a wide range of startup companies over there that are really at the forefront, especially in areas like facial recognition. So they're really pushing the envelope. And the Chinese government is putting enormous resources into this. They've got an explicit plan to basically make China the world leader in AI by 2030. Xi Jinping is personally involved in this. He gave a talk at one point, and people saw books about artificial intelligence on the bookshelf behind him as he was speaking. So he's pretty engaged with this. It's a real competition, and again, it's got real applications to military and security. And in China, they actually have it written into their constitution that Chinese companies are required to cooperate with the military over there; they call that military-civil fusion. Whereas over here, that's not always the case. We've seen a number of instances where employees at companies like Google have revolted because the company took a contract with the Pentagon. Obviously, they have the right to do that; this is a free society. But I do think we need to worry about the asymmetry there. China has got all these top-of-the-line tech companies that are absolutely required to work with their government, and we've got a tech industry over here which is much more ambivalent about that. So we need to worry about an imbalance there, because we really and truly cannot afford to fall behind in this technology. 
I mean, a world where China is vastly more influential and maybe outstrips us in terms of this technology is not going to be a good world for anyone. I don’t think so. This is a real concern, and we definitely do need to be putting more resources into this challenge. And we probably need to have a more concrete plan in terms of what the government is doing and more collaboration with industry here. 

Buck: You alluded to this already with jobs, changes in both blue-collar and white-collar jobs because of AI. Have you thought about the overall impact? I guess the futurist side of you, looking at the impact on the US economy or the global economy. The way we do things right now, with all these imports, if we could do them all with robots, I'm just thinking out loud, it seems like you're looking at a very different global dynamic. 

Martin: Yeah. I think it's going to be very disruptive. I truly believe that artificial intelligence is going to be one of the most important forces that shapes our future. I think it's arguably our most consequential technology in terms of its impact on the economy. There are going to be two sides to it. Obviously, AI is going to bring enormous benefits. It's going to make industries more efficient. It's going to make it possible to produce goods and services more inexpensively, and that's going to mean lower costs. It's going to make all the things that people need to thrive, whether material things like food or things like education, more available at lower cost. And AI is also going to make it possible to create entirely new products and services. We were talking earlier about drug discovery, right? So you're going to have breakthroughs and new medicines and so forth. That's the positive side of it. The side I would worry about is the impact on employment, on inequality, and on the distribution of income. Right now, jobs are the primary mechanism that gets money into the hands of the people in a society, the consumers, so that they can then go out and create the demand that drives the economy. You have to have people that have the income and the confidence to purchase the products and services being produced by the economy. If you don't have that, you're not going to have vibrant economic growth. So it's that imbalance that I really worry about. And it's going to be driven largely by the fact that I think any kind of work that is fundamentally routine, repetitive, and predictable is going to be at risk of being automated at some point over roughly the next decade or two. 

Buck: Certainly something, maybe, I guess, on a larger scale than the industrial revolution, right? I mean, going from not having farm equipment to having farm equipment, that kind of change. 

Martin: Yeah, exactly. And that's a very good example that people bring up. Many people will bring up the mechanization of agriculture as a pushback against the idea that we're going to have a problem, because, of course, that happened, and yet we don't have people permanently unemployed. So what happened is: if you go back to, say, the late 1800s in the United States, at least half of the workforce was engaged in farming. They were working on farms. Then tractors and combine harvesters and all this equipment came along, and those jobs disappeared. You did have short-term unemployment and disruption, but over the longer term, this new sector, manufacturing, appeared and absorbed all those workers. So those people moved from farms, basically, to factories. You can imagine a worker that was once on a farm doing some kind of routine work; maybe by 1950, that worker is standing on an assembly line, still doing routine work. And then something similar happened to the manufacturing sector. Manufacturing has also been heavily automated, and of course, it's also been offshored; it's gone to China and other countries. As a result, a much lower fraction of our workforce is now engaged in manufacturing. In agriculture, where it was once half, it's now less than 2% of the workforce; in manufacturing, where it was once at least a third, it's now less than 10% of the workforce. What that means is that all those people are now working essentially in the service sector, which is what dominates our economy. So what you've seen is that this technological impact, increased automation, has happened a sector at a time: first agriculture, then manufacturing, and now everyone's working in the service sector. 
But what you're going to see this time is that, as I've been saying, artificial intelligence is a general-purpose technology, a systemic technology. It's going to be like electricity; it's going to scale across everything. It's continuing to automate agriculture: even that last 2% of jobs is under threat because of robots and machines being used in agriculture. It's going to continue to force down employment in manufacturing as factories begin to approach full automation. But most importantly, artificial intelligence is going to begin to scale across the service sector, bringing automation to this last remaining sector as well. What's happening this time is that any kind of job in any of those sectors that is fundamentally routine and repetitive, where you're doing basically the same sort of thing or facing the same kinds of problems again and again, is going to be susceptible to automation across the board. And this time, I can't imagine that there's going to be some new sector that arises, in the way that manufacturing and later services did, to absorb all the tens of millions of workers that are potentially going to be displaced. There's not some labor-intensive new sector that's going to arise for people to do things that are largely routine and repetitive, because we now have a technology that is going to essentially vaporize all of that. So I think the problem we face going forward is this: whereas in the past people adjusted to technological impacts by switching sectors, going from agriculture to manufacturing to services, but still doing something that was more or less routine, in the future they're going to have to adapt by figuring out how to do something that's fundamentally non-routine, maybe something that's creative or something that really involves building relationships with people, for example. 
A lot of people will maybe successfully make those transitions, assuming that those jobs, the ones we still can't automate, are sufficient in number. But I think there's a real risk of a lot of people being left behind. Not everyone has the talent to do creative work. Not everyone has the personality traits to do relationship-building work with people, even something like elder care, where you have a very personal relationship with an older person and you're helping them with their personal needs. That's a job that we're definitely going to need a lot more people to do. But it's not clear that you can necessarily take a truck driver and have that person do that job, right? There are differences in talents and personalities and so forth. So I do think there's a real risk of a lot of people essentially being left out, and that will drive increased inequality.

Buck: Being in Silicon Valley as you are, in our ecosystem we’re generally real estate investors, but it’s always interesting to me to kind of look at where the big money investors in Silicon Valley, where the tech money, is going. Are there specific areas of AI that seem to be attracting the attention of the larger venture groups and so on, or is it just across the board in AI? 

Martin: It’s definitely across the board. But to a couple of the areas we mentioned previously, certainly there are many, many startups out there using artificial intelligence for drug discovery or some sort of general biomedical type application. This is an area that’s gotten a lot of venture capital. Another area that I think is going to be incredibly important is in robotics, building more dexterous robots. We’ve got a number of startups that are trying to build robots that begin to approach human dexterity, companies like Vicarious, for example, out here. They’ve gotten venture capital from not just the VC firms but some of the top people in the industry. People like Jeff Bezos, for example, have invested in these startups. And the motivation behind that, of course, is to bring more automation to environments like Amazon warehouses. If you look today, Amazon warehouses have been a bright spot for employment, right? They hire a lot of people, thousands of people, and they’re hiring more. But if you go inside one of those warehouses, what you find is that they’ve also got hundreds of thousands of robots across Amazon. The environment right now includes a huge number of robots but also lots of people, and the people are all there doing the things that the robots can’t yet do. And for the most part, in environments like warehouses, that involves doing what’s called stowing and picking products. In other words, putting products on inventory shelves and then retrieving items from inventory shelves to fulfill orders. Because that requires human-level dexterity, hand-eye coordination, spatial recognition, this kind of thing, robots can’t yet do that. But that’s exactly the challenge that these startup companies are working on. Bezos actually said a couple of years ago that he thought within ten years they would have a robotic hand that could approximate human dexterity in terms of its ability to grasp items. 
And once that occurs, those hundreds of thousands of jobs in Amazon warehouses are going to be at high risk, because that’s what the people are doing there. And, of course, it’s not just going to be Amazon warehouses. It’s going to be fast food restaurants, retail stores, many, many other environments. And I think, actually, the other thing we see is that to some extent the pandemic has accelerated this process, for a couple of reasons. Of course, now we have this new focus on social distancing. We don’t want too many workers crowded into an environment, and if you can use robots, that helps to address that. There’s also an increased emphasis on hygiene in terms of food preparation, things like this. So if you can have a robot doing that and a person never contacts the food, that could actually be a marketing advantage. There’s definitely been a shift in consumer preferences. The way we all view human contact and things like this has definitely changed as a result of this pandemic, and I think to some extent those preferences are going to be more permanent. And the other thing that’s happened is that right now, partly as kind of an overhang from the pandemic, in many areas there’s actually a worker shortage, especially in the lower wage restaurant industry and so forth, where they’re struggling to hire people. And there are definitely examples of restaurants and large chains that are investing in these technologies exactly for that reason. So that’s going to push it forward even faster. 

Buck: So one last topic. I have, like, a million questions for you that pop to mind, but I don’t want to keep you all day, obviously. So one last question I want to ask you about is the interplay between what’s happening with AI and what’s happening with blockchain. Is there some intersection there that’s important for us to notice, or is it completely different technology? It is different technology, but is there really no relationship between them? It makes me think, obviously in terms of cybersecurity and things like that, are some of these blockchains as secure as we think they’re going to be if there’s AI? Anyway, just your thoughts on blockchain and AI. 

Martin: Well, I definitely think there’s a relationship there, and many of the people that are working on blockchain and cryptocurrencies and so forth are definitely leveraging AI and looking for ways to leverage it, absolutely in the security arena. More generally, AI is going to be absolutely critical to security, and there are essentially going to be two sides to that. It’s going to be very much like the kind of arms race you see with computer viruses, right, where you’ve got the black hat people creating the viruses, and then you’ve got the white hat people at companies like Symantec trying to respond to that. And you’re absolutely going to see the same thing with artificial intelligence. You’re going to see what are called adversarial attacks on systems. You’re going to see it deployed to attack infrastructure. And the only way to potentially defend against that is going to be to use artificial intelligence as a tool to promote security. So it’s going to be just an incredibly important technology. There is not a single arena where it’s not going to be applied. And certainly for anyone running any kind of business, to leave artificial intelligence off the table and say, I’m not going to adopt this technology, would be almost tantamount to disconnecting from the electrical grid. It would be malpractice at that level, right? It would be the kind of thing that would ultimately simply leave you behind. So this is just going to be an incredibly consequential technology that literally is going to touch everything, every industry sector, every aspect of our lives. So no one can afford not to be familiar with this technology and its implications. 

Buck: Again, the latest book is Rule of the Robots: How Artificial Intelligence Will Transform Everything. This sounds like a book that we should all read and get acquainted with in a hurry. Martin, your website is mfordfuture.com. Tell us what you do there. 

Martin: It’s essentially a website with links to lots of articles that I’ve written and reviews of my books and things like that. I also have a Twitter account, also mfordfuture, and on that I tweet pretty much every day, often with links to articles about artificial intelligence and how it’s progressing and the latest news and so forth. So that may be an interesting resource if you want to keep up with this technology as we move forward. 

Buck: Fascinating. Thank you so much for your time today. I appreciate you coming on, and maybe we’ll have you back sometime in the years to come. 

Martin: Sure, that’d be great. Thanks a lot for having me. 

Buck: We’ll be right back.