What Happens Next in 6 Minutes with Larry Bernstein
AI and Robotics

Speaker: Myron Scholes


Myron Scholes

Subject: AI and Robotics
Bio: Frank E. Buck Professor of Finance, Emeritus at Stanford Graduate School of Business. Winner of the Nobel Prize in Economics for groundbreaking work in Options Theory.

Transcript:

Larry Bernstein:

Welcome to What Happens Next. My name is Larry Bernstein. What Happens Next is a podcast which covers economics, politics, and culture.

Today’s topic is AI and Robotics.

Our speaker is Myron Scholes who is the Frank E. Buck Professor of Finance, Emeritus at Stanford Graduate School of Business. Myron was awarded the Nobel Prize in Economics for his groundbreaking work in Options Theory. I want to learn from Myron about why AI needs to work with humans due to uncertainty and the importance of exceptions.

Myron is an investor and on the board of a robotics company that focuses on installing solar panels in the desert where laborers don’t want to work. I want to hear about the challenges that robots have in these kinds of assignments and how AI can improve their performance.

Myron, can you please begin with six minutes of opening remarks.

Myron Scholes:

What is AI good at and what can't AI handle? It can summarize the past and make inferences based on past data. It will not be able to handle the unanticipated or be creative or innovative.

AI will help innovators to produce solutions faster. AI will diffuse through the economy. But AI will not produce massive changes all at once.

Induction precedes deduction. AI systems gather data. They observe spatial relations. They are the inductive mechanism. Under uncertainty, induction must precede deduction. AI will gather data, and that data will be used to make deductions. A creative person knows when to stop and can create from the data.

The creative person sees the exceptions. AI will need help to handle the exceptions. There are unknown ones that are not in the data set. For AI to operate, it needs data from which to provide an answer. It needs to know the boundary of its computation.

AI will reduce the cost of answers about the middle of the distribution, the average. AI systems must be built to handle the exceptions. Creative people see the exceptions and concentrate on them, not the middle of the distribution. In a creative environment, they are crucial.

We have chaos or disorder. Although AI will be able to handle massive amounts of history and cross-sectional spatial activities such as depth and height, it won't be able to handle uncertainty and changing uncertainty. Humans have the skills to understand and analyze exceptions.

We can teach AI. We will train it to handle those exceptions, but there'll always be new exceptions.

To summarize: under uncertainty, we model the world around us. If we were certain, we wouldn't need a model; it would be a fact. Under uncertainty, models are an incomplete description of reality.

AI can help us reduce the error of our models by allowing us to access more data quickly, suited to analyzing our problem, and then we can gather additional data. AI can help us increase the dimensionality of our model, making it richer. But AI won't be able to tell us whether to ignore the error of a new result or to incorporate the new information from the error.

AI must worry about data mining, that is, using past data to build a model. The model might fail in the future because it was built only on data from the past, which is an incomplete description of reality.

Time is really volatility time. With low volatility levels or low levels of uncertainty, we have more time to adjust our models and gather new information. However, time compresses the more volatility we have. We have to make decisions more quickly. And sometimes with extreme volatility, nothing works. The model fails and when the model fails, time stops.

Errors are not normally distributed. The exceptions are in the tails, and changing circumstances are the most important considerations. We as humans can handle those more important considerations after analysis and time to digest them. AI will fail here, and so we have to train AI to understand the exceptions to the model it is using and direct it on a different course. The fastest way to evolve AI systems is to let them discover what they do not know.

Humans are creative. They create through the exceptions. Einstein had his physics books, and he read them in his patent office, just as AI must read the physics books of all past and current scholars. But he saw there was something wrong, something that he needed to address, and he developed a new theory about energy that no one had thought of. So AI, as a partner, will work with us to create a more efficient system.

Larry Bernstein:

I want to apply the model you've given us for AI to the problems that robots face.

Myron Scholes:

I became very interested in robots because of my desire to foster a movement toward decarbonization. So I invested in a robotics company that installs solar panels. Why? Because solar panels need labor, and labor is very expensive in inhospitable areas. Robots can augment labor where labor is very scarce.

Larry Bernstein:

To install a solar panel, there's a stand, and then they have to put up the panel, take some screws, and tighten them. Tell us about the challenges a robot faces doing what would otherwise be a pretty normal human task?

Myron Scholes:

I've learned in interactions with robotic engineers that you can make a general robot that does myriad things, but that's very expensive and inflexible. The process of the future with AI and robotic design is to make the robot idiosyncratic. One robot would bolt the panels in place, another robot would lift the panel off a cart and put it on the stand, and a third robot would put the pole into the ground. The technology is evolving toward making robots idiosyncratic, not general. You need a robot that has skills: it must be able to move itself, move its arms, and have vision to figure out and act on what it sees. It also might have touch and do various things.

If you're installing solar panels, the robots can work at night and in the daytime. I call these country robots. They're not city robots, which are going to be in your house to make your coffee or fold your laundry.

The first step is to design the robot for the task involved, not to design a general robot that can do any task. Many people are building robots without knowing what the robots are going to be used for specifically. The interesting thing in developing a robot is that you need two things: you need a general robot that can do things, but you need one that's trained in domain specificity. What I mean by that is it knows what its tasks are going to be, and you train the robot to do those tasks, because there's no general knowledge.

A foreman is going to train the robot to do the specific task.

Larry Bernstein:

To define some words for the audience: idiosyncratic versus general. Idiosyncratic means a very specific task. General means it can do lots of tasks. For idiosyncratic, let's use an example: a solar panel and a stand, where the task is to fasten the panel to the stand with six bolts.

Myron Scholes:

To amplify your statement: the torque necessary to do that has to be uniform every time. You must teach the robot the torque necessary for that specific task. When human beings install solar panels, they don't apply the same torque every time. The panel flies off because the bolts break later when there isn't enough torque. The beauty of having a robot do it is that the torque is the same every time.

Larry Bernstein:

Putting in a bolt seems easy, but it's not. You've got to find the hole, and then you've got to put the bolt on the screw. But it's very bright; the sun is glaring, and our poor robot is confused by the glare. It thinks the bolt is over here, but it's not. We now have an error term, and we're stuck. We need a combination of AI and humans to get the robot back on track. Take us through an error and how people and AI can solve this problem.

Myron Scholes:

AI can address the database quickly and efficiently to change what it does. But the beauty of having domain-specific robots, such as ones installing solar panels, is that the robot can ask the foreman when it hits an exception. The key to AI is having a large database of repeated tasks, information that other robots supply. That's the database from which the robot will decide how to handle the exceptions it sees. It'll be able to search the database for others' exceptions and how foremen treated them. But once it sees an exception that AI cannot provide the answer for, it'll ask the foreman for advice.

There's time and information that could be used in the AI system to judge the efficacy of the instructions the foreman is giving the robot, because once you accept something into the system, it becomes dogma. AI is this interaction, exception upon exception, learning how to do it. AI can be helpful in that, but human beings have to be involved because they're creative.

Larry Bernstein:

I'm going to repeat that back. Our task is to put the bolt on the screw and fasten the solar panel, and because of the glare, the robot is struggling. The first step is asking the AI program: can you give me some advice? I'm having trouble. Here are the factors I'm seeing, here are the conditions I'm in. What should I do first, second, third, and fourth? And it still is unsuccessful. Then it calls out to the foreman, and he says, I see it. Here's what you're going to do. And then it informs the AI that this was tried and failed, and a human being was able to give it a better result. The AI would be informed of that, so that in the future it would consider it as a potential option.

Many companies are looking at using AI for customer relations over the phone, and the AI will be informed by millions of conversations. A customer wants to open a new brokerage account, and then they ask: is this an individual account or a trust? Oh, it's a grantor trust? Do you know who the trustee is? What are your objectives? In the previous example with robots, we were doing a physical task. At the brokerage firm, we're trying to fill out an account-opening form. Tell us about the interrelationship between humans and these AI programs to achieve any task.

Myron Scholes:

One of the interesting developments in manufacturing is this thing called the digital twin, a simulation of a physical process. The model is put into the simulation and then perturbed, and there's an error relative to the model. In filling out the brokerage form, you have a simulation that's being built. AI will help you build that simulation faster, be able to answer questions about it, and potentially reduce the error of your model, because you're trying to design something that can be applied broadly.

AI will help you code and move you closer to a solution. The richer the database, the more quickly it'll answer questions, because it'll have domain expertise developed from the simulation, now applied directly to a physical system. Everything we do in life goes from our brain thinking about a model to implementing it in various activities. AI speeds us up because we can address large databases and get the average more correct quickly.

Whether it's with the robots or your brokerage accounts, exceptions will always come up. People will be unusual; someone will have a question the system didn't understand or hadn't heard before. Someone has to teach the copilot. And AI can help there, because you can program more efficiently and jointly, having AI work for you to do things faster and give the average solution in the middle of the distribution.

In life there is the exception. The AI system must be trained to say: I don't understand, this is an exception, and not just gloss over it. But right now we're getting these things called hallucinations, which are made-up answers.

What makes human beings so much fun is that we think about the exceptions. How are we going to change what we do because of exceptions, or how do we expand the dimensionality of what we're dealing with? In the brokerage house asking for information, AI will speed up the process and get rid of a lot of the infrastructure, which is the constraint on the system. In reducing the constraints, the fear is that the system won't sufficiently handle the exceptions.

Larry Bernstein:

I had the Dean of Computing at Georgia Tech, Charles Isbell, on the podcast, and I asked him about the interrelationship between humans and AI. He gave me two interesting examples similar to what you just described. One: his wife works at Princeton and he's a professor at Georgia Tech. He drives from Princeton to Atlanta, and he always goes the same way. He takes I-95 and then, around Washington DC, he heads west.

He was worried about traffic. Waze came back with a route that he had never considered. It said he should get on the Pennsylvania Turnpike going west and then go straight down through West Virginia to Atlanta. It would save him five minutes. It was a superior route because he could speed through West Virginia, and it reduced his variance because there's always traffic on I-95.

The second example: you're in Santa Barbara and you want to go to Disneyland. You're driving on PCH, you see traffic being a disaster, and Waze says to take a different route. But you look ahead and see that there's a wildfire, and you think: I don't want to get off and find another route to Disneyland. I want to abandon Disneyland as a goal and come up with a completely different objective. AI can't do that. AI can't recognize the environment and that the problem has changed. It's like Apollo 13: we're no longer going to the moon; our new objective is getting home.

Myron Scholes:

What your example amplifies is the idea that there's history, which is the data we put into the AI system. But the important part is not only the data from the past or cross-sectional information, but what signals you use. What is the signal that tells you something is unusual, an exception? The forest fire is unusual. Your eyes see an exception.

For my robots: if they find an idiosyncratic exception, one specific to a single event, how do I inform the general system that it shouldn't make changes going forward?

You're a good golfer. You have a model, but the model has an error just like intuition has an error. And then over time you garner additional information to reduce the error of your model, which gives you a better approximation to the average of what you should do most of the time.

Occasionally we get a draw from an unusual event. Now, do you ignore it or do you recalibrate your model? Pro golfers understand their skill level, and the dimensionality of their model is much richer. The professional golfer uses much more data and much more precision than you ever would in playing golf. The interesting thing about Waze is that it uses crowdsourced information to inform you. But when you get close to your house, where you know how you're going to go, you probably don't use Waze; you ignore it because you have so much information about how to get home. That's the beauty of AI: it builds an average system, but you know the exceptions, and your model might be slightly different from the Waze model.

Larry Bernstein:

We had lunch with the founder and CEO of your robotics company at the Stanford Faculty Club. One of the things that struck me was that you were going to use the same standard screw, bolt, and solar panel as when a human does the job. That didn't seem right to me. Humans and robots have different sensors and understandings of the physical world, in terms of eyes and this glare problem. I would've made a much bigger hole, screw, and bolt, and I would've put in glitter, making an arrow from four spots in different colors pointing to the hole where the bolt belongs, so that a lower-grade eye knows where the hole is. Just because a project was originally designed for a human doesn't mean it shouldn't be redesigned for a robot, which has its own problems, its own error terms, to deal with. Humans have error terms; robots have error terms. They are different. Let's design some things for humans, and let's design some things for robots.

Myron Scholes:

That's entirely correct. You're bringing up the idea of a specific application: installing solar panels. The robot that is designed to install the solar panel doesn't need to be a humanlike robot. If it's going to be in the field, it may be better designed with tractor treads. When we put a robot on Mars to move around, we didn't have a humanoid robot; we had one that was functional for that particular activity. That's the beauty of designing something for a specific task as opposed to something general.

There's a large use now of general robots, and most of them are humanoid, based on the idea that we don't know exactly what they're going to be used for. The future of designing robots correctly is to think about the brain of the robot and the physical robot as integrated. Then you can have a much more efficient application. The robot that does the solar panel bolting could be different from the robot that puts the panel on the stand, because the robot that puts the panel on the stand has to be stronger and able to lift, while the idiosyncratic robot that does the bolting has to have different visual skills and much better sensors to distinguish things.

This ability to have flexibility is very valuable. You build it in hardware and it becomes hard-wired; you end up in a situation where you can't handle the uncertainty I referred to. When you know exactly what you want to do, you can make it as specific as you want for the task, but when you have less certainty, you move it toward software. Everything we do in evolution is moving things from hardware toward software, because of uncertainty and the demand for flexibility. In designing the robots you're talking about, or designing any AI program, we need to worry about flexibility, because of changes in uncertainty and changes in knowledge. The beauty of AI feeding the digital twin is that it enables you to make changes efficiently.

Larry Bernstein:

Thirty years ago, I saw a documentary film by Errol Morris called Fast, Cheap and Out of Control. It featured Rodney Brooks, who headed robotics research at MIT. Thirty years ago, our computer systems were lacking, our software development was in its infancy, AI didn't exist, and as a result, the robots were not that smart. NASA ran a challenge: they were going to Mars, and they wanted to do a simulation on Earth to encourage different robots to travel large distances. In the competition, instead of building one large robot like the one from Lost in Space, Brooks decided to go with nature and use lots of very small robots. He used something in the shape of a grasshopper. It would jump straight ahead, and if it hit an obstacle, its very simple brain would say: okay, that didn't work. How about we go 90 degrees to the right, then another 90 degrees, then make a full circle, and keep trying. And sure enough, Brooks' grasshopper robot won the competition. The essence of Fast, Cheap and Out of Control is that it's a cheap robot, seemingly random in its application.

At the time, there were limits on software and AI; there were limits on the brain. But now, as brains are developing, we might as well use them productively.

Myron Scholes:

That's a brilliant illustration of the idiosyncratic, and of how to use and learn from error. We think of robots that will do one task efficiently, like lifting the car frame and putting it on the car.

I went to a Foxconn facility in China that makes iPads. The room was maybe a couple hundred meters long; robots were integrated together, and iPads would come out finished. And if they ever had to change this line, which took them years to assemble, it would be so expensive they couldn't do it. Apple had to guarantee them enough iPads to justify this production line, which was set in stone.

Apple and Foxconn had made a hardware decision, which is inexpensive and efficient. Bang, bang, bang: iPads would come out. But if they ever had to change the system or change iPads, it would be necessary to reassemble things, and it would be very expensive to change that line. So over time, what we're seeing is a move away from fixed production robots, the stamping machines or the fixed Foxconn line, toward things that are more flexible, to handle uncertainty. And uncertainty is not just a draw from nature; uncertainty is also how things are changing: how demand could change, how new information comes along that was unknown at the time.

Larry Bernstein:

Daniel Kahneman was awarded the Nobel Prize. In his work he noticed that humans were making mistakes and not learning fast enough from them. I'm interested in the set of problems Kahneman is referencing and how AI will help us get around these errors.

Myron Scholes:

Kahneman and Tversky were the fathers of behavioral finance, the idea that individuals make systematic behavioral decisions, different from what you would make if you looked at the probabilities. For example, individuals have loss aversion: they don't want to take their losses but are willing to take wins.

AI is a learning system. There's repeated information, it'll address historical data, and if the exceptions are handled correctly, it'll learn.

In the behavioral environment, if you have one play of a particular decision, it's far different from when you have repeated plays. That's number one. Number two is the reward you're given. If you're doing experiments at the university, students get a small amount of money. The larger the amount of money you're dealing with, as in industrial processes, the brokerage house, or the robot example, the more it pays to improve your model to make profits or supply services efficiently to clients.

It's a much different situation along two dimensions. One is the repeated nature of the evolution of the AI system: the digital twin continues to learn what an error is and when to ignore it. That's where the human being has to come in, deciding when to ignore the error or when to expand the model's dimensionality.

Time, and how uncertainty changes, is what we have the skills to handle, which AI systems don't. They only have the data from the past. They're not creative in the same sense.

Larry Bernstein:

Let's use Einstein's theory of relativity as a case. With Newtonian physics, you'd use the model and there was an error term. In astrophysics there was an error. Is it a measurement error? What's driving this error? Is the model wrong?

Einstein came up with a new model, and then he devised an experiment: there would be an eclipse, and he could look at it from two points on the Earth. He had a prediction as to what that error term would be. Scientists went to those two locations, and sure enough, there was an error with the Newtonian model, the Einstein model's prediction was accurate, and therefore his theory had legs.

AI could be informed of an error term, and we could say: here's a model; give me some new models that would remove this error term. There are limitations to what AI can do. Describe AI's strengths and weaknesses, Einstein's strengths and weaknesses, how they would tackle this problem differently, and how they would tackle it better together.

Myron Scholes:

AI is great at historical data: having all the data from the past and the ability to use any model to address history. It could be full information, but that is not valuable. I asked my students what book is sufficient to determine everything, and they would say the Bible, and I said, no, it's a dictionary.

It has every word. You can reconstruct Shakespeare or any other book you want, but it's not valuable. Can you deduce things from the past that will give you different insights into the future? You can make better predictions of the future because you have that data readily available, and you can use it to garner information. But because it's past data, it won't be able to tell you about things that were not in the past data, the unusual, which you have to deduce on your own. Creativity is a combination of induction first, then deduction: you add up, and then you differentiate from the chaos around you. You try to find order, but the past data only gives you what we have discovered previously.

Einstein saw things in the past data that others didn't see, which gave him a better viewpoint of the future. AI is going to be very efficient at using past data. But how we handle the exceptions is crucial to building a system. Human beings will be necessary to work in conjunction with AI systems to facilitate new learning. AI will speed things up, make things more individualized, and give you the flexibility to change your model and your thinking. But AI is a tool, just as an Excel spreadsheet is a tool that makes your life more efficient; it's not going to replace what human beings can do.

Larry Bernstein:

Thanks to Myron for joining us.

If you missed the last podcast, the topic was Deporting Illegal Aliens. Our speaker was Andrew Arthur from the Center for Immigration Studies. Andrew is a former immigration judge and a former prosecutor with the INS. Andrew explained what due process is required in deportation proceedings for individuals here in the US illegally. We also discussed ways to expedite this legal process.

If you are interested in hearing more from Myron Scholes, he spoke two podcasts ago about Investing with Uncertainty. Myron explained why uncertainty is core to investing. He discussed why popular investment strategies that optimize the portfolio with 60% in equities and 40% in debt may be suboptimal. We also reviewed Warren Buffett’s investment success with reinsurance, the Burlington Northern Railroad, and very large purchases of convertible preferred stock in firms that were desperate.

I want to give a plug for next week’s podcast with Max Boot to discuss his new biography of Ronald Reagan.

You can find our previous episodes and transcripts on our website whathappensnextin6minutes.com. Please follow us on Apple Podcasts or Spotify. Thank you for joining us today, goodbye.
