Managing the Risks of Artificial Intelligence
Chapter #1 | Chapter #2 | Full Webinar Video
Artificial Intelligence (AI) is revolutionizing the business landscape, creating fresh opportunities for technology and life sciences firms to innovate and disrupt markets. Meanwhile, generative AI applications have brought the conversation about AI's transformative power and its challenges to the general public.
Chapter #1
The future of AI: What does the past tell us?
“The biggest economic benefits are going to be around what we call the system solutions. AI is the next big technology. It’s like the development of the internal combustion engine, and it’s like the adoption of electricity.”
Professor Avi Goldfarb, Rotman Chair in Artificial Intelligence and Healthcare at the University of Toronto’s Rotman School of Management, shared valuable insights with us on the economic benefits and challenges of incorporating AI into technology products and services.
(MUSIC PLAYING)
(DESCRIPTION)
Travelers logo. Text, The Future of AI: What Does the Past Tell Us?
A presenter, Avi Goldfarb. Avi, wearing a plaid shirt, speaks against a reflective white background.
Text appears in the bottom left corner, Professor Avi Goldfarb, Rotman Chair in Artificial Intelligence and Healthcare and Professor of Marketing, University of Toronto's Rotman School of Management.
(SPEECH)
AVI GOLDFARB: But the biggest economic benefits are going to be around what we call these system solutions. To get a sense of that, I think it's useful to look at a previous generation of technology, a general purpose technology, and how it played out. So if you've been paying attention to the hype around AI, you've heard people say things like it's going to transform the way we work and the way we live.
And it is the next big technology. It's like the internal combustion engine. It's like computing. And it's like electricity. And if we think about AI as the new electricity, I actually think that metaphor is much more powerful than many people appreciate.
What do I mean by that? Edison's patent for the electric light bulb was in 1880. It was clear in the 1880s that electricity was going to transform the way we lived and the way we worked.
But it wasn't until the 1920s, 40 years later, that half of US households and half of US factories had adopted electricity. It took 40 years of wondering to figure out how this clearly transformative technology would actually impact most people at home and at work.
(MUSIC PLAYING)
(DESCRIPTION)
White screen with the Travelers logo. The red umbrella of the Travelers logo shades the S in Travelers. Text, Copyright 2023 The Travelers Indemnity Company. All rights reserved. Travelers and the Travelers Umbrella logo are registered trademarks of The Travelers Indemnity Company in the U.S. and other countries.
Chapter #2
What are some of the risks with AI?
“As underwriters, we want to understand how things work. We want to consume lots of information so that we can identify the risks so that we can predict the losses. And to do this, we think a lot about the end use of AI. Are the stakes really, really high? Or are they low if something goes wrong?”
Amanda Bohn, Chief Underwriting Officer of Technology and Life Sciences at Travelers, provided perspectives on recognizing and managing evolving AI risks.
(MUSIC PLAYING)
(DESCRIPTION)
Travelers logo. Text, What Are Some of the Risks with AI.
A presenter, Amanda Bohn, she/her. Amanda, wearing a red blouse, speaks against a dark gray background.
Text appears on bottom left corner, Amanda Bohn, CPCU, Vice President and Chief Underwriting Officer, Technology and Life Sciences. Travelers.
(SPEECH)
AMANDA BOHN: We have such trust in our technology these days. There is just a high assumption of accuracy.
And what we're seeing in some of the AI these days is hallucinations. The AI is just making up an answer like my four-year-old. When he doesn't know, he just makes up the answer. And so these hallucinations are a bit of an error with a heck of a lot of confidence.
And in the absence of having a human that can apply that judgment to review that answer, it's just so easy for these hallucinations to perpetuate themselves. So you think about ChatGPT and Bard, they're getting a lot of these hallucinations. And no one in the field has solved for this. And the question is, will we? It's a matter of pretty intense debate.
So because these systems deliver all of this information with what seems like complete confidence, it's so hard for our users to tell what's right and wrong. And the speed at which this misinformation can spread has just vastly increased.
(MUSIC PLAYING)
(DESCRIPTION)
White screen with the Travelers logo. The red umbrella of the Travelers logo shades the S in Travelers. Text, Copyright 2023 The Travelers Indemnity Company. All rights reserved. Travelers and the Travelers Umbrella logo are registered trademarks of The Travelers Indemnity Company in the U.S. and other countries.
Watch the full replay: From natural to artificial intelligence: Reaping the benefits while managing the risks of AI
In this webinar, we examined how technology and life sciences companies might navigate new business risks associated with the development or use of AI. Renowned AI expert Professor Avi Goldfarb, Rotman Chair in Artificial Intelligence and Healthcare at the University of Toronto's Rotman School of Management and author of “Power and Prediction: The Disruptive Economics of Artificial Intelligence,” provided valuable insights about the economic benefits and challenges of incorporating AI into technology products and services. Amanda Bohn, Chief Underwriting Officer of Technology and Life Sciences at Travelers, shared perspectives on recognizing and managing AI risks.
Navigate to these timestamps in the full webinar below:
- Watchouts and limitations (02:30)
- Opportunities and risks (05:58)
- Economic benefits (08:20)
- A closer look for technology and life sciences companies (14:17)
- New risks for companies that develop AI (17:55)
- The spectrum of risk (24:12)
- Barriers of resistance (27:20)
- The importance of investing in AI (33:50)
- AI risk management resources (37:33)
- Advice for risk managers (39:46)
(DESCRIPTION)
Text, Travelers. From Natural to Artificial Intelligence: Reaping The Benefits While Managing The Risks of The Evolution of AI. Mike Thoma. Mike, wearing glasses and a cardigan, speaks against a dark gray background.
(SPEECH)
MIKE THOMA: OK, I think we'll get started. Welcome, everyone, and thank you for joining our webcast today, From Natural to Artificial Intelligence: Reaping The Benefits While Managing The Risks of The Evolution of AI.
I'm Mike Thoma. I'm the vice president and national practice leader for Travelers Global Technology and Life Sciences. Our organization specializes in understanding and developing insurance solutions for companies in the advanced technology and life sciences industries. Artificial intelligence is no longer just a buzzword used in tech circles.
It's become a part of our daily lives. From virtual assistants to personalized recommendations on streaming services and online shopping platforms, AI is changing the way we interact with technology. Generative AI models are now even engaging in household conversations.
However, the rise of AI is not without its risks. And as its positive impact continues to grow, so do concerns about its potential downsides. The evolution of AI has transformed it from a futuristic concept into a practical reality, and it's clear we can no longer ignore its impact on every aspect of our lives.
In this webinar, we're going to try and delve into the ways that AI is shaping the world of technology companies and how they can manage its risks while reaping its benefits. Today, I am joined by Avi Goldfarb. Avi is the Rotman chair in AI and Health Care and professor of marketing at Toronto's Rotman School of Management.
He's also chief data scientist at the Creative Destruction Lab, a faculty affiliate at the Vector Institute for Artificial Intelligence, and a research associate at the National Bureau of Economic Research. Avi is also co-author of the bestselling books Prediction Machines and Power and Prediction: The Disruptive Economics of Artificial Intelligence with University of Toronto colleagues Professors Ajay Agrawal and Joshua Gans. Also joining me today is Amanda Bohn.
Amanda is the vice president and chief underwriting officer at Travelers Global Technology and Life Sciences. In this capacity, she establishes the strategic underwriting direction of the practice and leads a team of underwriters throughout the United States specializing in technology and life sciences. As of right now, we've got about 600 of you on the phone.
So I'm assuming if you're joining this call today, you share my enthusiasm for this exciting and emerging topic. So I really hope that you find today's conversation interesting and informative. And with that, I'm going to kick it off, and I'm going to direct my first question to Avi.
Avi, it seems you can't pick up a newspaper or, in my case, I can't pick up my phone and not see multiple headlines in my newsfeed talking about AI. With the heightened awareness and excitement around emerging AI technologies, like ChatGPT and Bard, what are some of the watchouts and current limitations for individuals and organizations using these services?
(DESCRIPTION)
Text, Avi Goldfarb. Avi, wearing a plaid shirt, speaks against a reflective white background.
(SPEECH)
AVI GOLDFARB: Hi, Mike. Great to be here. And that's a fantastic opening question thinking through, when we're talking about artificial intelligence in 2023, what do we really mean? And the first thing to remember is we are not in a world of machines that can think like you might imagine from science fiction. We're a long way from The Matrix and The Terminator.
And what we have are prediction machines. They're machines that take advantage of advances in deep learning and computational statistics to use data we have to fill in missing information. So what's changed over the past decade or two in artificial intelligence is that we've become much better at taking information we have and filling in missing information.
And that means they're great when the data are present. So when we have lots of data, prediction machines can fill in missing information of other similar situations. When they break down, they break down when the data that we've used to train the machine is not relevant to a current decision in the current situation.
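To make this concrete, here is a minimal sketch (our illustration, not code from the webinar) of prediction as filling in missing information: a simple nearest-neighbor model averages similar past examples, and its output becomes unreliable once the input looks nothing like the training data.

```python
# Minimal sketch: "prediction machines" fill in missing information from data.
import numpy as np

rng = np.random.default_rng(0)

# Training data: two observed features plus the value we later want to fill in.
X_train = rng.uniform(0, 10, size=(200, 2))
y_train = 3.0 * X_train[:, 0] - 2.0 * X_train[:, 1] + rng.normal(0, 0.5, 200)

def predict(x_new, k=5):
    """Fill in the missing value by averaging the k most similar training rows."""
    dists = np.linalg.norm(X_train - x_new, axis=1)
    nearest = np.argsort(dists)[:k]
    return float(y_train[nearest].mean())

# In-distribution input: the prediction is close to the true value (3*4 - 2*6 = 0).
print(predict(np.array([4.0, 6.0])))
# Far outside the training data: the machine still answers, but unreliably.
print(predict(np.array([90.0, -50.0])))
```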
Now, what we've seen overwhelmingly so far is the use of prediction machines, the use of AI, in what we call point solutions, which is: you think through your company's workflow. You identify some predictions you're already doing with perhaps some human process. You take out the human. You drop in the machine. And you don't mess with the workflow because that's easier.
Every time you change the workflow, it's a pain. You've got to get all sorts of people to coordinate and cooperate, and it's hard. And so typically, what we've seen so far are these point solutions where you take out a current process, and you drop in the machine at the exact same point. Those work.
But a lot of companies have implemented these point solutions and said, you know what? The juice hasn't been worth the squeeze. We invested millions or more in these data systems to make AI happen. And ultimately, all we did was save 1% on our costs. That's not worth it.
What we emphasize in our book, Power and Prediction, is that the biggest changes are going to happen when organizations are ready to find new ways to deliver value. This is what we call a system solution, where rather than just doing the same thing you always did but a little bit better, you take advantage of what prediction technology offers and figure out, if you had a little more information, what you could do differently. And that leads to more than doing the same thing you always did but a little bit better; instead, it's an opportunity to deliver an entirely new kind of value to your customer base.
MIKE THOMA: Thank you, Avi. So if I can summarize what I think I heard you say, it's an emerging technology. There's lots of potential to improve outcomes, but they're not without risks. Yeah, a pretty good topic for a webinar, I think. All right, I'm curious, how do you view the current state of all types of AI, the opportunities and the risks?
AVI GOLDFARB: Right. So again, the starting point is to recognize there are risks of machines taking over the world like they did in The Terminator, but those risks are not relevant to us on a day-to-day basis or to what you guys need to worry about in the short term. The risks we need to worry about come from the recognition that these are prediction machines. And that means a few things.
Predictions come with uncertainty. There's variance. And so you're going to get a point estimate out of your prediction, but you're also going to get a confidence interval. And you need to understand and embrace that when the machine gives you a prediction, that it doesn't tell you for sure what's going to happen.
It's like any other prediction. You guys are in insurance. You understand that idea. And so in making decisions based on prediction machines, you need to embrace that uncertainty and embrace that-- and think through the fact that even though you don't know for sure, there's a lot you can understand.
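As a worked example of that point, here is a minimal sketch (hypothetical claim data, not Travelers figures): treat the model's output as a point estimate and report a confidence interval alongside it, here via a simple bootstrap.

```python
# Minimal sketch: a prediction is an estimate with a confidence interval,
# not ground truth. Bootstrap resampling shows how much the estimate varies.
import numpy as np

rng = np.random.default_rng(1)
losses = rng.gamma(shape=2.0, scale=1000.0, size=500)  # hypothetical claim amounts

point = losses.mean()  # point estimate of the expected loss

# Resample the data many times and recompute the estimate each time.
boot = np.array([rng.choice(losses, size=losses.size, replace=True).mean()
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"point estimate: {point:.0f}")
print(f"95% confidence interval: [{lo:.0f}, {hi:.0f}]")
```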
A second very important risk is that even though I think there's many reasons to think that machine predictions are much more accurate than human predictions, they leave a trail. That's good in general. Leaving an audit trail means you can improve them and make the world a better place by seeing what went wrong.
But it also means that anybody can see what went wrong. And with human processes, there's often an ambiguity about whether a mistake was made or everybody actually made the best decision they possibly could have, and there was bad luck. With a prediction machine, there's an audit trail. And that creates another whole set of risks when you implement them in companies.
MIKE THOMA: Yeah, so I think that being in the insurance industry, having some of that transparency can certainly affect the liabilities downstream. So I think it would be something that's interesting to everybody on this call. With these transformational solutions you're referencing, I imagine that there are some economic benefits that companies can expect that will be compelling. What do you think those economic benefits will be?
AVI GOLDFARB: OK, so what we've seen so far is there's a handful of companies that were well positioned to take advantage of these point solutions, where there was this really expensive part of their workflow. Like in banking, fraud detection was an expensive part of the workflow, and all sorts of people were trying to do fraud detection. And we've had AI as the point solution there.
But the biggest economic benefits are going to be around what we call these system solutions. To get a sense of that, I think it's useful to look at a previous generation of technology, a general purpose technology, and how it played out. So if you've been paying attention to the hype around AI, you've heard people say things like it's going to transform the way we work and the way we live.
And it is the next big technology. It's like the internal combustion engine. It's like computing. And it's like electricity. And if we think about AI as the new electricity, I actually think that metaphor is much more powerful than many people appreciate.
What do I mean by that? Edison's patent for the electric light bulb was in 1880. It was clear in the 1880s that electricity was going to transform the way we lived and the way we worked.
But it wasn't until the 1920s, 40 years later, that half of US households and half of US factories had adopted electricity. It took 40 years of wondering to figure out how this clearly transformative technology would actually impact most people at home and at work. And what took so long is they had to figure out what the technology really could do.
What do I mean by that? In the 1880s, the logic of a factory was determined by power needs because the steam engine or the waterwheel would have been at the center of the factory. And if you remember your high school physics-- you may or may not-- energy dissipates with distance.
And since every single machine in the factory had to be connected to the steam engine by these belts, they tried to locate the machines as close as possible to the steam engine. And so the logic of the factory, the microgeography of the factory, was determined by the power needs of the various machines. And so the workflow in the factory was determined by which machines needed to be closest to the power source.
In the early days of electrification of factories, all they did was take out the steam engine, drop in an electric motor at that exact same point, and that's it. They didn't change anything else. And they might have saved 5%, 10%, or even 15% on energy costs, but that's it.
And for most factory owners, it wasn't worth it to save a little bit on energy to figure out, how do you get electricity distributed into your factory? How do you set up the wires? How do you deal with new fire needs? Within the factory, how do you connect all the machines that used to be connected through belts to this central power source?
And so even by 1900, less than 5% of US factories were electrified. And then around 1900, people started to realize that electricity wasn't just cheap power. Electricity was distributed power.
What electricity did is it decoupled the power source from the machine. And so you could put your machines anywhere you wanted. They were no longer constrained by this need to keep it close to the power source. And once that happened, we invented what you think of as the quintessential 20th century factory, with inputs coming in one end and outputs coming out the other, modular production where the organization of the factory is determined by a logical workflow from inputs to outputs.
Once that happened, we saw a rapid increase in the adoption of electricity in factories and a huge increase in the productivity and the output of those factories that did adopt. It required the invention of an entirely new system in order to really take advantage of what the technology could offer. OK, so what does this have to do with AI?
With AI, it feels like we're in the 1890s. We're in these times between recognizing the potential of the technology and figuring out what those new systems look like. Once we figure out what those new systems look like, the ability to deliver value to our customers becomes extraordinary.
We've seen it in a handful of industries already. The advertising industry has been transformed by better targeting. Today's advertising industry looks very little like the Mad Men industry of the 1960s, and that is largely because of prediction technology and targeting.
We've seen it a little bit in personal transportation, where Uber, Lyft, and others combine digital dispatch and navigational predictions to enable almost anybody to be a professional driver, assuming they know how to drive. But in most other industries, it hasn't happened yet. And where we're going to see the huge upside potential is as we go industry by industry, reinventing ourselves like what happened in the advertising industry and ad tech over the past decade or two.
MIKE THOMA: OK, so that's fascinating. But we focus on the technology space, and I would think that the opportunity for change there is just as great or greater. So if you think about the industries we target, what do you see for technology industries, including life sciences and medical technology companies?
AVI GOLDFARB: OK, so I think the upside is even bigger in health care and medical tech and life sciences. But there's a handful of underlying challenges that are going to be key sources of resistance to really, ultimately, delivering better care and better medicine. So there are two core barriers here; one is going to be regulatory.
For very good reasons, we are careful about what life sciences and medical technology we're allowed to have. But if you see an ad that you don't like, who cares? It doesn't really matter. But if you receive a medical treatment that's the wrong treatment, that's a big deal.
And so because the stakes are so high, there are reasons for a much more cautious regulatory environment. So that's challenge number one. Challenge number two is that the decision makers in life sciences and in health care are often people who have been selected and trained around diagnosis.
And diagnosis is fundamentally prediction. You're taking data about symptoms and filling in the missing information of the cause of those symptoms. And because doctors tend to be so central to decision making in health care, you should expect some resistance to a machine that might displace some of the central role that doctors play and instead empower nurses and pharmacists and others.
But at the same time as those challenges I just described, there are these incredible opportunities. So if we have machines that can diagnose effectively and at scale, there's a whole bunch of new opportunities, for example, for treatments that you might not have imagined before. So if diagnosis of disease is slow and not that personalized, not that targeted, it might only be worth it to develop a couple of different treatments for lung cancer, for example.
But if you have a prediction machine that can diagnose not just the high-level disease, but be very specific at scale for the entire population of which particular kind of cancer this might be, then it becomes useful and worth it for the pharmaceutical companies, for example, to develop treatments for these narrowly defined diseases. So rare diseases don't get treatments often because there isn't a big enough market. But if you start diagnosing at scale, that creates a business opportunity on the other side of things.
So there's real barriers in life sciences and health care, but there are some incredible opportunities. And more generally, health care is an industry with a lot of room for productivity improvement and a lot of room for better health for patients and better treatment for patients. I think it's a really exciting place.
MIKE THOMA: All right, I think that's a great segue to our other panelist. Amanda, from your vantage point, what new or increased risks do you see for those companies that are developing these AI solutions and for those companies that are using the technology?
(DESCRIPTION)
Text, Amanda Bohn, she/her. Amanda, wearing a red blouse, speaks against a dark gray background.
(SPEECH)
AMANDA BOHN: Well, I mean, no question, there is great promise in this technology, especially for life sciences and health care, like Avi was pointing out. But this innovation does come with quite a bit of risk. And that's our tagline in Travelers Tech and Life Sciences. Innovation creates risk, and we insure it. So we understand this quite a bit.
And as business leaders and AI developers and users, we all need to understand the risks that this tech presents. AI's been around a really long time, and at Travelers, within the Technology and Life Sciences practice, our underwriters are really familiar with it and how to approach it.
But it seems like things are shifting right about now. Maybe you might even say we're entering a bit of a perfect storm. Adoption is increasing exponentially. There's this lack of corporate accountability. There's a massive lack of regulation.
There's just so many unknowns. And like Avi said, the AI is predicting. And when the AI doesn't have the data, it gets it wrong. And underwriters predict things too. And believe it or not, we get it wrong sometimes too.
And there are so many things that developers and users really need to think about, and I'm only going to highlight a couple. So to start, when I think about the developers, I think about three things primarily: the security and privacy risk, the explainability risk, and the risk to reputation. For security and privacy, the data could possibly be used for unintended purposes.
There is just no way for the developer to predict or foresee all of the use cases. Second, the explainability risk: the uncertainty in the decisions that are made by the AI system and the lack of understanding of that decision-making process, especially if it's a black box system. In a black box system, the inputs or the processes are either hidden from public view or just so gosh darn complex that a human can't tell how the AI was trained or how it got something wrong.
Lastly, the risk to the company's reputation. If developers fail to mitigate the risk while pursuing all of these awesome benefits, it could lead to public criticism, reputational harm, or costly investigations and lawsuits. So then I think about the flip side, the risks to users. And I think it's important to point out that, Mike, Avi, you and I, we're all users. We use AI in our day-to-day lives.
But there are also business leaders at companies that use AI solutions provided by a third party. And I think it's really important to remember that you can't assume that you aren't responsible for potential mistakes or bad outcomes. In many cases, the businesses that buy these AI-enabled tools are still accountable for the programs, their outcomes, and their effects.
So with that backdrop, when I think about users, I think about a few things: safety, accountability, bias, and accuracy. For safety, the risks associated with unintended results could possibly lead to injury, death, or property damage. It really depends on the use case.
Another risk I think about is accountability. So if the AI doesn't work as intended and it leads to injury or loss, who is accountable? Is it me? Am I accountable as the user? Is it the company that made the AI? Is it the creator of the software program that embedded the AI?
These are super tough questions, and I don't have the answers. And what complicates matters is that with a black box system, we can't tell what caused the error or who is accountable. Then I think about bias risk. The data set just might not be diverse enough, or it just might be incorrect. The data labeling might just be wrong.
And researchers are raising a lot of ethical questions these days, suggesting that it could perpetuate the existing biases that we already have in society, invade our privacy, or spread misinformation. And then, Mike, the last risk I think about is accuracy risk. We have such trust in our technology these days. There is just a high assumption of accuracy.
And what we're seeing in some of the AI these days is hallucinations. The AI is just making up an answer like my four-year-old. When he doesn't know, he just makes up the answer. And so these hallucinations are a bit of an error with a heck of a lot of confidence.
And in the absence of having a human that can apply that judgment to review that answer, it's just so easy for these hallucinations to perpetuate themselves. So you think about ChatGPT and Bard, they're getting a lot of these hallucinations. And no one in the field has solved for this. And the question is, will we? It's a matter of pretty intense debate.
So because these systems deliver all of this information with what seems like complete confidence, it's so hard for our users to tell what's right and wrong. And the speed at which this misinformation can spread has just vastly increased.
MIKE THOMA: Amanda, that's scary. I have this vision of a confident four-year-old out there making all sorts of important decisions for big corporations. But there's this recurring theme that AI is really only as good as the data that supports it. And obviously, depending on the situation, the risks associated with that inaccuracy could be vastly different. So, Amanda, when you think about the spectrum of risk created by AI, are there characteristics that make some AI higher hazard than other AI?
AMANDA BOHN: Yes. In short, Mike, yes, there are. And as underwriters, we want to understand how things work. We want to consume lots of information so that we can identify the risks so that we can predict the losses. And to do this, we think a lot about the end use of AI.
Avi mentioned stakes. Are the stakes really, really high, or are they pretty low if something goes wrong? So we put them on a risk spectrum from low to high. An example on the low side of the risk spectrum would be AI that is optimizing a web server or maybe predicting what show I should watch tonight on Netflix. Those are pretty low stakes.
Then on the other side of the spectrum is AI that's diagnosing a medical condition that I have based on the lab results that were input into it. So those would be the high stakes. Another characteristic that might help us discern high from low, Mike: if there's no human involvement or no human oversight, that could be pretty high risk or high stakes.
Is the system transparent? Can we see how it was built? Or can we see how it makes its decisions? And if we can't, I would consider that on the high-risk side of the spectrum.
So making the system available for review or audit by external parties will really help determine the error and also the liability if something were to go wrong. And that becomes so much more important in a high-stakes environment. And one could argue there's a bit of a responsibility on the AI developer to provide that information.
The other things that come to mind are, are there clear guidelines that are set for the users of the system outlining, how should this AI be used? Are the systems-- do they have limitations? And are those limitations widely known and visible to the user?
Because no AI is perfect. They're just predicting, which is what Avi mentioned. So if it's high risk and we can't tell what the limitations are, that could be concerning. The last thing that comes to mind is, is there a feedback mechanism or a feedback loop so that the user can report information to the developer if something goes haywire or if they happen to observe a hallucination?
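The characteristics Amanda walks through read like a checklist. Below is a hypothetical sketch of that checklist in code; the fields follow her list, but the tiering rule is our own illustrative assumption, not a Travelers underwriting rule.

```python
# Hypothetical checklist for placing an AI system on a low-to-high risk spectrum.
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    high_stakes_use: bool    # e.g., medical diagnosis vs. picking a show to watch
    human_oversight: bool    # is a human reviewing the output?
    transparent: bool        # can we see how it was built and how it decides?
    documented_limits: bool  # are the system's limitations visible to the user?
    feedback_loop: bool      # can users report errors back to the developer?

def risk_tier(p: AISystemProfile) -> str:
    """Crude illustrative tiering: missing safeguards push the tier up."""
    missing = sum([not p.human_oversight, not p.transparent,
                   not p.documented_limits, not p.feedback_loop])
    if p.high_stakes_use and missing >= 2:
        return "high"
    if p.high_stakes_use or missing >= 2:
        return "medium"
    return "low"

print(risk_tier(AISystemProfile(True, False, False, True, True)))  # high
print(risk_tier(AISystemProfile(False, True, True, True, True)))   # low
```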
MIKE THOMA: OK. That makes a lot of sense. I'm thinking about one of the more highly visible efforts in AI that gains a lot of attention today, and that's autonomous vehicles. And you think about the decisions that we're trying to program into those vehicles.
I don't know about the rest of the folks on the phone. I'm not sure I'm ready to jump into a car with no steering wheel or brake pedal. So I understand the risk there. Avi, I think that raises another question. I'm curious, what do you think are some of the barriers of resistance you foresee companies facing as they contemplate the adoption of AI?
AVI GOLDFARB: So Amanda just went through a whole bunch of things that can go wrong, and all of those are excuses for barriers and resistance. And as we're thinking through those, let's not forget the big picture, which is that humans are terrible drivers. We get into accidents all the time. We're really quite bad at it.
And there's reasons to expect that machines will reduce the number of accidents, even while all those risks that Amanda described will happen. They are auditable. There's biases. All of that will be there. But at the same time, in aggregate, there's reasons to expect that they're going to be better than human.
On writing with ChatGPT, yeah, there's risks and biases again, but again, there's reasons to expect in many places they're going to be better than human. In medical diagnosis, there will be mistakes and there will be problems. But the 25th percentile radiologist is a lot worse than the 90th percentile radiologist.
And we should expect machines to be at least as good as the 50th percentile. And that can save a lot of people's lives, especially those who currently don't have access to the very best medical care. And so if all of this is so amazing, then the question is, well, why aren't we jumping on all of it?
And there's some technical barriers to it, for sure. And autonomous vehicles, that's a big part of it. But there's more to it than that. So let's take a step back and think, what are the AIs that we have right now? They're prediction machines, and they help us make better decisions.
They don't make decisions. Humans make decisions by deciding which predictions to make and what to do with those predictions once we have them. Now, who's going to resist better and better and better predictions? It's the people who already have it great.
The people who benefit from the biases and the way the current system operates aren't going to like change. People in power tend not to like the revolutions. If we're comfortable with the way things are now, that's where the resistance is going to come from.
And there's a story that happened to Major League Baseball a few years ago that I think demonstrates this idea really clearly. So think about what we ask our human umpires to do. There is a ball, a tiny, little ball, about the size of my fist. It's going at 95 miles an hour over a plate, a piece of wood, that's roughly the size of your computer screen. Depending on the size of your screen, maybe even smaller.
And there's a human who has to decide whether it goes over that plate between somebody's shoulders and somebody's knees. And every 10 times they do something, that person changes, and the height changes. And they have to do this hundreds of times over the course of three hours or so. That's crazy. That is not a human task.
And it's amazing that umpires can even attempt to be close to accurate. And they're pretty good. But about 20 years ago, Major League Baseball realized that they make some mistakes. They thought they could bring in a machine to call balls and strikes better.
And they vetted various technologies, and they found a machine that could identify whether a pitch was a ball or a strike better than the human umpires. And they started to experiment with it. And the human umpires didn't really like it. But ultimately, in Major League Baseball, umpires aren't the source of decisions and power. And so if the system was going to work, they were going to use it even if the umpires didn't like it.
But the umpires weren't the only people who didn't like it. The superstars of the day also didn't like it. Two of the most prominent superstars at the time were Barry Bonds and Curt Schilling. And they hated this new system. Why?
Well, when Barry Bonds was at the plate and he didn't swing, the umpires gave him the benefit of the doubt. If it was close, it was called a ball. And so when we brought in a fair system-- when Major League Baseball decided, oh, you know what, everybody now has the same strike zone, Barry Bonds had a lot more strikes called against him.
He didn't like that. He benefited from the biases inherent in the old system. And bringing in a new, better, fairer system, yeah, it might have helped the nobodies, but it didn't help the superstars. And so they resisted so much that baseball ended up giving up on that for a long time and said, OK, we're going to go back to the human decision because we like our superstars benefiting from the inherent biases in the human umpires.
So challenge number one is, where is the resistance going to come from? A lot of the resistance is going to come from the people who benefit from the way things are today. The second challenge is that doing system-level change -- moving beyond a point solution, beyond taking something out of an existing workflow and dropping in the AI without messing with anything else, and actually trying to deliver a new kind of value to some of our stakeholders -- requires coordination across different parts of the organization.
And that means typically that you need CEO-level buy-in for what you're trying to do because you need marketing to talk to finance. You need underwriting to talk to marketing. And once you have everybody talking to each other, well, they might not see the world in the same way.
And so system-level change is difficult. And if system-level change is what's needed to make the millions or more that it takes to invest in an excellent AI system worthwhile, then those coordination-level challenges are going to be a major barrier to making anything happen.
MIKE THOMA: OK. That's interesting, Avi. But it feels like today, using an AI tagline is ubiquitous. Everybody's product or service has AI. So how important is it for technology companies to invest in AI, knowing that the risks of AI are evolving just as fast?
AVI GOLDFARB: The starting point for any strategy shouldn't be the technology. The starting point for strategy should be your mission. What are you actually trying to accomplish as an organization?
And then when you think through, what does this new technology offer? Don't think through how you deliver on your mission well. Think about the various things that you do where you fail to deliver on your mission.
How much of your standard operating procedures are about compensating your customers or other stakeholders for the fact that you don't do what you should do? What do I mean by that? Here's an example. Think about airports. Take Seoul Incheon International Airport.
And by many accounts, it is the best airport in the world. It has fantastic shopping, great restaurants, big, open spaces, greenery. It's about as spectacular as an airport gets.
But this isn't how the super rich fly. The super rich don't fly through these beautiful multibillion-dollar structures. The super rich fly through tiny sheds: private terminals that look nothing like these beautiful structures. They have low ceilings. They're cramped. They're dark. If they have a magazine rack, it might be the same magazine over and over and over again.
How does that make sense? How do the people who can afford the ultimate in air transportation-- how did they get these crappy airports, and the rest of us get these beautiful multibillion-dollar structures? Well, the reason is nobody wants to spend time at an airport.
The reason we have these multibillion-dollar airports with fantastic shopping and great restaurants and all that is because these airports are failing to deliver on their mission. Seoul Incheon's mission is to deliver smooth air transportation. Restaurants aren't about smooth air transportation. Shopping isn't about smooth air transportation. That's about the fact that you're stuck at the airport and not on the plane.
The ultimate in smooth air transportation would be you have a great prediction about how long it's going to take to get to the airport, through security, and to the gate. And you arrive at the airport, walk to the gate, get on the plane, and it takes off. That's how the super rich get to fly. That's the ultimate customer experience.
And you think about airports, these multibillion-dollar structures, so many of their standard operating procedures are about failing to deliver on their mission. In any industry, there are all sorts of things that you do that aren't about delivering what you really could but about the fact that you try to compensate your stakeholders for your failures. And looking at those places, that's where the biggest opportunities for change arise and also where the biggest challenges in terms of startups coming in and disrupting the entire industry are going to take place.
MIKE THOMA: OK, so what I heard you say is that successful adoption of AI requires really giving thought to process and improving competitive advantages. And as you said earlier, the point solutions are really not going to drive huge competitive advantage. It has to be systems-level change.
But at the same time, given the potential that AI offers, I think it's safe to assume that we will see more and more companies incorporating AI into their products and services. So, Amanda, with all of these companies exploring AI, what types of AI risk management resources are available?
AMANDA BOHN: Yeah, Mike. So the development of AI and adoption is just going to continue to increase, and it's going to be leading to more intense debate with big tech, politicians, and litigators. And I think we'll actually see the risk management resources that are available increase a great deal. And, Avi, Mike, maybe we should come back in a year or two and do this again and see what else is available out there for these risk managers to help mitigate this risk.
But for now, there's a couple things that come to mind. So first is there's an organization out there called NIST. It's the National Institute of Standards and Technology. And they created an AI risk management framework that identifies the categories of risk associated with AI. And risk managers at companies that develop AI or use AI, as well as the agents and brokers that might counsel these customers, should really be familiar with this framework.
And what it does is it breaks down the seven categories of risk associated with AI. Some of these we already touched on today: accountability, safety, reliability, bias, security, privacy, explainability. And that's all well and good and incredibly helpful. And I think we should all be really familiar with the NIST framework.
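As a rough sketch, those categories can be turned into a review checklist. The category names below follow Amanda's list; the questions are our own paraphrases of points raised in this webinar, not NIST's official wording.

```python
# Hypothetical review checklist keyed to the seven risk categories named above.
AI_RISK_REVIEW = {
    "accountability": "Who answers for a bad outcome: developer, user, or embedder?",
    "safety":         "Can a wrong output cause injury, death, or property damage?",
    "reliability":    "Does it behave consistently on data like its training data?",
    "bias":           "Is the training data diverse enough and correctly labeled?",
    "security":       "Can the system or its data be attacked or misused?",
    "privacy":        "Could the data be used for unintended purposes?",
    "explainability": "Can a human tell how the system reached its answer?",
}

for category, question in AI_RISK_REVIEW.items():
    print(f"{category}: {question}")
```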
The other thing that comes to mind, though, is just good old-fashioned contractual risk transfer, also known as good, strong contracts, to ensure that the liability is placed with the party that has the most control over it. And a good risk management program can help organizations that develop AI better protect themselves by reducing the financial exposures that they may face. And I am going to add that Travelers Tech and Life Sciences, we're producing a technical paper that walks through this in pretty great detail.
And it's coming out next month. And we'll make sure that all of you get a copy of it. So, Mike, that's how I would answer that.
MIKE THOMA: Fantastic. So, Avi and Amanda, this has been a great conversation. I've got one last question that I'll ask both of you. If you could give one piece of advice, what would you give to risk managers and organizations about the steps they can take to protect themselves and their organizations from harm as they start down this AI journey? So, Avi, I'll ask you that first.
AVI GOLDFARB: OK. I want to return to something I said way at the beginning, which is these are prediction machines, and they're statistical predictions. This is computational stats. And we might not have so many people on the webinar if we called it understanding computational statistics, but that's what's happening.
So we talk about it as AI, but really it's computational stats. And computational stats are-- just the advances that we've seen in the last 20 years are extraordinary. But once you recognize that it's computational stats and it's not some artificial intelligence, you realize what you have when you're using them is an estimate, and that estimate has variance.
And the biggest mistake I've seen companies make over and over again in the deployment of AI systems is to think that the estimate, the prediction that comes out of an AI, is ground truth and to forget that it is an estimate with a confidence interval, with a standard error. And once you embrace, on the risk and risk management side of things, that you have uncertainty, you can deliver much better products and much better services and mitigate the big-picture potential for harm.
If you treated what comes out of the AI as, for sure, the right thing, you are, for sure, going to be overconfident. And if the stakes are high, that will lead to disaster. But if you recognize that there's uncertainty in those predictions, and you build in systems to account for and accommodate that uncertainty, then we can build systems that are much, much better than whatever we have now.
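One way to read that advice, as a minimal hypothetical sketch: act on the interval rather than the point estimate, and route uncertain cases to a human.

```python
# Hypothetical decision rule that accounts for prediction uncertainty.
def decide(lo, hi, threshold, max_width):
    """Approve or decline only when the whole interval agrees; otherwise escalate."""
    if hi - lo > max_width:
        return "escalate to human review"  # too uncertain to automate
    if lo > threshold:
        return "approve"                   # confident even at the low end
    if hi < threshold:
        return "decline"                   # confident even at the high end
    return "escalate to human review"      # interval straddles the threshold

print(decide(lo=0.65, hi=0.79, threshold=0.6, max_width=0.3))  # approve
print(decide(lo=0.35, hi=0.95, threshold=0.6, max_width=0.3))  # escalate
```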
AMANDA BOHN: So what I might add to that is -- and, Avi, it's a good thing you didn't name this webinar because calling it computational stats, we wouldn't have had such great engagement. But, Mike, I can't pick just one. So when I think about a couple pieces of advice that companies can use to protect themselves, the first, similar to what Avi said, is really recognizing limitations.
When I think about the creators and the developers, it's about being really transparent with users about the data's limitations so that users understand what they're going to get when they use the system. And then additionally, creating that feedback mechanism so that if they experience hallucinations or something doesn't work in the system, the user can circle back to the developer, and the developer can further improve the technology.
The last thing I'll mention is for all of those tech and life science companies listening out there: I would really encourage you to partner with an agent and broker, as well as an insurance carrier, like Travelers -- to be with a company that understands the technology, as well as how to underwrite it, so that we can all work together in this AI evolution journey.
MIKE THOMA: All right, thank you. OK, with that, we're going to bring the webinar to its close. I do want to thank both Avi and Amanda for their participation and the insights that they shared. I also want to thank all of you for joining us today. Hopefully, you'll be able to take some of what you heard and put it to good use. But with that, I thank all of you for attending.
(DESCRIPTION)
The red umbrella of the Travelers logo shades the S in Travelers. Text, Copyright 2023 The Travelers Indemnity Company. All rights reserved. Travelers and the Travelers Umbrella logo are registered trademarks of The Travelers Indemnity Company in the U.S. and other countries.