It’s impossible to get away from artificial intelligence – machine learning increasingly drives much of our lives, from the shows we watch on streaming TV to more critical pursuits, like our healthcare. But with every positive interaction with AI comes negative potential, such as the way AI may introduce bias into results and its seeming lack of transparency.

That’s where Fei-Fei Li steps in. A former vice president at Google and former chief scientist of the AI Machine Learning Group at Google Cloud, Li is laser-focused on shaping how AI will be designed and used in the future. 

“The endgame is to create a technologically-driven business or application, or the technology itself to benefit humans,” Li, the co-director of Stanford University’s Human-Centered AI Institute, told John Battelle on the Signal stage in 2021. “So to do that, one of the most important things is to embed human societal and ethical design into every step of the way, from the research, basic science research, to the education, to the application, to the policymaking of AI.”

Watch the full conversation between Li and Battelle below, or scroll down for the lightly edited transcript:

TRANSCRIPT

John Battelle
Next, we’re returning to one of our key pillars at Signal, which is artificial intelligence, and we have a towering figure in the field as our next guest, Dr. Fei-Fei Li, the inaugural Sequoia Professor in the computer science department at Stanford University and co-director of Stanford’s Human-Centered AI Institute. Fei-Fei served as the director of Stanford’s AI Lab from 2013 to 2018. During her sabbatical from Stanford, for about a year and a half or so, she was a vice president at Google and served as Chief Scientist of the AI Machine Learning Group at Google Cloud. Dr. Li’s current research interests span much of artificial intelligence: machine learning, deep learning, computer vision, and AI in healthcare, including ambient intelligence systems for healthcare delivery. Welcome to Signal, Dr. Li.

Fei-Fei Li
Thank you, John. So good to be here.

It’s great to have you. I think this may be the first time we’ve ever spoken. But I’ve read your papers, I’ve read about you. It’s an honor to be speaking with you. It’s wonderful.

This is the first time we’ve met, but I also realized we shared a very important part of our lives in the same place, which is Pasadena.

You have to be kidding me. Are you from Pasadena?

No, I did my Ph.D. at Caltech.

I went to high school across the street from Caltech. And I always knew that I would not be going there.

A bunch of nerds.

Although I’ve made a career of writing about and talking with people who are far smarter than me about things like artificial intelligence. Thank you for coming to Signal, I appreciate it. Your role at Stanford, among many other things, includes being the co-director of the previously mentioned Human-Centered Artificial Intelligence Institute. What is human-centered AI?

Thank you for asking that question. AI is the latest wave of the technological and scientific revolution we have seen. It’s so massive and horizontal; it ranges from smart data analytics all the way to machine learning, machine decision-making, and assistive technology for human decision-making and action. In this context, it’s very important to recognize that AI is impacting, and will be impacting, everyone’s lives and every business. This begs the question of what kind of future we want, given this technology is revolutionizing our businesses and lives. The future that I want, and that so many of my colleagues at Stanford, as well as in the field of AI, want, is one that is benevolent to humans. We want this technology to bring its positive potential and not the negative consequences that would impact individuals, communities, and people around the world. So human-centered AI is really about ensuring that the development of AI, as well as the practice, deployment, and application of AI, is human in its mission, positive, and benevolent. That is the definition of human-centered AI.

How do you put that into practice? I’ll say one thing, having been a long-standing student of Silicon Valley: no technology has raised as much proactive concern as artificial intelligence. You did not see the winners of the early technology eras of the ’70s, ’80s, and ’90s being super concerned about the emergence of Google or Facebook. But you see Bill Gates, Elon Musk, and others taking very strong stances about artificial intelligence and saying we can’t get this one wrong. So how do you take that concern and put it into practice?

Great question. I think beyond the boundaries of Silicon Valley, I think rightfully so, the whole world is concerned about such a powerful technology. I think the bottom line is that we need to recognize what is the endgame here. The endgame is to create a technologically-driven business or application, or the technology itself to benefit humans. So to do that, one of the most important things is to embed human societal and ethical design into every step of the way, from the research, basic science research, to the education, to the application, to the policymaking of AI. We need to embed ethics, human concerns, and human values into this technology.

This gets into some sticky wickets, I would say, because when you start to think about answering a question like, ‘What is ethical?’ people disagree. How do you go about your research when there’s a very thin layer between pure science and politics?

First of all, I am speaking from the perspective of the academic world, where the bulk of our activity at Stanford, under the work of the Human-Centered AI Institute, is focused on research, education, relevant application areas, and policy outreach. So your question is, ‘What are ethics, and whose values?’ I think the most important thing to recognize is not what I think or what anyone thinks, but the methodology, how we approach this. What’s important to us at Stanford HAI is the multi-stakeholder approach. How is this technology fundamentally impacting everyone’s lives? When we think about how to develop this technology, how to apply it, or how to engage in policy research and policy advisory, we need to take this inclusive, multi-stakeholder and multidisciplinary approach. I’ll give you an example. At Stanford, in my lab, part of our research is healthcare, especially using smart sensors to help doctors monitor patient safety. For example, fall risk in a patient room, whether it’s in the ICU or your home, is a big concern; falls are extremely painful when they happen, and they cost lives. There is a good cause to use this technology. But even with a good cause, we need to involve multi-stakeholder and multidisciplinary methods to understand what the other consequences are. For example, privacy, data fairness, and the way we communicate the use of this technology are all very important.

The members of our team, myself included, are trained as computer scientists. I do not pretend I have the answers. So our team includes bioethicists, computer security experts, law scholars, doctors, nurses, and patients. From designing the AI algorithm at the get-go all the way to co-researching it in the hospital with doctors, we wanted to make sure the multi-stakeholder and multidisciplinary approach is applied at every point of the way, so that inclusive values and ethical concerns apply to everybody.

I appreciate you bringing that up: the idea of a team of multiple stakeholders involved in both the decision-making and the response to unforeseen consequences, which is what happens when you’re at the edge of discovery; there’s unforeseen opportunity and risk. So now I get to ask the question the 12-year-old version of me would ask if I were talking to you as a kid. I grew up on movies where AI was the villain. They’re still being churned out every single year; there must be half a dozen of them, where artificial intelligence is unleashed and, for whatever reason, is always mad at us and always wants to get rid of us. Hollywood needs a bad guy, and artificial intelligence has been that bad guy for the “Terminator” series and “The Matrix.” It just goes on and on. The 12-year-old kid in me wants to ask, ‘Is that possible?’ Is it possible that generalized artificial intelligence could escape and turn us all into paper clips?

I love that question. I have kids at home not far from being 12-year-olds, so we talk about these things. First of all, it’s a fantastic question. In defense of the movie industry, there are a couple of movies where AI is not the villain. The one I chose for my kids to watch is “Big Hero 6.” It features a healthcare robot, which made me very happy. Another one I grew up watching is “Atom,” the Japanese robot known in English as Astro Boy. Atom was a benevolent robot boy. I was inspired by that.

Glad to hear it.

In all seriousness, I think it’s fair that we’re concerned about this. But the concern is not that the technology is so advanced that AI, or what people call artificial general intelligence, will turn into machine overlords. That’s not the concern I have. It’s a concern, at the end of the day, about people. Human civilization is a journey that never stopped innovating, from the day we discovered fire, turned branches into tools, and sharpened stones to cut. We are still innovating. But every time we innovate, the heroes and the villains are within ourselves. How we build tools, how we use them with ourselves, and how we use them with each other is the core of the question.

So I repeatedly say to anyone I come across: let’s stop using AI as the subject of the sentence, ‘AI will do this, AI will do that.’ Let’s use humans as the subject, because the responsibility is on us. The responsibility is on the makers of the technology. It’s on the practitioners of the technology. It’s on the business leaders. It’s on the lawmakers. It’s on civil society and Hollywood. The movie industry is a lens on ourselves. That concern about how technology is developed and used is an inner cry of who we are and what we want to be. I think that’s a really important question. It’s a responsibility for all of us.

It makes me reflect on two things. One is the conversation I had earlier with Sundar Pichai. The other is the hot water that Facebook has been in recently, and I guess I should say over the past few years. When pressed, both of those companies, which I admire greatly, both the people there and their achievements, have used AI as a way to say, ‘Look, we’re aware of the problem, and our best AI is on it.’ Right? Content moderation, for example, is something Facebook has leaned heavily on AI for: ‘Look, we’ve got great AI, it’s getting better every day, and we’re going to solve this problem.’ When I asked Sundar whether we can preserve privacy in an advertising setting, where profiling, at least in the eyes of many, has gotten a bit out of hand, he said, ‘I think we can engineer our way out of this problem.’

If you were to take what you just said and say it’s incumbent on human beings, not AI, to solve that problem, it puts the responsibility and the authority back, I think, where they belong. So I hope that your words resonate out there.

Regarding the debate that we’ve been having over the last two days on the policy response: to what extent do you, or your lab, get involved in that policy conversation? I see that you were appointed to the National Artificial Intelligence Research Resource Task Force. That sounds very official. What is it?

Let me answer this immediate question first; I do want to get back to your comments about Google, Facebook, and Silicon Valley tech. We do believe at Stanford HAI that it is so important to engage with policymakers. Not that we want to be directly making laws; it’s that our scholarship, our interdisciplinary expertise, and our ability to build a platform to engage policymakers with multiple stakeholders is an important role we can play. So one of the efforts we made last year was recognizing the dire need to rejuvenate America’s ecosystem for basic science, innovation, and technology. As much as we hear about advances from big tech companies, only a handful of big tech companies today are dominating AI technology within their walls. The resources, the data, and the talent are all being centralized into a handful of companies. There’s nothing wrong with their ambition and practice. But as a nation, especially a nation that’s rooted in democracy, human rights, and human values, we need to make sure America continues to innovate and continues to educate the next generations of technologists and entrepreneurs to lead. And to do that, we need a healthy ecosystem, including our basic science research and education. The National AI Research Resource Task Force is tasked with partnering with the federal government and industry to come up with ideas on how we can rejuvenate this ecosystem by funneling resources into the public sector, into the education and research sectors of this important field.

This connects to your earlier comment about the big companies. I do advocate that the big companies not rest their responsibility on code and algorithms. I think, at the end of the day, it takes human leadership and human responsibility. HAI is also trying to help by providing a platform. We have corporate partnership programs at HAI where, through our forum, not only can we have open exchanges about the latest technology, but, even more importantly, we build a platform where industry leaders and practitioners, policymakers, civil society, scholars and experts from Stanford, and students can have very important and critical conversations. AI is such an important technology, one that will impact our nation and our world, that we need this kind of human-centered discourse, practice, and effort.

I can’t tell you how pleased I am, both to hear your story and to see your passion and commitment to the kinds of conversations and teamwork that will allow us to take this extraordinary technology and use it for the good of society. Dr. Li, thank you so much for your time and for joining us at Signal. I really enjoyed the conversation.

Thank you, John. Thank you.