Using human-like pronouns like him or her when describing artificial intelligence transfers the responsibility for AI’s behavior away from people, says Rumman Chowdhury. That’s a “slippery slope of language,” she says. Just as a car is not responsible for a highway pile-up, AI is not responsible for the results it delivers, she notes.

The Responsible AI Fellow at Harvard University’s Berkman Klein Center certainly believes that technology, like AI, holds tremendous potential for humanity. But companies looking to adopt AI would do well to consider how these advances will impact their business first.

“You have to ask the same smart questions that you’ve probably asked of any other vendor or technology,” she says. “How is this helping customers? What am I building here?”

You can hear more from this fascinating conversation between Chowdhury and John Battelle during the Signal Summit on July 12. And you can register here for the event.

You can also listen to Chowdhury in the video below and read our lightly-edited transcript.

TRANSCRIPT

John Battelle
Welcome to another Signal Conversation. I’m very excited today to be joined by Rumman Chowdhury, who I’ve known for many years now, back to her Accenture days. But now she is the founder and chief scientist of Parity Consulting, and a renowned expert on artificial intelligence and its implications. Welcome to Signal.

Rumman Chowdhury
Thank you for having me.

Let’s get right into it. You have such a varied background, and you’ve been involved in the field of artificial intelligence for a very long time. But in the last year, it has broken into the mainstream dialogue. Tell me what brought you to the field.

I was hired by Accenture to be their first Global Lead for Responsible AI, around 2017. Nobody really knew what that meant. It presented an exciting opportunity to work globally to tackle a problem that I thought was really interesting: how can technology serve humanity? But also to do it in a really applied sense, right? So this isn’t about philosophical musings, this is about creating good products that benefit people. How could anybody say no to that opportunity?

Absolutely. You have a background that mixes two elements which don’t always mix well in our culture. One is political science, and the other is data science. What drew you to be interested in the implications of policy around artificial intelligence?

I see them as being very related fields. In political science I was what is known as a quant, someone who does statistical modeling. The thing about political science that appealed to me when I was an undergrad, and this was before data science was even a field, was to take these mathematical equations and use them to extract insights about people and human behavior. Actually, data science is not very different; we do the same exact thing. But in traditional companies it often relates to good product development. So why are people buying things when they buy things? What motivates them to behave the way they do, whether online or in a supermarket? Frankly, I just find human behavior very fascinating. The idea of modeling is very fascinating. That’s where it’s very related. But now it’s sort of taken, as you mentioned, a different turn, where there’s so much policy being written about artificial intelligence. It’s funny, I find myself tapping back into the classes I took on democracy, statecraft, democratic discourse, and that’s become a conversation that I never thought we would be having in artificial intelligence.

It seems very present now, and it seems to have been building, right, that the digital world was getting more scrutiny well before the outbreak of large language models and ChatGPT. You seem to be having a moment; I know you’re incredibly busy. But tell me about the work that you’re doing at Harvard, because that sounds very interesting.

Yes, I am a Responsible AI Fellow at the Berkman Klein Center at Harvard University, which has given me a lot of leeway to meet really great minds and talk to some amazing people with a wide range of varied interests. For example, a lot of legal experts who are thinking about protections for intellectual property or protections for the hacker community. I’m also doing consulting work, advising governments and companies on responsible AI. And I’ve kicked off a small nonprofit called Humane Intelligence. We’re a crowdsourcing platform to help companies make better decisions about how they are shaping their AI products by talking to regular people about it.

I’m glad you gave me that bridge to this next question, because this is a business audience. They probably have a lot more questions than answers when it comes to how to apply AI. There’s also a sense that maybe we are at the peak, or maybe not even at the peak, of something of a hype cycle around the capabilities of AI. And large companies in particular are cautious. What would be your advice about how to think about all of these tools if you were being given a directive right now to figure this AI thing out and do something with it in a corporate setting?

What a great question. When I joined Accenture in 2017, we were in the midst of the first AI hype bubble. So I actually had to tackle this with some of the biggest companies in the world, and it really boiled down to the same thing over and over: What does your customer want?

I think often in this hype cycle of impressive and flashy-sounding technologies, some people may forget that you are actually just building things for your consumer, and your objective is to solve problems for your customer. While facial recognition at checkout might be an interesting thing to you as a company, that may not be what your customers want. Their pain point may actually be somewhere totally different. It really just boils down to how technology can solve your problems, rather than trying to figure out how to fit problems into a technological mold.

Often, the way technology is imposed upon us is that we’re running around trying to fit a square peg into a round hole. So your problem is customer engagement? Well, here’s this AI chatbot. Is that solving customer engagement for you? I don’t know. Just because I’ve created an AI chatbot does not actually mean people like interacting with it, or that it’s producing answers that make sense or are helpful. It’s really good to do the diligence and the research into whether or not the product is a good fit. Frankly, I’m not telling this audience anything they don’t already know and haven’t successfully done themselves. All I’m saying is, there’s actually nothing new about AI in terms of using it successfully for good business outcomes.

You’ve written extensively about data science, political science, and the combination of the two, and recently you wrote a piece featured in The New York Times, along with a number of other interesting people, an op-ed about the impact of large language models. In one of your pieces, you used a phrase that I love and that I want you to unpack and explain for our audience: “moral outsourcing.” What is that as it relates to this field?

Yes, I did a TEDx talk some years ago where I coined this phrase. It’s a phenomenon that I’ve seen over and over in tech. It is when technology is anthropomorphized. In other words, we attribute humanity to it, even just in our language. We use pronouns to refer to artificial intelligence as if it makes decisions or thinks or feels, and none of the above is true. We do it in a way that we’ve never done with any other technology.

What happens when we do that is there’s a slippery slope of language. Once I’ve attributed personhood in my language to this artificial intelligence, well, then the engineers’ decisions and the product makers’ decisions are somehow not valid anymore, or they’re not included when we think about who is responsible and who is liable. So moral outsourcing is literally when companies intentionally use this language to anthropomorphize AI and make it seem like it’s alive and making decisions, in order to push off responsibility for decision-making around ethical and responsible use from themselves onto this alleged machine.

A really perfect example actually happened in regard to facial recognition technology, where even the engineers who were building it said, “Well, I don’t really know what it does, it’s making these decisions.” That’s actually deeply untrue. People program this technology to optimize for certain outcomes. If bad things happen, if adverse events happen as a result of it, it is not the AI’s fault, any more than it is a car’s fault if it crashes into a wall. It is the fault of the person building it or driving it. That’s how we should be thinking about who holds responsibility for the outcomes of artificial intelligence.

That makes a lot of sense, and I wish that we had that sort of built into our understanding of technology 30 years ago, because it applies to much more than just AI. But this particular moment, where it seems like the machine has intelligence because it’s talking back to us and it’s basically passing a Turing test, is somewhat fraught. What would your advice be to business professionals who, you know, are not technologists or policymakers, but who feel they’ll be left behind if they don’t have smart ideas about how to apply this in a day-to-day context? Do you have any advice either about things to look out for, dangers to be aware of, or opportunities or ways to discover opportunities inside their business?

Just to start off, I am very gung ho on using technology for positive outcomes. I think that should hopefully go without saying. I wouldn’t be in this field if I didn’t think technology held immense opportunity for people. Back to your hype cycle point, we are in a bit of a hype cycle. So the question is, how do we parse out the hype? How do we start building meaningful things? Also, I think hand in hand with that is building out a responsible AI practice, or at least a reflex, in your organization. What I mean by reflex is teaching your organization and yourselves the level of tech-critical behavior needed to interrogate what’s happening. That doesn’t mean you have to have an AI expert, actually. You just have to ask the same smart questions that you’d probably ask of any other vendor or technology. How is this serving my customers? What am I building here?

My general rule of thumb is it’s okay to be six to eight months behind in adopting the latest technology. Think of when a brand-new type of tech launches into the market. You don’t usually buy the 1.0 version; at least I don’t. I wait until all the bugs are ironed out for the people who were the early testers, and then I buy the 2.0 version. I think we are in that phase with generative AI and language models.

One thing I will add to your point about how convincingly they talk: it’s really interesting that generative AI takes many forms. You can generate videos, you can generate photos, audio, and yet people have just really latched on to the writing. It’s curious to parse through, as a social scientist, why people are so into language models. I think it’s because it mimics the most human behavior: we like to talk, and it looks like a text interface. So it’s really fascinating to me. Yes, generative AI is a huge technological leap forward, but actually, the biggest leap forward was in user accessibility. GPT-4 is the fourth of these; there was a three, there was a two, and there was a one. So why did the world not explode with GPT-3? Because previously, people had to be able to program to interact with it. So the big revolution has been in accessibility. I think that same mindset, as it applies to anybody sitting in this room listening to this talk about how they can use technology, is actually a conversation about understanding and accessibility. How do you get it into the hands of people who will do great things with it? And what are the interesting use cases you can use it for?

Speaking of the use cases, and one of the constraints on them: you’ve just elegantly laid out how the interface has changed to the point where now anyone and everyone can start to understand it, employ it, interact with it. But there is also a regulatory regime, as we discussed at the top of this conversation, that is starting to form, in particular in the EU. For a global company like P&G, you’ve got to think very deeply about what you might do in any given market, because when there’s a big market like the EU that already has regulation, you don’t necessarily want to build something that can’t transport between different regions. So can you tell us what the state of that regulatory framework is, and what your point of view is about it?

You touched on a couple of things that are so important in the current market landscape. First is interoperability. We’ve already seen this with privacy law: the lack of interoperability, within the US or across nations, of different privacy laws, with GDPR being the most dominant, and yet still constantly running into issues and walls on implementation as it relates to new technology. Very specifically, the new Twitter-style messaging app from Meta, called Threads, is actually not being launched in the EU, because they were just literally unable to resolve data privacy protections. It’s very interesting to see that battle play out in real time.

I actually worked with the folks who are doing the Digital Services Act. I know the EU AI Act has gotten a lot of airtime, but I actually find the Digital Services Act very fascinating because they are starting to regulate, as you mentioned, John, as it relates to responsible and ethical use, which is very different from asking, “Is your data stored correctly?” Now they’re asking things like, are you causing mental harm or mental anguish, or somehow impacting young people? Are you violating human rights? Are you impeding the course of elections? Most likely, folks at P&G will not have to worry about these issues. But these are the kinds of issues that technology does impact. The kinds of issues P&G will face will be around disparate impact, responsible use, privacy, PII (personally identifiable information), and how that is being mediated through this technology.

I’m actually really impressed by the kind of minds I’m seeing behind this all over the world, not just in the EU, but in places like Singapore, and increasingly the US. Although the US was a little bit behind in catching up with that narrative, there are a lot of smart people working on it. I think the thing that is needed at the moment is interoperability, and that conversation is already starting. I wrote a Wired op-ed about the need for global governance and what that might look like, and there have actually been next steps in thinking through what a global governance body might be and what sort of actions it might take.

So if you could wave a magic wand, and please use simple language for us lay people here, and say these regulations should be adopted worldwide, where would you start? What would be the two or three main things you’d like to see codified by government agencies?

As a data scientist, I like to make rubrics for things, or ways of tackling a problem that are repeatable. The question I’ve been asking myself as it relates to global governance is, “What is the climate change of AI?” I am actually of the opinion that a diversity of opinions and perspectives is quite good. We shouldn’t constantly try to homogenize on one way of doing things; those often tend to be very brittle. But there are just some problems that are going to be introduced by technology that are so big, too big for a company to solve and too big for a country to solve.

The best analogy really is climate change. Asking “What is the climate change of AI?” asks us to think about: what is the problem that is so very big? For example, election misinformation is one of them. Another one might be terrorist violence and extremist content, because these are not centralized to one particular country. There are malicious actors living all over the world who are radicalizing people in other parts of the world. All of this is mediated through technology platforms that are global in nature. Well, then it doesn’t make sense for the US to pass a law or for Facebook to have a regulation if it’s not going to actually impact other platforms and other countries. So the way I think about it, and I’ve given a few examples of things that I think are global in nature, is we just have to answer the question, “What is the climate change of AI?”

Sounds like a very large question, as yet to be completely answered, but I’m really pleased that you are on the case. Dr. Rumman Chowdhury, thank you so much for joining us here for a Signal Conversation, and I really look forward to following your work going forward.

Thank you, John.