In 2020, former Twitter CEO Jack Dorsey, who remains one of the most fascinating characters in tech, joined us on the Signal stage for an exclusive interview with then-Fast Company editor-in-chief Stephanie Mehta.

Mehta’s interview with Dorsey two years ago – in which Dorsey talks about content moderation, security, and more – was fascinating at the time. Watch the full conversation or scroll down for the lightly edited transcript below. Enjoy!

John Battelle

We’ve come to our final conversation of PG Signal 2020. And it’s a good one. I want to welcome both the interviewer, Stephanie Mehta, the editor in chief of Fast Company, and Jack Dorsey, the CEO of Twitter, a service I’ve used since you launched it, Jack, and I look forward to this conversation. It’s a timely one. Take it away, Stephanie and Jack.

Stephanie Mehta

Thanks so much, John.  Jack, thanks so much for being with us here today.

Yesterday was, to use your own words, a tough day for Twitter. Hackers launched a coordinated attack to gain access to Twitter accounts for several companies and some high-profile users. They used those hacked accounts to try to drive traffic to a cryptocurrency scam. What’s notable is that you have said that the attackers gained access to internal Twitter systems and tools. The last official updates from Twitter came about 14 hours ago. Can you provide any more information about the investigation and the steps you’re taking?

Jack Dorsey

Thank you, Stephanie and thank you, John. And I hope you all are safe and healthy. Thank you for the time.

Yeah, it was a really tough and terrible day for us. It’s important to remember that security is not an endpoint. It’s something that constantly evolves. We always have to be steps ahead of adversarial parties. We noticed some of this activity early on. We took very quick steps to shut it down and contain it, and had to do some things like blocking the tweeting of anyone with a blue badge, because those were the vectors of attack for some time, just to maintain the security and make sure that people were not falling victim to the scam.

But as we said, we do believe that this is through social engineering of our own employees. That is something that, no matter how good the technology is, is always a weak link. We can definitely build around it and are looking at the steps to take to do that. But I think the most important thing here with security, and just in general, is transparency. We’re committed to making sure that in all of our investigations, we share everything that we find, and that we show the steps we’re taking to mitigate issues in the future.

The team has been working around the clock to figure out exactly what happened, how it happened, and how we can prevent this issue and related issues in the future. But this is a challenge for our entire industry, especially as we consider these centralized platforms and what they mean and how they’re used. We take that responsibility very seriously. So it’s really now a function of wrapping up our investigation and then sharing what we learned, working with our partners to make sure that they’re informed, and working with our peers so that we can prevent similar attacks on other services as well.

Is law enforcement involved in the investigation? Are you going to pass on the investigation to any legal authorities?

Whenever anything like this happens around the world, we work with local law enforcement. So certainly, yes. It’s something that we have good channels and good partnership around. But right now, it’s just a matter of finding all the evidence and investigating and making sure that we can provide a clear case of exactly what happened and the tick-tock of what went down.

You talked a little bit about transparency with your partners. You are talking to a roomful of marketers who rely on being confident in the service. What can you tell them to assure them that Twitter is a safe space for their brands?

I think the biggest focus for us has to be trust and how we earn trust. Obviously, days like yesterday and situations like this diminish and erode trust. There are multiple ways of building trust in my mind. One is through transparency. You’ve got to be really transparent, and own anything we made mistakes around and what we find. Reliability and dependability is another aspect of earning trust: making sure that when we share the steps that we’re going to take, we follow through with them, and that we continue to be transparent around how those steps are working. And then making sure that we have a constant conversation with all the parties that use our service, from advertisers to individuals to institutions like governments, to make sure that they understand what we saw and how we’re addressing it, how we’re fixing it, ultimately.

But I think the more open we can be, the more we can share, the more we get better, and hopefully the more trust that we earn. But I think with anything security-related or privacy-related, it’s so important to share as much as we can and to own up to any mistakes that we made that would cause this issue to arise in the first place.

A lot of commentators who have looked at this breach – and I think this next question gets to some of the bigger issues around Twitter that we wanted to talk about today – looked at the disabling of verified users. You talked about the blue check users who had to be taken offline for a while. It became really apparent just how important Twitter is as a communications channel for really important institutions, whether it’s government agencies, and for some consumers, it is their main way of getting information. It raises the question, and it’s a funny one to ask the CEO and co-founder, but is Twitter too powerful?

I think it’s the people that give it any sort of power. Our role is making sure that we’re providing value to them. They’re seeing that value. It was a hard decision to take away the ability to tweet from verified accounts. But we weighed that against people falling prey to this cryptocurrency scam, and wanted to make sure that we were minimizing and mitigating that damage, while we also did the investigation in terms of what was going on. Security is really a function of understanding the risks and making the right trade-offs, and then acting as quickly as possible and being agile as we learn more.

One of the most interesting things about Twitter is how much of the value was created by the people using it. A lot of what has made Twitter unique and special and powerful was actually not created by the company, but by the people using it every single day. They continue to show us every day why it’s important. So as long as we are relevant, as long as we continue to show why this is important and why it’s useful, and also make sure that people feel safe and can trust the safety of the system, that power that they wield on the service continues to grow. But it’s really just a function of whether they find value or not.

I want to talk a little bit about Twitter in the age of misinformation. The company has been a leader in navigating some aspects of mitigating misinformation. This was probably most prominently displayed in May when you labeled a tweet from the President of the United States on the grounds that it violated your policy around glorifying violence. I think people would be very interested to understand the processes and the conversations that went on inside Twitter. The policy against glorifying violence had been in place for some time before, but there was clearly a concerted effort to label the President’s tweets. How did that conversation go down inside Twitter?

Misinformation is a massive problem, and not something that anyone is going to fix, because misinformation is a category that includes lying, and we’ve been lying since we learned how to communicate with each other. So to make the problem a little bit smaller, we’re focusing on misleading information, information that has an intention of changing an outcome, and we wanted to start with the most important aspects of misleading information first in this day and age: voter suppression, health risks – specifically around COVID – and manipulated media, because the tools to create fake images and fake videos are moving so much faster than our ability to detect them. This is a challenge for the whole industry. So it’s important that we scoped the problem to the highest-impact things and focus our efforts there.

In regard to the tweets by the President, specifically we did label misleading information around the capacity to get a ballot in particular states. So there’s some tweets that suggested that ballots were being mailed out to every citizen of California, which was not factual, and would have people believe that I don’t have to register to vote, a ballot would just show up at my door, when in fact, you did need to register to vote to get a mail-in absentee ballot. That was a clear case of where we could provide more context. 

In terms of our terms of service and violations, we hold every account to the same terms of service. However, we do note that there are certain accounts like global leaders, and this is leaders around the world, that are saying things in the public interest that are being reported widely, that have conversations that are broad. And we made a decision for these tweets, even if they violate our terms of service, to keep them up, but to limit their spread, limit the ability to reply to them, and also label them with what terms in our terms of service they violated. We did this recently with the shooting and looting tweet. This places an interstitial, where you have to click through to actually see the content. But it also limits the ability to retweet, and limits the ability to reply, so that there’s not further spread of this. People can still find and talk about the tweet that we labeled and found to go against our terms of service.

For any tweet that we get reports around or that we see, there’s a team working to figure out if it did, in fact, violate our terms of service, and what to do about it. In terms of a global public leader, we have this interstitial tool, which we created about a year and a half ago, and we’ve used it around the world. This was the first time we used it on the U.S. president. But we have used it around the world for other global leaders as well.

You used the word ‘we’ a lot in that conversation. Who’s in that brain trust? Is it a combination of people from the legal world? Is it people who understand the values of Twitter as you’re having these conversations? Who gets pulled in when you’re trying to make some of these tough calls, even when it came to building the interstitial tool? Clearly, there were human inputs that went into building that tool?

We have a team that is made up of folks with legal backgrounds; some come from civil society, some from policy and government. This is a global team. We’re not just focused on the U.S.; we have to consider these actions all around the world. For any tweet that is reported, especially when it comes from a very high-profile account, the team has a debate and discussion based on the context of what the tweet signifies, and then makes the call. And folks like me ask questions. But most of the time, a lot of those decisions are made purely within that team, without going out of the bounds of that team. All of our policy, all of our product is shown to civil society, NGOs, policymakers, folks like the EFF. So, we have a group of constituencies that we can show things to and get their opinion and feedback on and then make decisions around. That group is fairly broad in terms of its perspective, by design, so that we are looking at every perspective to make the right decisions. But ultimately the decisions come down to the small team.

We also realize that we make mistakes and when we do we admit them, own up to them and make them right. Any action like this also has to come with a robust appeals process. We do have an appeals process that any account can go through, which can overturn our decision if we find that we’re in error.

Recently, a number of large advertisers have boycotted Facebook over the way that site treats hate speech, and the boycott has spread to other platforms, including Twitter. How are you communicating or framing the differences between your site and everybody else particularly for your marketers and shareholders? How are you underscoring how different Twitter is from the way other platforms have been dealing with the issues of hate speech?

Ideally we want to take on a ‘show, don’t tell’ mindset and show through our actions. Early last year, we realized that there was a pretty significant vector of attack in political ads. We made the decision last year to ban political ads from our system. We believe fundamentally that political reach should be earned, not paid for. That is the principle we used to make that decision. One of the things driving that decision was the amount of misleading information that we were seeing in political ads. This is a global policy. This is, again, not just in the U.S., not just for this particular election coming up, but worldwide. Misleading information continues to be a great challenge. When you can pay to spread it, that really changes the playing field dramatically. We just didn’t feel that was fair. So I think that stance is something that sets us apart, it’s something that we’re staying true to, and it’ll be a stance that we take beyond this election as well, just from that fundamental principle that political reach should be earned and not bought.

In terms of broader health, and how we’re addressing hate on our service, and harassment and abuse, this is our key priority as a service and as a company. We have made some pretty important strides. Just two years ago, the only way that we would understand if a tweet or an account was harassing or abusive was if someone reported it, which is just fundamentally not fair, because it’s putting all the burden on the victim. So now, over 51 percent of the tweets that violate our terms of service are recognized by a machine learning algorithm, and then can be brought in very quickly to a human agent for review. So we’ve taken away some of the burden of reporting. We want to get that number higher, all the way up to 90+ percent, so that people can experience Twitter in a way that they don’t have to do work for it to be safe. The second thing that is a huge focus for us is giving people more controls, making sure that I have a lot more control over my space. Twitter’s unique in that anyone can come in to any of my tweets and reply with whatever they want. We launched conversation controls recently, so that I can receive replies from anyone, only the people I follow, or only the people I mentioned. Giving people more product controls directly means that we’re not so reliant upon policy and enforcement, which ultimately is the long-term right answer. It’s one that we’re working the hardest on right now. We started this with individuals and expanded it to advertisers.

And then finally, just keeping an open conversation with our customers, and that’s our individuals using the service every day, but also our advertisers. I’ve spoken with the leads, I’ve spoken with a number of our advertisers about their concerns around the service and the broader industry, what they desire in terms of features or policy changes. It’s up to us to take all that feedback and integrate it into a decision that serves all and continues to make the platform healthier and stronger. This is our number one focus and, like I said earlier with security, it’s not an area where we’re going to reach perfection. It’s something that has to constantly evolve and be iterated upon, based on everything that we learn. The more agile we are, the more we can experiment, and the more feedback we get in terms of whether we’re doing the right thing or the wrong thing. That’s what makes it stronger and healthier ultimately.

Do you feel like Twitter’s policies reflect your personal values and your professional values in any way? Is what the user experiences and the policies that you put in place a reflection of the way you look at the world?

In basing our policies, we actually wanted a stable pillar that we could point to and that spoke to a global population, not just the context of one population, one country, or one political perspective. So we anchored our policies to the U.N. [Universal] Declaration of Human Rights. This document actually has a number of things that speak to human rights, but also speech. So we wanted to base them on something that was universal, that was thorough and comprehensive, and then, as the world evolves, continue to evolve that as well. A lot of our policy evolutions are really just a function of us looking at how people are using the service, and seeing issues or gaps in what we currently have. Our employees, who bring different perspectives to the company and their own experience on the service, help tremendously. The NGOs and civil society that we talk to help inform that policy as well. It’s less about any one individual or group of individuals. It’s truly trying to reach more of a global standard that we can all point to, and people can see transparently how these two things connect and why this policy exists in the first place.

I will say we haven’t done a great job at communicating both our policy and enforcement decisions, and also how we crafted them. That is a lot of our work. This is again one of those industry issues where, for every service you sign up for, you go through an agreement to its Terms of Service that absolutely no one reads, because it’s just so obtuse and so long, and so not customer-friendly at all. If there’s one area of focus for us, as an industry going forward, it should be that Terms of Service. Because I don’t believe it’s necessarily positioned to protect the people using the service. The lens has always been around protecting the company and the service. We really need to look at what that means, whether that’s serving the right function, and simplify it dramatically, so that I could read it, I could understand it, and I could agree to it based on everything that I saw. That’s definitely been a focus of mine in terms of just looking at all the things that people go through to sign up and use our service, and what they’re agreeing to, what they read, what they don’t, and why they might be confused when we do take action. That’s on us, because they likely did not read through all those terms, and are likely not aware of the rules of the service in the first place.

That would be a huge issue for consumers. Do you have a timeframe for when you anticipate being able to issue a new and improved Terms of Service?

We’re probably not going to do it as one go, but as a constant iteration. Simplify the language, look for terms that are unnecessary, and look for areas that are not customer-focused, and instead are company-focused. But there are a lot of issues in those investigations as well. We just have to do the work and constantly iterate. I think if we try to do too much at once and just unveil it, it’s ripe for failure. If we instead take a path of making small changes and doing them as quickly as possible, and then at some point pointing to the progression and all the changes that we made in order to get to this place, I think it achieves much more, because we get to learn along the way instead of just doing it behind the scenes and unveiling this grand thing.

In our few minutes remaining, Jack, I wanted to ask you a little bit about Twitter’s interactions with the civil rights movement. I know you said that your policies are reflective of sort of global principles and global points of view. But Twitter has, as a company, been proactive in meeting with civil rights leaders. You personally have been very proactive. Six years ago, you went to Ferguson, Missouri, and you met with Black Lives Matter’s leadership long before the movement had collected the kind of credibility and resonance it has today. Clearly, you have a strong point of view around civil rights. The company recognized Juneteenth as a holiday long before anybody else did. How does your worldview affect Twitter as a company?

I have a fundamental belief that one of the powers of Twitter is that it can shine a light on things that do not have enough attention, have not been acknowledged, have not been addressed. We saw protests and activists all around the world from the first year of Twitter’s existence, and they were fairly remote early on. They were in Iran, they were in the Middle East, in Egypt, the Arab Spring. Then suddenly, six years ago, they came to my hometown of St. Louis, Missouri, and Ferguson, and I went back not to meet the leaders of the movement, but to protest with them. My parents and I were on the streets of West Florissant every day for two weeks. It was an incredible experience to see my city rise up against injustices that have been plaguing it for centuries, and to see how the tool that we created was being used, all of its power, but all of its failures and gaps as well.

A big part of what we do has always been giving voice or providing a platform for voices that otherwise just haven’t been amplified. A lot of the power of the service that you spoke to earlier is because anyone can come on to the service and build a following by expressing an idea that resonates with people, by being in the right place at the right time, by sharing something that needs to be addressed. It resonates deeply with people, such that more people join in the conversation. I saw that firsthand in Ferguson; it’s where I met DeRay Mckesson, who is extremely vocal in terms of Black Lives Matter and all the work he’s doing around police violence in this country, being data-driven and data-led. He was a school teacher in Minnesota who came to Ferguson just like I did, and was using the service just to share what was happening on the ground and what needed to be changed. He had zero followers before that moment, and through that event has grown so much of a voice. So that’s less of a policy question and more of: are we building a service that enables people to speak in the way they want to, and is it sharing a conversation that more people can participate in? That is something we’ve always treasured and always valued and always want to build for.

You have indeed built a very powerful platform and we’ll be watching to see the outcome of this particular investigation. We’re glad to hear you talk about transparency and we look forward to hearing more. Jack Dorsey, thank you so much for your time today.