My Interview with Michael Shellenberger
The Long Version: on the Twitter Files, content moderation, unaccountable private power, and more
This is an interview that Michael Shellenberger conducted with me in January of 2023. This is the long version, and some parts are still in raw notes form - the abridged/edited version can be found here. Shellenberger is the questioner, and I am answering.
What did you think of the Twitter Files?
Some of what’s come up is interesting, though I’ve had reservations about the fact that the stories are being told via anecdotes and screenshots - it’s pretty easy to cherry-pick or to potentially receive incomplete data, and there have been a few examples where lack of subject matter familiarity has led to some interpretations that seemed like overreach to me. We talk about a “lack of trust in media”, but that really means that we have partisan divisions around trust, where different audiences trust different media - and that is showing up in the discussion of the Twitter Files as well. There was, however, one particular Twitter Files installment I paid a lot of attention to: Lee Fang’s story [“Twitter Aided the Pentagon in its Covert Online Propaganda Campaign”]. [Fang found that Twitter approved U.S. military accounts to shape public opinion in Iraq, Yemen, Syria, and other nations, in violation of Twitter’s own policies]. My team was one of the teams that did that initial report on the Pentagon activity. We could see what the accounts were doing, but we were writing very conservatively because we didn’t want to overstate in any way what it was. We can describe something, we can make educated guesses about what it is or who is behind it, but absent communication or confirmation from the platforms we can’t make a concrete attribution. This is an example of where I think the files reporting added additional context and understanding we wouldn’t have had otherwise.
What about the FBI’s relationship to Twitter?
On the FBI front, it looked like there was just a lot of over-reporting. “Look at this! Look at this! Look at this!” A lot of, “This might be a thing, go look at it,” as opposed to, “We have high confidence in this small set being significant. Go look at it.” The fact that it's not sent with some sort of confidence threshold is, I think, something that needs to change going forward.
MS: What do you think of former CIA media analyst Martin Gurri’s view that existing and former FBI officials likely conspired to “pre-bunk” the Hunter Biden laptop?
I'm a long-time fan of Martin's work, and we are friendly despite occasional disagreements. I admire his thinking. In this case, I do have a hard time believing that there was a "church of identity" driving the FBI to deliberately mislead Twitter. I would want more evidence. Did these two offices of the FBI (the team with the laptop, the team focused on state actor election interference) talk to each other? Assuming they operated under "need to know" or compartmentalization principles, I would be surprised if the entire institution was informed about who held that laptop. But see, I'm also speculating. Someone from the FBI itself would be the person to answer that.
MS: Does that mean that you support Congressional investigations, a DoJ Special Prosecutor, or something else to get to the bottom of what exactly was going on there?
Congress certainly has a role to play in determining what kind of interactions on content issues should be allowed by government officials, including Congressional members themselves. I’m not a lawyer, but I believe a special prosecutor is only appointed when there is a credible accusation that a crime has been committed, and I have not seen any evidence of that.
MS: What do you make of Gurri’s point that we shouldn’t be so paranoid about foreign actors engaged in “misinformation,” and that he at the CIA used to help oversee translations of foreign Soviet propaganda without fear it would harm the American people or turn everybody into communists?
Martin made an interesting point about harm, noting that "the federal government used to translate and provide propaganda from the other side to the public without fear of what would happen," and it's a very valid one. "Harm" is undertheorized for sure, which leaves it prone to scope creep, and we need to be clearer about what we mean when we use that term.
However, we've also moved pretty far away from our historical understanding of state actor activity and how to respond to it. As Martin alludes to, for many decades the USG not only provided the material but very deliberately worked to educate the public about what was happening, precisely because it was happening. Now, partisan distrust means that people don’t trust what the government says if it’s not their guys in power.
If I'm not mistaken, what Martin is referring to is translated internal media about the US that adversary nation states wrote for their own publics - generally speaking, we allow a lot of that content to exist on social media. In 2020, we began to see platforms label the state-linked accounts that produced or promoted it, to help the public understand what they were seeing - this created great consternation among the accounts that were labeled, ironically, but it's pretty in line with the Foreign Agents Registration Act and bedrock values that say the American public should be able to see this content but should also understand its origins...principles I agree with.
But that kind of attributable content is distinct from "black propaganda" - covert propaganda, often actively misattributed to someone else via front media outlets - that targeted the US public. (The USG, of course, had its own share of front media in various eras.) That kind of active misattribution and deliberate manipulation would be more akin to the state actor networks and front media accounts that Twitter, Facebook, and other platforms take down today - because they are actively made to look like other things, their purpose is to deceive the public.
So, state actor propaganda efforts across the white-to-black attributability spectrum have evolved for the technology of the time, and there are different ways we should handle them. However, today the rather severe divide in trust along partisan lines means that the government weighing in on these accounts and trying to explain this sort of manipulation to the public would, in my opinion, not be trusted by half the public at any given moment in time.
MS: Let’s step back. What’s the problem, in your view, with social media today?
RD: If I had to summarize it I would say, simply, unaccountable private power. In the last 20 years, we've moved into an ecosystem in which our conversations about our democracy and society, the ways we come to consensus, ways that people get information, are all increasingly happening on platforms that use particular curation mechanisms and have particular business incentives. Those incentives - for example, promoting the highest-engagement content - don’t always align with what we might have previously thought of as ways of creating productive conversations or helping people reach consensus across different perspectives. Instead, because of the series of design decisions and business considerations by platforms, we are now in an environment in which what we see is curated for us in ways that meet the needs of the business, not necessarily its users.
The people who we follow we often find because of particular nudges. People are pushed to us. Topics are pushed to us. The things that we think are trending are oftentimes more like personalized bait - a handful of people are talking about something, but the algorithm recognizes that you or I are likely to engage because we have strong feelings about it, and so we see it. One group is outraged about something other groups never even see - the nudge actually helps the trend to happen. And so we exist within these environments which are increasingly mediated by something that we don't really have a full understanding of, and no control over.
Content moderation is a huge topic, and a bunch of different things sit under its umbrella. Early on, it tended to focus a lot on various forms of online abuse: CSAM, content encouraging self-harm, bullying, things like that. Beginning around 2014 and 2015, the content moderation conversation was focused on harassment, to some extent, but also on the recognition that powerful figures were using the platforms as well. ISIS became a big topic of conversation in 2015 as they tried to blanket social media with propaganda for the “virtual caliphate”. They realized that they could use these platforms to grow an audience. They could grow amplifiers. They could find recruits. And, as you recall, there was a lot of real-world violence tied to the rise of that particular community as well.
As the “what to do about ISIS” conversation is happening, where government and platforms are struggling to find some kind of working relationship around the challenge, other state actors come into the mix with covert propaganda campaigns - particularly Russia. Unfortunately this becomes a very politicized conversation because Russian interference and the Russian “black propaganda” presence on social media, both in the form of the GRU and the Internet Research Agency, get hopelessly mixed up in the collusion conversation, and so it becomes something of a third-rail political issue. Again, none of this is new - state actors pretending to be something they're not is very old. But it is adapting for this new medium, and so the question of “what is the appropriate government response?” finds its way into the content moderation conversation. Remember that many people at the time were very wary of the idea that the social media companies, the unaccountable private power, should be the ones tasked with identifying and taking down those networks.
You also start to see domestic groups realize the power of the platforms for networked activism. And this spans the entirety of the political spectrum. Ordinary people realize that they can amass massive followings and reach many people. Sometimes they, too, use deceptive tactics…but their audiences, who do a lot of the sharing of the content, are real, and they’re expressing themselves. And so this is where the old moderation interventions developed to target state actors - which were really oriented around “who is the actor?” and “are they authentic?”, the rubric for moderating state actor campaigns like those from Russia and China - don’t apply when it’s authentic domestic speech. Ultimately, we arrive at a moderation framework that Facebook calls “Remove, Reduce, Inform” and most of the platforms use some variation on: some content that violates policy comes down, some is throttled from curation and promotion, and some gets a label or interstitial that tries to add context.
I realize I’ve just given you a very long history here. But this was the foundation for a lot of the moderation frameworks, and debates about what role platforms or government should have, that are now being discussed in the files. So now you have this interesting divide: on one hand, questions about the role of the platform as it chooses to curate and create a particular environment for its user community; on the other, what role the government should play - and in the context of real, authentic, domestic speech, the appropriate role is very, very narrow. As you are seeing in the files, at times the Trump or Biden administrations weigh in on some issues. As I understand it - and I’m not a lawyer - government officials do have their own First Amendment right to argue for or against certain types of content and how platforms should moderate it. The distinction is whether or not those comments are coercive - whether they demand the platforms do something that stifles speech.
The extra challenge with all of this is that moderation policy and algorithmic rankings and such are subject to change relatively quickly, often in response to particular behaviors that users display on platforms. Platforms say, “Here's our policy; here’s how we're gonna enforce it.” But that can change over time, in response to things that come up. Covid policies were an outgrowth of prior health misinformation policies that predated the pandemic - they evolved rather quickly in response to an emerging crisis. Sometimes there is policy overcorrection - we’ve seen this from Elon, too.
Moderation rules and content policy are also tied into business incentives. They are a way for the companies to create a particular type of environment that they think will maximize utility for their users and keep them there.
They don't wanna create a cesspool. Twitter doesn't want, or didn't want, to be 4Chan because most people don't enjoy being in that type of environment. So even if there are types of content that are in line with the First Amendment, some of the platforms choose to moderate more or less heavily in line with the kind of environment they want to create versus having a free-for-all experience.
But why so much attention by FBI to these low-follower accounts? Isn’t that excessive?
It looked from some of the Twitter Files emails that they were not thoughtful in what they sent in. Big lists, not high-confidence concerns. That said, a lot of the state actor accounts are found when they are still low-follower because the platforms take them down and then they respawn. It’s whack-a-mole. Still, just sending over lists of accounts with many false positives is bad, and you do see Twitter trying to figure out how to manage the concern that state actors are everywhere when they’re just not seeing it. They did not take those accounts down, as I understand it.
To make one other point on this issue of respawns - if accounts do not come down, they do grow followings over time. They don’t just stop trying. One of the things that we've seen over the years is what I call the “regulatory arbitrage of content moderation.” When a platform creates an environment that says, “We will go looking for inauthentic state-linked networks, and we will take them down,” you see a lot of those actors investing their time on platforms that do not say that. So, for example, a lot of the action that we’ve seen recently is happening on platforms like Telegram, which are never going to look into “black propaganda” channels, never going to work alongside researchers to say, “This network or this website is linked to state X.” So, it provides them with a safe haven to operate undisturbed. We actually put out a report recently where we talked about an inauthentic state-linked Kid Rock fan page that had grown to around 50,000 followers on one alt-platform. And two of the platforms that the account was present on did, in fact, very quietly appear to take it down after the report came out. Nobody wants their users to be manipulated. Hosting manipulation networks is potentially also a great way to get yourself regulated by governments who don’t want their politics interfered with by geopolitical rivals. These are global platforms, and while we are talking about things through the lens of the US partisan culture wars, many of the networks target non-US publics.
Most of the manipulative covert stuff that we see on big US platforms now comes down very quickly, and that's because the platforms have integrity teams that look for it and work alongside outside researchers who look for it. There's a channel of communication open where we might say, “Hey, this network talking about Xinjiang appears to be linked to past operations tied to entities within China. We think that for the following reasons...” Twitter or Facebook integrity teams can then go look and say yes or no: “We think this is inauthentic,” or “No, we think this is actually authentic” for some reason. And they choose to moderate it, take it down, or leave it up as they see fit.
But what happens there is that by taking those state-linked networks down, the accounts never become entrenched in communities or persuasive enough to do real damage, to have real influence or impact. What we are talking about today in 2023 is the result of policies that changed after the first set of tech hearings in 2017 and 2018. A lot of the Russian accounts that were undisturbed from 2015 to 2018 grew followings into the 500,000-follower range, and some were in fact prominent within the communities that they targeted – not popular across the entirety of the American public, but within the communities they targeted. Some of them were very well entrenched in those spaces, serving as a kind of agent provocateur. We don't see networks grow to that size now, but that's also because platforms are looking for them and taking them down.
So what should we do?
The most foundational thing that would address a lot of these questions - unaccountable private power, whether policies are fair, what platforms or government are doing - is actually creating a mechanism for platform transparency, since it would provide us with some visibility into what is actually happening. Right now, we don't have that. We can see that a bunch of accounts came down. We don't know why. We see certain trends appear or disappear. We don't know why. You can go and pull up what are called “transparency reports,” which the platforms put out every six months or so and which list aggregate actions for certain types of moderation. They say, “During this quarter or during this half year, we took down X number of millions of pieces of content.” But there's no visibility into what is actually happening there. Since people are increasingly distrustful both of government and of platforms, transparency that provides some context for action is critical for restoring trust. And one way that you increase accountability for private power is with visibility into what private power is actually doing.
It might be hard for the US government to do much beyond transparency. We've seen a number of efforts, like going after Section 230 of the Communications Decency Act, but they’ve been false starts. The Democrats think more stuff should come down, should be moderated - that there are particular harms that the platforms should more aggressively mitigate. Republicans, roughly speaking, often feel that too much is coming down and that the platforms should govern differently. The bills around making CDA 230 protection contingent upon particular types of moderation are not passing in the US because of this sort of gridlock, and I don't think that they're particularly useful anyway.
“How can we better understand the systems that unaccountable private power controls?” is where I think the value of transparency lies. The most prevalent rumors, the ones that have persisted for a very long time, are about anti-conservative bias or anti-some-other-group bias. Those feelings are very common among many different communities. “My voice isn't being heard,” “My words are being suppressed,” “My trends are not showing up.” Transparency is one area where government regulation - ensuring privacy-protecting data access - would be a step towards actually answering those questions or addressing some of the rumors that people have come to believe.
What should our attitude specifically be toward foreign influence operations? Shouldn’t we be free to hear the points of view of foreign governments and activist groups, including ones we label terrorists? Is the real issue here anonymity — that they be forced to label themselves ISIS, for example, rather than pretending to be a human rights group? Or do you think the US government and social media platforms should prevent foreign voices from speaking here?
I’m arguing that for white propaganda - openly attributable state content - labeling is designed to give an extra level of clarity. China had a lot to say during the early days of covid, and this was before state-media labeling. There were accounts from the Chinese Ministry of Foreign Affairs, and people would just take information about the origins of covid from these large-follower accounts. None of us argued that those accounts should come down; the argument for labeling is that when the Chinese government is making a statement about the origin of the disease, the recipient should have that information. White propaganda should be labeled, and VOA should be labeled too. The labeling conversation does get weird when it turns to who should get labeled - the usual dividing line is whether the state has editorial control, and some argue that the BBC and Voice of America should be labeled on that basis. There are people with different opinions there; I have no problem with labeling. Meanwhile, some Chinese and Russian state accounts are still not clearly labeled. (Do you know that Apple News is a Taiwan news outlet?)

The other thing, in response to Gurri, is black propaganda, where the source is actively misattributed - Russians communicating as if they were a Texas Black community, for example. You could argue that should be left up and labeled, but Facebook and Twitter take it down. So I want to get at the distinction between white and black propaganda, which also allows for a gray spectrum - media funded by a state but not controlled by it, for example. And on this medium, black propaganda has changed: during WWII and the Cold War you would have front media properties, while social media allows for persona accounts that look like members of your own community. When Martin talks about how we used to send out translated media, that's fine - but the idea that we would not respond when state actors are pretending to be people like us is what the takedown policies are for.
What is the reason why you're pessimistic about getting transparency? What are the companies saying, and does any of that change with Elon?
When you asked me the question, we were talking specifically about government regulation and I’m generally pessimistic about the polarized US Congress’ capacity to accomplish anything these days. You can do quite a lot with self-regulatory pressure. This has always been true, across industries, but particularly in tech, public pressure seems to be something that they respond to. Even though we've seen no US regulation passed in the last seven years, we have seen a number of changes, a number of different policy shifts, as platforms have responded to user concerns or public pressure.
For this question of transparency specifically, I do think that it might be possible to get more from Twitter at this point. Elon certainly listed algorithmic transparency among his reasons for buying the company. But as far as the pessimism: if you believe, as I do, that unaccountable private power is the issue, and that it is in society's best interest to have visibility into how powerful platforms act, then being beholden to the goodwill of the present owner of the company deciding to self-regulate is not enough. It’s not how we've treated regulation and oversight in the past, right? We don't rely on the goodwill of corporate owners in other industries, just hoping that they voluntarily decide to consider their societal impact in their business choices.
You've perhaps seen Facebook had some very high-profile leaks, right? We learned interesting things, like that their recommendation engines played a pretty significant role in nudging people toward extreme groups - a majority, over 60%, of the people who joined some of the groups did so because of those nudges. We have focused a lot on transparency in the context of content moderation - the end-state, the reaction to what is on the platform - when a more holistic view of transparency could enable us to answer bigger questions, like, “What are the inadvertent downstream impacts of some of the design decisions that the platform makes?”
One other non-regulatory solution, though: I've liked the theory of Twitter’s Birdwatch [renamed Community Notes]. It is potentially a really interesting way to have information countered or corrected - or counter-spoken or contextualized - by members of the community, as opposed to by a fact-checking organization that some percentage of the audience won’t trust. It's the best way to inject some counterpoint into the conversation in a way that is, in my opinion, not censorship in any way, shape, or form. There’s hopefully no way to frame that intervention as someone trying to take things down because somebody disagrees with them. I hope Elon re-prioritizes it.