My (Abridged) Interview with Michael Shellenberger
Quotes were excerpted from this version of the interview for Congressional testimony, but he never published it.
What did you think of the Twitter Files?
Some of what’s come up is interesting. I have had reservations about generalizations from anecdotes and screenshots, and there’ve been a couple of moments where lack of subject matter familiarity produced interpretations that seemed like overreach to me. We talk about “lack of trust in media”, but that really means a partisan divide around trust between audiences and outlets, and that’s reflected in the discussion of the Twitter Files as well. There was one particular drop I paid a lot of attention to, Lee Fang’s story [“Twitter Aided the Pentagon in its Covert Online Propaganda Campaign”]. [Fang found that Twitter approved U.S. military accounts to shape public opinion in Iraq, Yemen, Syria, and other nations, in violation of Twitter’s own policies]. My team worked on that initial report on the Pentagon activity. We could see what the accounts were doing, but we’re sometimes limited in the data that’s required for attribution. For me, that drop was where overlapping platform, researcher, and journalist discovery merged into a very comprehensive picture.
What about the FBI’s relationship to Twitter?
It looked like there was a lot of over-reporting on the FBI front. A lot of, “This might be a thing, go look at it,” as opposed to, “We have high confidence in this small set being significant. Go look at it.” Tips can be important in some contexts, but the fact that they’re not sent with some sort of confidence threshold needs to change going forward.
MS: What do you think of former CIA media analyst Martin Gurri’s view that current and former FBI officials likely conspired to “pre-bunk” the Hunter Biden laptop?
I’m a long-time fan of Martin’s work, and we are friendly despite occasional disagreements. I admire his thinking. In this case, I do have a hard time believing that there was a “church of identity” driving the FBI to deliberately mislead Twitter. I would want more evidence. Did these two offices of the FBI (the team with the laptop vs. the team focused on state actor election interference) talk to each other? Assuming they operated under “need to know” or compartmentalization principles, I would be surprised if the entire institution was informed about who held that laptop. But see, I’m also speculating. Someone from the FBI itself would be the person to answer that.
MS: Does that mean that you support Congressional investigations, a DoJ Special Prosecutor, or something else to get to the bottom of what exactly was going on there?
Congress certainly has a role to play in determining what kinds of interactions on content issues should be allowed by government officials, including members of Congress. I’m not a lawyer, but I believe a special prosecutor is only appointed when there is a credible accusation that a crime has been committed. I have not seen any evidence of that.
MS: What do you make of Gurri’s point that we shouldn’t be so paranoid about foreign actors engaged in “misinformation,” and that he, at the CIA, used to help with translations of foreign Soviet propaganda without fear that it would harm the American people or turn everybody into communists?
Martin made an interesting and very valid point about harm, noting how “the federal government used to translate and provide propaganda from the other side to the public without fear of what would happen.” “Harm” is undertheorized for sure, which leaves it prone to scope creep, and we need to be clearer about what we mean when we use that term.
However, we’ve also moved pretty far away from our historical understanding of state actor activity and how to respond to it. As Martin alludes to, for many decades, in fact, the USG not only provided material but very deliberately worked to educate the public about what was happening, precisely because it was happening. Now, partisan distrust means that people don’t trust what the government says if it’s not their guys in power.
If I’m not mistaken, what Martin is referring to is translated internal media about the US that adversary nation states wrote for their own publics — generally speaking, we allow a lot of that “overt” content to exist on social media. In 2020, we began to see platforms label state-linked accounts that produced it, to help the public understand what they were seeing — this created great consternation among the accounts that were labeled, but it’s pretty much in line with the Foreign Agents Registration Act and bedrock values that say the American public should be able to see this content but should understand its origins…principles I agree with.
But that kind of attributable content is distinct from “black propaganda” — covert propaganda, often actively misattributed to someone else — that targeted the US public. (The USG, of course, had its own share of front media in various eras.) That kind of deliberate manipulation is more akin to the state actor networks of fake people and front media that Twitter, Facebook, and other platforms take down today. They are actively made to look like other things, and their purpose is to deceive the public.
Today the rather severe divide in trust along partisan lines means that the government trying to explain this sort of manipulation to the public would, in my opinion, not be trusted by half the public at any given moment in time.
MS: Let’s step back. What’s the problem, in your view, with social media today?
RD: If I had to summarize it I would say, simply, unaccountable private power. In the last 20 years, we’ve moved into an ecosystem in which our conversations about our democracy and society, the ways we come to consensus, ways that people get information, are all increasingly happening on platforms that use particular curation mechanisms and have particular business incentives. Those incentives — for example, promoting the highest-engagement content — don’t always align with what we might have previously thought of as ways of creating productive conversations or helping people reach consensus across different perspectives. Instead, because of design decisions and business considerations, we are now in an environment in which what we see is curated for us to meet the needs of the business, not necessarily its users.
People are pushed to us. Topics are pushed to us. And so we exist within these environments which are increasingly mediated by something that we don’t fully understand and have no control over.
In 2014 and 2015, the content moderation conversation was primarily focused on harassment and exploitation, but there was concern that powerful figures had begun to take advantage of the platforms as well. ISIS became a big topic of conversation as they tried to blanket social media with propaganda for the “virtual caliphate” and to recruit people, some of whom then committed real-world violence.
As the “what to do about ISIS” conversation was happening, as government and platforms were struggling to find some kind of working relationship around that challenge, other state actors came into the mix with covert propaganda campaigns. Particularly Russia, whose efforts carried through the 2016 election and beyond.
Unfortunately, this became a very politicized conversation because Russian interference and the presence of “black propaganda” on social media, in the form of both the GRU and the Internet Research Agency, got hopelessly mixed up in the collusion conversation, and so it became something of a third-rail political issue. Again, none of this is new: state actors pretending to be something they’re not is a very old tactic.
But they are adapting to this new medium, and so the question of “what is the appropriate government response?” finds its way into the content moderation conversation. Remember that many people at the time, in 2017–2018, were very wary of the idea that the social media companies, the unaccountable private power, should be tasked with identifying and taking down those networks.
You also start to see domestic groups realize the power of the platforms for networked activism. And this spans the entirety of the political spectrum. Ordinary people realize that they can amass massive followings and reach many people. Sometimes they, too, use deceptive tactics…but their audiences, who do a lot of the sharing of the content, are real, and they’re expressing themselves.
The moderation interventions that were developed to deal with state actors, which were oriented around “are these actors or behaviors authentic?”, don’t apply as neatly when it’s authentic domestic speech.
Ultimately, we arrive at a moderation framework that Facebook calls “Remove, Reduce, Inform” and that most of the platforms use some variation of — some content that violates policy comes down, some is throttled from curation or promotion, and some gets a label or interstitial that tries to add some context.
So now you have this interesting divide: on one hand, there are questions about how the platform chooses to curate and create a particular environment for its user community. The other side of it is when the government should involve itself, and in the context of real, authentic, domestic actor speech, the appropriate role is very, very narrow. And yet, the government does have a right to offer its opinion, and it has been doing so for multiple administrations now — this is distinct from demanding a takedown, or implying it via jawboning.
The extra challenge with all of this is that it moves fast. Platforms say, “Here’s our policy; here’s how we’re gonna enforce it.” But policies (or algorithms) can change unilaterally, and sometimes very quickly, in response to things that come up. Covid policies were partially an adaptation of health misinformation policies that predated the pandemic — they evolved rather quickly in response to an emerging crisis. Sometimes there is policy overcorrection — we’ve seen this from Elon, too.
Moderation rules and content policy are also tied into business incentives. Platforms don’t wanna create a cesspool. Twitter doesn’t want, or didn’t want, to be 4Chan because most people don’t enjoy being in that type of environment. So even if there are types of content that are in line with the First Amendment, some of the platforms choose to moderate more or less heavily in line with the kind of environment they want to create versus having a free-for-all experience.
But why so much attention by the FBI to these low-follower accounts? Isn’t that excessive?
It looked from some of the Twitter Files emails like they were not thoughtful in what they sent in. Big lists, not high-confidence concerns. A lot of the state actor accounts are found when they are still low-follower because the platforms take them down and then they respawn. It’s whack-a-mole. Still, just sending over lists of accounts with many false positives is bad, and you do see Twitter trying to decide how to manage the concern that state actors are everywhere when Twitter itself is just not seeing that.
To make one other point on this issue of respawns — if accounts do not come down, they do grow followings over time. They don’t just stop trying. One of the things that we’ve seen over the years is what I call the “regulatory arbitrage of content moderation.”
When a platform creates an environment that says, “We will go looking for inauthentic state-linked networks, and we will take them down,” you see a lot of those actors investing their time on platforms that do not say that. So, for example, a lot of the action that we’ve seen recently is happening on platforms like Telegram, which are never going to look into “black propaganda” channels, never going to work alongside researchers to say, “This network or this website is linked to state X.” So, it provides them with a safe haven to operate undisturbed. We actually put out a report recently where we talked about an inauthentic state-linked Kid Rock fan page that had grown to around 50,000 followers on one alt-platform. And two of the platforms that the account was present on did, in fact, very quietly appear to take it down after the report came out. Or it deleted itself.
Most of the manipulative covert stuff that we see on big US platforms now comes down very quickly and appears to have very little impact, and that’s because the platforms have integrity teams that look for it, often alongside outside researchers.
What we are talking about today in 2023 is the result of policies that changed after the first set of tech hearings in 2017 and 2018. Some of the Russian accounts from that time frame had few followers and likely accomplished nothing. Some, however, that were undisturbed from 2015 to 2018 grew into the 500,000-follower range and were networked into the communities that they targeted — not popular across the entirety of the American public, but within the communities they targeted. Entrenched in niches, serving as agents provocateurs. We rarely see networks grow to that size now, but that’s also because platforms are looking for them and taking them down globally. This is not just a US issue.
So what should we do?
The most foundational thing that would address a lot of these questions — unaccountable private power, whether policies are fair, what platforms or government are doing — is creating mechanisms for platform transparency, to provide us with some visibility into what is actually happening. Right now, we don’t have that. We can see that a bunch of accounts came down. We don’t know why. We see certain trends appear or disappear. We don’t know why. Right now, you can go and pull up what are called “transparency reports” that the platforms put out every six months or so, which list aggregate actions for certain types of moderation. They say, “During this quarter or during this half year, we took down X number of millions of pieces of content.” But there’s no visibility into what is actually happening there. Since people are increasingly distrustful both of government and of platforms, transparency around data that could help outsiders examine those questions is critical for restoring trust.
We’ve seen a number of regulatory efforts like antitrust, or rethinking Communications Decency Act Section 230, but they’ve been false starts. The Democrats think there are particular harms that the platforms should more aggressively moderate for, or take down. Republicans, roughly speaking, often feel that too much is coming down, or that even labels are censorship. The bills around making CDA 230 protection contingent upon particular types of moderation are not passing in the US because of this sort of gridlock, and I don’t think that they’re particularly useful anyway.
What should our attitude specifically be toward foreign influence operations? Shouldn’t we be free to hear the points of view of foreign governments and activist groups, including ones we label terrorists? Is the real issue here anonymity — that they be forced to label themselves ISIS, for example, rather than pretending to be a human rights group? Or do you think the US government and social media platforms should prevent foreign voices from speaking here?
Terrorist organizations using platforms to recruit people to commit atrocities against civilians: that’s a bright-line no for me — there’s a whole set of dangerous organizations and incitement policies out there to debate, but that’s where I am. In terms of political speech from foreign governments, I think labeling of state media and official accounts provides clarity. China, for example, had a lot to say during the early days of Covid, before the labeling policies. There were large accounts from the Chinese Ministry of Foreign Affairs, and editors of state media, whose names wouldn’t be familiar to most people. But who they are is part of their message — a government speaker expressing a government’s point of view. It’s good policy for an international platform, regardless of which government we’re talking about.
What is the reason you’re pessimistic about getting transparency? What are the companies saying, and does any of that change with Elon?
I’m generally pessimistic about the polarized US Congress’ capacity to accomplish anything these days. You can do quite a lot with self-regulatory pressure. Even though we’ve seen no US regulation passed in the last seven years, we have seen a number of policy shifts and other changes as platforms have responded to user concerns or public pressure.
For this question of transparency specifically, it might be possible to get more from Twitter at this point. Elon certainly listed algorithmic transparency among his reasons for buying the company. But at the end of the day, business incentives will shape his decisions. Self-regulatory action does nothing about the foundational problem of unaccountable private power. It’s not enough, and yet the US government doesn’t seem to be able to get out of its own way. Instead, Europe is going to be moving forward on these things.
One other non-regulatory solution, though: I’ve liked the theory of Twitter’s Birdwatch [renamed Community Notes]. It is potentially a really interesting way to have information countered or corrected — or counter-spoken or contextualized — by members of the community, as opposed to by a fact-checking organization that some percentage of the audience won’t trust. It’s the best way to inject some counterpoint into the conversation via a label that is, in my opinion, not censorship in any way, shape, or form. I hope Elon re-prioritizes it.