#310 - Social Media & Public Trust - Transcripts

January 31, 2023

A Conversation with Bari Weiss, Michael Shellenberger, and Renee DiResta

Transcript

Welcome to the Making Sense Podcast. This is Sam Harris. Today I'm speaking with Bari Weiss, Michael Shellenberger, and Renee DiResta. Bari is the founder and editor of the Free Press and host of the podcast, Honestly. From 2017 to 2020, she was an opinion writer and editor at the New York Times. And before that, she was an op-ed and book editor at the Wall Street Journal and a senior editor at Tablet Magazine. And I highly recommend that you sign up for her newsletter and check out what she's doing over at the Free Press. And you can find that at thefp.com. Michael Shellenberger is the bestselling author of San Fransicko: Why Progressives Ruin Cities, and also Apocalypse Never: Why Environmental Alarmism Hurts Us All. He's been called an environmental guru, a climate guru, North America's leading public intellectual on clean energy, and a high priest of the pro-human environmental movement. He is the founder and president of Environmental Progress, an independent nonprofit research organization that incubates ideas, leaders, and movements, and a co-founder of the California Peace Coalition, an alliance of parents of children killed by fentanyl, as well as parents of homeless addicts and recovering addicts. And he also has a newsletter over on Substack titled Public.

And finally, Renee DiResta is the technical research manager of the Stanford Internet Observatory, a cross-disciplinary program of research, teaching, and policy engagement for the study of abuse in current information technologies. Renee led the investigation into the Russian Internet Research Agency's multi-year effort to manipulate American society. And she has studied influence operations and computational propaganda in the context of pseudoscience conspiracies, terrorist activity, and state-sponsored information warfare. She's advised Congress, the State Department, and other academic, civil society, and business organizations on these topics. She also regularly writes and speaks about these issues and is an ideas contributor at Wired and The Atlantic. And she appeared in the Netflix documentary, you might have seen, The Social Dilemma. So this is a conversation about what I consider to be a very important issue. We focus through the lens of the so-called Twitter files, but it really is a conversation about the loss of public trust in institutions and the way social media seems to have facilitated that. And one thing you might hear in this conversation at various points is a tension between what is often thought of as elitism and populism. And I should say upfront, in that particular contest, I am an unabashed elitist. But that doesn't mean what most people think it means. For me, it has nothing to do with class or even formal education.

It has to do with an honest appreciation for differences in competence, wherever those differences matter. When I call a plumber, I have called him for a reason. The reason is I have a problem I can't solve. I don't know a damn thing about plumbing. So when my house is flooding with sewage, backing up from the street, and the plumber arrives, that man is my God. Jesus never received looks of greater admiration than I have cast upon my plumber in a time of need. And so it is with a surgeon or a lawyer or an airline pilot, that whenever there is an emergency, there is such a thing as expertise. And we are right to want it. Because the alternative is ignorance and incompetence and needless suffering and untimely death. Without plumbers, we live in our own filth. And we've been doing that online for many years now. And it's getting disgusting.

Of course, I'm not saying that we should blindly trust experts. And I'm not saying experts haven't failed us in shocking ways. But we are shocked against a background assumption that expertise is a real thing and that our institutions have a responsibility to provide it. If you didn't think expertise was real, then there would be no basis to be disappointed when our most esteemed institutions fail to function. And of course, there are whole fields of pseudo-expertise. I've often criticized organized religion for this. Ask a Catholic bishop about the Holy Spirit or about the problem of human evil, and you will get a lot of fake knowledge aimed at your brain. We could add a few branches of the humanities here, along with scientific results that don't replicate but are still widely believed. And there are surely situations where real experts, thinking about real problems, can have their thinking distorted by bad incentives. It is absolutely true that institutions become corrupt. All of this is true, and yet none of this suggests that the solution to our problems is for everyone to just, quote, do their own research. Of course, now we have ChatGPT, which would seem to make doing your own research more promising.

Eventually, I think it will. I think that many things will become like chess. What's the best next move in a game of chess? Should you ask the best human player on Earth, or should you ask a computer? The answer now is you should ask a computer, and that will be the answer for as long as computers are computers and we remain apes. I think many other areas of human intelligence will be toppled in just this way, but not yet. Someone on my team asked ChatGPT why I left Twitter, and it produced the following paragraph. Sam Harris quit Twitter in December 2018. I wish I had quit in December 2018. It took me four years longer than that. He cited the platform's inability to handle discussions about controversial topics without devolving into toxicity and harassment as the main reason for his decision. I agree with that, but that's not really what I said.

At least I'm not aware of saying that anywhere. In a blog post, Harris wrote, quote, Twitter is now one of the worst possible forums for the exchange of ideas, and that he had grown, quote, increasingly disturbed by the way the platform, quote, amplifies the very worst aspects of our culture. I'm actually not sure if those are accurate quotes from somewhere or not, but I definitely didn't write anything in a blog post. He also expressed his dissatisfaction with the way Twitter is used to silence dissenting voices and spread misinformation. After quitting Twitter, Harris continued to communicate with his audience through other means, including his website and podcast. Okay, well, it appears to know that I have a podcast, but it seems that our robot overlords haven't quite arrived yet, and so we still need to rely on human experts for a while. This tension between elitism and populism comes down to the following claim: not everyone's opinion is valuable. On many, many topics, my opinion isn't valuable. I shouldn't even have an opinion. Having a strong opinion when you know nothing about a topic, it's your political right, sure, but it's also a symptom of a psychological problem. And having a society filled with such people becomes a social problem.

And social media has been a vector of strong, divisive, unfounded opinions and lies for over a decade. I mean, really, you just have to react to that thing that AOC said about that thing that Tucker Carlson said about that thing the cops may or may not have done in a city you've never been to, and will never go to even if you live a thousand years, and then you need to respond to all the people who didn't understand what you meant or who were just pretending not to understand what you meant, and you're going to do this a dozen times a day? For what? The rest of your life? Oh, you're not going to do that. You're just going to watch other people do it every day? And then what? You're going to find your real life in between all of that scrolling? What an astounding waste of time that was. But the social consequences of our spending time and attention this way are well worth talking about. And the question of whether it's possible to build a social network that is genuinely good for us is a very important one. And those are among the topics of today's podcast.

But I want you to keep a few distinctions in mind, because there's been an extraordinary amount of misinformation spread about what I think about free speech and content moderation and censorship online. So I just want to put a few clear landmarks in view. The first is that I absolutely support the right of anyone, anywhere, to say almost anything. I don't think people should be jailed for bad opinions. So for instance, I don't think the laws against Holocaust denial that exist in certain European countries are good. As much as I agree that it's insane and odious to deny the Holocaust, people should be free to do it. Now, the question of whether they should be free to do it on a social media platform must be decided by the people who own and run the platform. And here, I think people should be generally free to create whatever platforms they want. So Elon now owns Twitter. I think he should be free to kick the Nazis off the platform, if that's what he wants to do. I might not agree with his specific choices. He kicked Kanye West off the platform for tweeting a swastika inside a Jewish star. I honestly doubt I would have done that.

I mean, can you really have a terms of service that doesn't allow for weird swastikas? That seems impossible to enforce coherently. But the point is, I think Elon and Twitter should be free to moderate their platform however they want. Conversely, I think a Nazi should have been free to buy Twitter and kick all the non-Nazis off the platform. Twitter is a company. It should be free to destroy itself and to inspire competitors. And many people think it's in the process of doing just that. And it remains an open and interesting question what to do when the Nazis or the semi-Nazis start using your social media platform. And similar questions arise about people who spread misinformation or what seems to be misinformation. But where is the line between necessary debate, which I agree we should have about things like how to run an election or vaccine safety, and simply making it impossible for people to cooperate when they really must cooperate? For instance, after an election, when you have a sitting president lying about the results being totally fraudulent. Or during a global pandemic, when the healthcare systems in several countries seem on the verge of collapse. There is a line here.

And it might always be impossible to know if we're on the right side of that line. It's simply not enough to say that sunlight is the best disinfectant, because we have built tools that give an asymmetric advantage to liars and lunatics. We really have done that. Social media is not a level playing field. And the idea that we are powerless to correct this problem because any efforts we make amount to, quote, censorship is insane. It's childish, it's masochistic, and it is demonstrably harming society. But this is a hard problem to solve, as we're about to hear. As I said, we take the Twitter Files release as our focus because both Bari and Michael were involved in that release. But the four of us speak generally about the loss of trust in institutions of media and the government. We discuss Bari and Michael's experience of participating in the Twitter Files release, the problem of misinformation, the relationship between Twitter and the federal government, Russian influence operations, the challenges of content moderation, Hunter Biden's infamous laptop, the need for transparency, platforms versus publishers, Twitter's resistance to the FBI, political bias, J.K. Rowling, the inherent subjectivity of moderation decisions, the rise of competitive platforms, rumors versus misinformation, how Twitter attempted to control the spread of COVID misinformation, the throttling of Dr. Jay Bhattacharya, the failure of institutions to communicate COVID information well, the risk of paternalism, abuses of power, and other topics.

And now I bring you Bari Weiss, Michael Shellenberger, and Renee DiResta. I am here with Bari Weiss, Michael Shellenberger, and Renee DiResta. Thanks for joining me.

Thanks for having us.

As I said, I will have introduced you all properly in the beginning, but I was hoping we could have a discussion about the Twitter files and social media generally and the failures of the mainstream media and the government and other institutions to maintain public trust, and perhaps a failure of them to be worthy of public trust. But I think the Twitter files is the right starting point here because, as luck would have it, we have Bari and Michael, both of whom were part of the journalistic effort to reveal these files. Bari, let's start with you. Perhaps you can really take it from the top and give us the high-level description of what the Twitter files are and how you came to be part of the release.

It's funny because this is one of those stories where I feel like for half of the country it was the biggest thing that has happened in the past decade, and the other half of the country had no idea it even existed. It was interesting to test this on my family in Pittsburgh: finding out which news sources they were reading could tell you everything about the way they viewed the story. Basically what it is, depending on how you look at it, is Elon Musk, the new owner of Twitter, trying to, in his words, have a kind of informal truth and reconciliation commission. He understands that the platform that he just bought has lost a tremendous amount of trust with the public, was claiming to be one thing, and actually in secret was something quite different, was also, probably as he would frame it, cooperating with the government in ways that would make Americans, if they knew about it, extremely uncomfortable, was blacklisting people without their knowledge, and all kinds of other details along those lines. And so another group of people would say, this is all about Elon Musk buying Twitter and trying to shame the previous owners of Twitter and the previous top brass at Twitter, and really what this is all about is embarrassment and vengeance. And where you fall on the answer to that question tells you a lot about where you stand politically, I would say, in general. So basically the mechanics of it were that Elon Musk decided to make available the inner workings of the company to a number of independent journalists. The first one that he reached out to was the journalist Matt Taibbi, who has a very popular newsletter. Then he texted me and reached out to me, then I reached out to Michael Shellenberger, and then the group kind of grew from there. It came to include journalists like Abigail Shrier, Lee Fang, Leighton Woodhouse, and a number of other people associated with my company, The Free Press. What was said on Twitter publicly by Elon Musk is that we had unfettered access to all of the inner workings of Twitter, everything from emails, private Slack messages, group Slack messages, and on and on and on. And that was sort of the headline that was trumpeted all over Twitter and all over the press.

In fact, what we had, and Michael can explain this probably in better detail than I can, because he has a meticulous memory, was the ability to do highly directed searches on at most two laptops, shared at times among up to eight journalists in a room. So what we had the ability to do was to say to a lawyer working through a laborious eDiscovery tool, and it came to include two different tools, tell me everything that happened between the following six C-suite level employees of Twitter on the dates between January 6th and January 10th, basically the dates that Trump got kicked off of Twitter. And basically over the course of a few days, it would spit back to us information. And what came out of that was a number of stories that, depending again on how you look at it, were either enormously important bombshell confirmations of what a lot of people in the country had thought was actually going on at Twitter, or what they denied. And if you're on the other half of the country, and again, I'm being crude here, it was, you know, nut picking. It was cherry picking. It was sort of going searching for anecdotal stories that would confirm the political biases of the independent journalists involved in the project. I think the really, really important thing for people to understand, and I think that this wasn't explained well enough by any of those of us who were involved, is how unbelievably laborious these searches were, and how little choice we had. It's not as if we walked into a room with organized files according to COVID, masking, myocarditis, the election in Brazil, Modi, Israel, Palestine, such that we could have really told you the comprehensive story. Instead, we had to make some very difficult choices, based on the kind of tools we were using, to go looking for certain stories where we knew the public story that had been told, and we wanted to see what had actually gone on behind the scenes. And again, in my view, you know, the story of the decision to kick off Trump, very important story. Is it the number one story that was interesting to me? Not at all.

COVID was far more interesting to me, but I knew that if we looked at that set of dates, we could come out with some information that would be worthy of the public interest. And we also knew that we were dealing with someone who is in many ways a mercurial person. You know, any source that gives you information has motivation. You have no idea when their motivation or incentives might change. And so we wanted to harvest as much information as we possibly could in the days that we were there.
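To give a concrete sense of the shape of those directed searches, here is a minimal sketch. It is purely illustrative: the actual eDiscovery tools and their schema were never described in detail, so every field and name below is hypothetical.

```python
from datetime import date

# Hypothetical message records. This only mirrors the *shape* of the queries
# Weiss describes (a set of target employees plus a date window, answered
# days later); the real tooling and schema are not public.
messages = [
    {"participants": {"exec_a", "exec_b"}, "sent": date(2021, 1, 7), "text": "..."},
    {"participants": {"exec_c"}, "sent": date(2021, 1, 9), "text": "..."},
    {"participants": {"exec_a"}, "sent": date(2020, 12, 1), "text": "..."},
]

def directed_search(targets, start, end):
    """Everything involving any of the target employees within a date window."""
    return [m for m in messages
            if m["participants"] & targets and start <= m["sent"] <= end]

hits = directed_search({"exec_a", "exec_b", "exec_c"}, date(2021, 1, 6), date(2021, 1, 10))
print(len(hits))  # 2: the December message falls outside the window
```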

Yeah, I hadn't thought to talk about these details, but now I'm interested. So just a couple of follow-up questions. So when you would perform a search or have someone perform a search for you, there wasn't some layer of curation from lawyers or anyone else deciding what could be released and what couldn't be released? If you said, give me every email that contains the word Trump between certain dates, they just spit out those emails?

No, and one of the ways that I knew that, Sam, I just don't know how much detail you want me to get into here. But in the first few days that I was there with my wife Nellie, who also works with me in building the company, Matt wasn't there. It was just the two of us, Elon Musk, and lawyers that we were communicating with over the phone. And I would ask them to do a search, let's say for COVID or cloth masks or Fauci or whatever. And what I was getting back was garbage information. It was such garbage information that, and I'm not a paranoid person. I would say Michael Shellenberger is way more suspicious than I am in general. I'm pretty naive. But it was so bad that Nellie was saying, this cannot be right. This cannot be right. And that's when I came to discover that the lawyer who was actually doing the searches worked for Twitter and was one of the people that we were hoping to do searches on, which is this guy, Jim Baker, who became a sort of flashpoint in the story later on.

And maybe, Michael, I can hand it over to you if you want to explain sort of the mechanics of how this worked. It can maybe be a little boring, but the reason I think it's significant is that I think it will help people understand why we did the stories we did.

Right, right. Yeah, Michael, jump in here. What was your experience extracting information from Twitter?

Yeah. I mean, I think it's a really fun conversation. I love talking about it. And I was a little annoyed afterward that a lot of people wrote stories about how they thought the process worked without just asking us, because we would have said so. And I've always taken all the time to explain it. But as Bari mentioned, Bari brought me in. I do not have a relationship with Elon Musk. I've only criticized Elon Musk in the past. I criticized him in Mother Jones. I wrote about him in Apocalypse Never.

And obviously, when Bari was like, we can get access to the Twitter files, I was like, hell yeah. I mean, for me, it was just a chance to go and get this incredible information. When I met Elon, he said he did not know who I was. And basically, it's just like what Bari said. If there was any filtering or curation or removing of any emails, we saw no signs of it. And I would be shocked, because of the size of the searches we were getting. I can just tell you some of them: we would ask for all the emails for this person over this period of dates, and we would get email boxes of 2,000 emails, 890 emails, 2,000 emails, 1,800 emails, 1,800 emails, 2,300 emails. I consider myself an extremely fast reader, and I'm able to process a lot of information very quickly, and it took me a very long time to go through these emails. I couldn't see anybody having been able to do that. And when the emails populated in our inboxes, we never saw any evidence that anything had been removed. I mean, I'm not saying I can prove that nothing was, but I just saw no evidence for it. And I didn't see anything in Elon that suggested that he cared about that.

Although, Michael, it sounds like that cuts against what I was understanding Bari to be saying, which was initially the search results were so crappy that you thought somebody, this nefarious lawyer, was throttling the results.

I should clarify, that was before Michael had gotten there. And as soon as Elon found out that that person was involved (he's operating at the highest level of the company; until I told him, hey, do you know that Jim Baker is the one doing the searches?, he had no idea that Jim Baker was the one doing the searches), the people involved changed, and he was fired. And like Michael said, with the files we got subsequent to that, there was no evidence at all that they were tampered with. The thing I should add is that one of the criticisms of the Twitter files story is that we focused an inordinate amount on a person who had been at one time the head of Twitter's trust and safety, this guy, Yoel Roth. And the reason for that is that Yoel Roth was a very loquacious person. He talked a lot on Slack and on email and in other places. So it's not as if we weren't interested in other people. It's just that, like any story, you're looking for the person who's going to share the most information.

And he spoke openly, and a lot, on platforms like Slack to his colleagues. It's not like we went in actively interested in Yoel Roth. I barely knew who he was before I walked into Twitter.

Okay. So the process changed. Right. Now, were either of you concerned about the optics of this process, that you would appear to be, at least in part, doing PR for Elon rather than actually engaging in journalism of a more normal sort? There were other constraints. Releases had to be done on Twitter itself, which I think it could be argued was not the best platform for actually discussing this, and really anything, at length. What were your concerns going into this, and is there any residue of those concerns?

Not for me, really. I mean, for me, it was just, we get the access to the data. People say things, but I'm just not that concerned about it.

But for instance, what I noticed before I left Twitter, I have to admit now that I'm no longer on Twitter, I don't consider myself even minimally informed about what it's like there now. But when I was there and the first drops were happening, more or less everything Elon said about what was coming and what had dropped was wrong. Right. And he was lying or just delusional about what he was releasing, the level of government involvement.

But he wasn't releasing it. So isn't that kind of the point?

But it was the frame around it. I mean, he was saying here it is. And his summary of what Taibbi was saying was just not, in fact, accurate. In fact, in the case of one of Taibbi's drops, it was the opposite of what Taibbi said. Yeah. That didn't bother you at all?

I mean, it bothered me when he tweeted, my pronouns are Prosecute/Fauci. It bothered me when he said that thing about Yoel Roth. I've told him that. I think Bari criticized Elon when he deplatformed those journalists. I retweeted it. We don't control Elon Musk. I mean, we were invited into a smash and grab situation to be able to look at as many emails as we could, and we're thrilled at it, and it's super important what came out of it. But no, I mean, I'm a big Gen X-er. I'm a Breakfast Club type. I go on Tucker Carlson. I talk to Tucker Carlson.

I talk to people that my family thinks are terrible. I talk to them, and I don't have a view that if I talk to somebody, I'm somehow legitimizing all of their views, or that if I go and look for these emails, I'm somehow agreeing with Elon Musk. I've criticized Elon Musk about his policies around solar and China. I'm not going to stop doing that. I told him exactly what I thought and have told him exactly what I thought, and I'm just with Elon the way that I am with everybody. And so no, people talk shit, but the things they say are not true, so I just have a stoic attitude about it, which is: I'm responsible for the things that I do.

I'm not responsible for what other people do. I just think that this is as old school as it gets. A source had documents that he wanted to leak to the public, and journalists who felt those documents were in the public interest jumped to go look at them. And any source who leaks documents or leaks a story to the New York Times or the Washington Post always has an agenda. That goes without saying. I think the unusual thing in this case is that the source was public about it, and he made his agenda entirely transparent the entire time. And as Michael just mentioned, I think I well proved that I was not in the tank for anyone on this matter. I'm just on the side of the public knowing more information, and people can decide for themselves whether or not that information was in the public interest. I certainly think that it was, and I frankly think a lot of people are resorting to criticizing journalistic practice or standards or other technicalities of that sort because they don't want to confront what the actual story is.

Right. Well, I definitely want to get to the story, but Renee, I want to bring you in here.

Do you have anything to say at this point about just the process and the optics? Yeah, it's very interesting. So for your audience members who probably don't know, I started off talking about social media kind of as an activist on the outside in 2015, and moved into academia in 2019. And in the intervening time, the relationship between platforms and government and researchers changed very significantly over those four years. We can talk about why and how, perhaps. I'm at the Stanford Internet Observatory. We were part of something called the Twitter Moderation Research Consortium, which I think no longer exists because everybody got laid off. But it was a process by which Twitter could actually share datasets with researchers. And this is relevant because all of our research was done independently. We would receive datasets from Twitter. We would do research independently. And sometimes we would actually go back to them and we would ask, why is this account included?

Why is this, this doesn't feel like it fits. If we're going to tell a story to the public about this Chinese network, this Iranian network, this Saudi network, this Russian network, we want to make sure that we're doing an independent analysis, and we are only going to say what we think we can support as researchers. And what we would try to do was look at and enrich the story with as much of a full picture as possible. So the Twitter dataset was almost a jumping-off point to a significant process that would involve also looking for these accounts on Facebook, TikTok, YouTube, you name it. And what we would try to do was not tell an anecdotal story: we would always include the qualitative, here's what these accounts are saying, here's what we think they're doing, but we would try to include something in the way of summary statistics. Here's how many of them there are, here's the engagements they're getting, here's where they are situated in the public conversation relative to other accounts that talk about these things. And the reason for that is because one of the big drivers of the problems in the public conversation around content moderation, whether that's related to foreign influence campaigns or domestic activism or anything else, is that it is so anecdotal. And so when the Twitter files began, as somebody who has worked with platform data, and also, you know, testified in front of Congress critiquing platforms and their lack of transparency, and who has written about that for the better part of seven years now: I think the files are very interesting, just to start with that. I'm not a person who says, oh, this is all a nothing burger, this is not interesting. But I had kind of three issues with the process. And the first was that I think a lack of familiarity with that multi-year evolution of content moderation policy meant that, for me as an observer, there were some of these wet-streets-cause-rain moments, you know, the Gell-Mann amnesia phenomenon, where the person doesn't fully understand what is happening in context.

A specific example that I gave on Twitter was one comment in which you see the Senate Intelligence Committee engaging with Twitter, asking it if it responded to some sort of tip from the FBI. And that was very interesting to me because I had done a bunch of work on Twitter data for the Senate Intelligence Committee in 2018. And as a researcher running that process in 2018, with no engagement with Twitter whatsoever, what I knew was that the Senate Intelligence Committee did not have very high confidence in Twitter's ability to find anything. So reading that interaction was fascinating to me because they were, in my opinion, essentially saying, did you find this yourself? Or did somebody have to hand it to you again? But what the reporter who wrote that thread construed it as was, are you taking marching orders from the FBI? So this was the kind of wet-streets-cause-rain experience that I had in a number of these threads, where I thought, gosh, I wish that somebody who had either been there, you know, in an abstract sense, not in the company, but who understood the evolution of that stuff, had perhaps weighed in or been consulted. And then I think the second critique was how anecdotal it was. And that made it feel a little bit cherry-picked. And this kind of ties into maybe point three, which is that the trust and the public confidence in whether or not you believe in a framing around an anecdote is entirely dependent on whether you trust the reporter or the outlet at this point. And that's a function of polarization in American society. It is not a critique of Bari or Michael or anybody else's thread.

It is, I think, the reality. And so with some of the, in my opinion, overemphasis on anecdote, and I recognize, you know, this is the process, this is what you had available to you, what made that troubling to me is that it did feel like there were opportunities for score settling, or searching for things that a particular reporter found problematic or wanted to dig more into, but that didn't necessarily get at the scope and scale of the phenomenon overall. And I'll point specifically to something like shadow banning, right? Fascinating topic. Many of us have looked at it over the years, and I've made arguments that it's not something the platforms shouldn't be able to do. And we can talk about why, but I do think it should be transparent. So that's sort of where I sit on the shadow banning question. But what we didn't get was: how many users were receiving these labels, in what country, during what time period? How many of those who received a label were mentioned in a government request? That's absolutely crucial to the question of, to what extent does the government actually exert influence over the platform? It's not simply, was a report filed; it's, did the report lead to an action?

And this is the sort of thing, again, maybe this is my bias as somebody in academia, where I say, God, I'd really love to get my hands on the summary stats. You know, can you request those? Can you say, in this moderation tool, can we connect the dots between, here's the FBI oversubmitting, in my opinion, litanies of accounts, really just a sort of stupid process, and then what happened next? That connecting of the dots was, in my opinion, kind of underdone, candidly. And it led to an opportunity for innuendo to drive the story, and whether or not you believe the innuendo is entirely dependent on whether you believe or trust the outlet and the person covering the story. So in the interest of informing the public writ large, that's where I felt, and as Bari notes, I think we're saying the same thing, that depending on which side of a political spectrum you sit on, you either trust or do not trust at this point. I don't know that it's political spectrum so much as institutional versus populist, maybe, right? But that tension for me was, and I wrote this in The Atlantic, where I felt that there was a little bit of a missed opportunity. How could we perhaps get at more of those holistic or systemic views, and form an opinion of platform moderation that is less anecdotal and less dependent on trust in a reporter's narrative?
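The kind of dot-connecting summary DiResta describes is, in principle, a simple join between two tables: one of moderation actions and one of government reports. A minimal sketch, with entirely hypothetical records and field names (no real schema from Twitter or the FBI is public here):

```python
# Hypothetical records: moderation labels applied, and government-submitted reports.
labels = [
    {"account": "a1", "action": "reduce"},
    {"account": "a2", "action": "reduce"},
    {"account": "a3", "action": "inform"},
]
gov_reports = [{"account": "a1"}, {"account": "a4"}]

reported = {r["account"] for r in gov_reports}
labeled = {l["account"] for l in labels}

print(f"{len(labeled & reported)}/{len(labeled)} labeled accounts appear in a government request")
print(f"{len(reported - labeled)} reported accounts received no action at all")
# This answers "did the report lead to an action?" rather than
# merely "was a report filed?", which is DiResta's distinction.
```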


Right, right. Yeah, I mean, just to echo part of it, it's hard to capture how chaotic the situation was. I mean, it was like getting searches back at midnight, working till three in the morning, the owner of Twitter coming in at 12:30 wanting to schmooze. And, you know, I second everything Renee says on the question of, should these platforms, not just Twitter, be more transparent? Do we have a problem with private companies that have sort of unaccountable power over the public conversation? And to what extent are they, you know, doing the bidding of organizations like the FBI? Like, that's something really important that every citizen has a right to know, not just me and Michael Shellenberger and Matt Taibbi. But I just can't emphasize enough that the idea of going in and saying, give me a report or a summary on XYZ, that just wasn't something that was possible while we were there.

Right. Okay, well, let's get to what was found out, or what has been found out so far. As preamble, I just want to say I think the big story here, which certainly goes beyond the case of Twitter, is our ongoing struggle to deal with misinformation. And this is something that Renee obviously knows a lot about. But it seems to me that this is the kind of thing that may never be perfectly solved in the absence of just perfect AI. And when you look at what imperfect solutions will look like, they will always throw up both type one and type two errors. So, you know, any attempt to suppress misinformation is going to suppress real information. And that'll be embarrassing and cause some people to be irate and to allege various conspiracies. And it will also fail in the other way: lots of misinformation will get through and fail to be suppressed. And this isn't merely an engineering question. This is an ethical question.

It's a political question. Even in the simplest case, where we know what is true and what matters and what we should do on the basis of these facts, and I would say we're very rarely in that situation at this point, but even in that best case, it can be very difficult to know what to do in a political environment where great masses of people believe crazy things. It's a question of how to message the truth in hand to great numbers of people who, as we've already said, no longer trust certain institutions or certain people, and who will reach for the most sinister possible interpretation of events and, you know, anchor there. And that seems to be the state of discourse we have on more or less everything from public health to the very topic we're talking about. So with that as the frame around this, perhaps Bari and Michael, either of you can start. I'd love to know what you think we have learned so far, and what have been the most interesting, slash, concerning facts.
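To make the type one / type two framing concrete, here is a toy sketch of a moderation classifier with a single suppression threshold. The scores and labels are invented; the point is only that tuning the threshold trades one kind of error against the other, and no setting drives both to zero:

```python
# Each post gets a hypothetical "misinfo score" in [0, 1]; anything at or
# above the threshold is suppressed.
posts = [
    # (misinfo_score, actually_misinformation)
    (0.95, True), (0.80, True), (0.75, False),  # 0.75: a true claim that looks fringe
    (0.60, True), (0.40, False), (0.20, False),
]

def error_counts(threshold):
    type_one = sum(1 for s, bad in posts if s >= threshold and not bad)  # real info suppressed
    type_two = sum(1 for s, bad in posts if s < threshold and bad)       # misinfo let through
    return type_one, type_two

for t in (0.9, 0.7, 0.5, 0.3):
    t1, t2 = error_counts(t)
    print(f"threshold={t}: suppressed true info={t1}, missed misinfo={t2}")
# Lower thresholds catch more misinformation but suppress more real information.
```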

I'll say maybe one thing and then kick it to Michael. I think that there are two main stories here. Story number one is about the way that an extremely powerful tool, one that has influenced elections and led to revolutions, claimed to have a particular mission and gaslit the public as it secretly abandoned that mission in critical ways. And it shouldn't matter what your politics are. That to me is a really important story. You don't need to go all the way and believe that Twitter's a public square to believe, as I do, that it has an enormous influence on the public conversation, on deciding who is a hero and who is a villain, on any number of things. The second headline story is the very close relationship between the federal government and one of the most powerful tools of communication in the world. I think those are sort of the two core stories that came out of all of the reporting. I don't know if we're on Twitter files number 121 or whatever, but those to me are the two biggest headlines. Michael, do you agree with that framing?

Which topics are you most concerned about: the messaging around COVID, or Hunter's laptop? What do you consider to be the center of gravity here?

On the cultural side, I'll leave the government conversation to Michael because he did much more on that. To me, it's the way that Twitter actively narrowed the conversation. I don't know if we want to get into Hunter yet, but yeah, I certainly think that when a private company decides to lock out a newspaper doing truthful reporting weeks before an election, on the spurious grounds that it was based on hacked material, as if that isn't what is printed in places like the New York Times and the Washington Post every day, yeah, I have a huge problem with that. But I think one of the core things that came out of what we saw, especially in the shadow banning of people like Dr. Jay Bhattacharya, was the way that people inside Twitter actively worked to make things untouchable, to make people untouchable, to make particular viewpoints that have turned out to be very vindicated untouchable, and therefore profoundly shaped the public conversation about something like COVID and the right way to respond to it.

I think that is a really significant story. Michael?

I would say there are three areas. The first had to do with the crackdown on misinformation bleeding into a crackdown on free expression, which I think you alluded to, Sam. And I'll give one big example, which is Facebook, under pressure from the White House, censoring accurate information, emailing the White House to say, we are censoring this accurate information because we view it as encouraging vaccine hesitancy. Now, they didn't exactly black it out. They repressed the spread of it, but it is a form of censorship. Twitter did a milder version of this with Jay Bhattacharya and with Martin Kulldorff, who just simply said not everybody needs to get the vaccine. And they put an interstitial on it, which is a kind of warning thing saying, official Twitter censors say that this is not right. That was a case where, in the case of Kulldorff, he was expressing an opinion, and he is a Harvard medical doctor, not to stoop to credentialism, but he certainly, I think, had a right to weigh in on that question. And then in the case of Facebook, there was no transparency here. And I should actually pause and just say, however much we disagree on many things, I've had the pleasure of being able to have an ongoing conversation with Renee over the last few weeks, and we both very strongly support transparency. I agree with Renee and others who argue that transparency would solve a lot of these problems.

If Facebook had simply said, of the things it was suppressing, we are suppressing these views because we are encouraging vaccines, and we're going to allow this debate in some way, there is no technical obstacle to allowing that to occur. So that's one. Number two is the laptop. I think there is a very clear pattern of behavior. I cannot prove that there was an organized effort, but nonetheless, I think that my thread on the Hunter Biden laptop shows that there was a very strange pattern. And again, maybe it was a total coincidence for both existing intelligence community officials and former intelligence community officials to pre-bunk the Hunter Biden laptop. And we can get into the details of this, but suffice it to say, I think it merits more investigation. I strongly support congressional investigation of it. I don't think we've gotten to the bottom of it. I find it extremely suspicious, and I think other people do too when they really look at it. And again, maybe I'm overly pattern-recognizing here. I hold that as a possibility, but I think there's something really interesting there that has to be talked about more.

And then the third thing is just this grotesque inflation of the threat of Russian influence operations. It was being used as a cudgel to basically deplatform, deamplify, censor, demonize, disparage, and discredit people that did not deserve that. That's sort of what Matt talks about today. And, as came up in an email exchange with Renee, it's not just a single thing. I mean, it was being used as justification for all sorts of things, including censoring this laptop. It became a kind of boogeyman. And one thing I wanted to do on this podcast and say very clearly is that I do think Yoel Roth turns out to be a more complicated character than he had been perceived as in the beginning. He repeatedly would point out that various things were not violations, including the thing that Trump was deplatformed for. He said very explicitly that Trump had not violated Twitter's terms of service. And they then worked to create a justification for deplatforming him. Same thing with the Hunter Biden laptop. They said that it had not violated Twitter's terms of service.

They were very clear on this, and there were other instances of it. Now, Yoel Roth was then basically overruled by the people above him. So he was a good company man, but I don't think that the demonization of Yoel Roth that occurred perhaps earlier in the process of looking at what happened at Twitter was fair. I've mentioned him here in this context because he was the one that was often pushing back against the abuse of this Russian influence operations narrative.

You mean it wasn't fair when Elon branded him a pedophile in front of 120 million people?

No, that was obviously wrong. Absolutely. No hesitation.

Ditto, obviously.

So Renee, feel free to say whatever you want here, but I would love to get your take on the Russian disinformation piece too.

Yeah, sure. So I think that where I come down, and Michael and I have been emailing about so many of these issues over the last couple of weeks, I really come down in a place where I feel like there are nuanced moments here. And as we talk about, for example, Yoel Roth pushing back against some of the things that happened: content moderation is the story of people trying to make the best possible decision in line with a particular policy that a company has written, and then some sort of sense of even-handed enforcement. So you have the policy and then the enforcement; these are sometimes two different things. And what you then have is people in the most high-stakes, volatile situations trying to figure out what to do. So what winds up happening on Twitter, ironically, is that all of these things are reduced down to, do you think this person is bad? Do you think that decision is bad? If you think that's bad, obviously there was some sort of malice behind it. And that, I think, is a flattening of what is actually happening. There are some interesting dynamics and uses of the word censorship that I've been intrigued by as we have moved through the evolution of some of those policies over the last seven years. And just to help make sure the audience understands, content moderation is not a binary, take it down, leave it up. So that is the...

I'll use Facebook's terminology here. They have a framework and they call it remove, reduce, inform. Remove means it comes down; reduce means its distribution is limited; and inform means a label is put up. There is some interstitial, a pop-up comes up, or there's a fact check under it, or YouTube has a little context label down at the bottom of the video. Sometimes it'll take you to a Wikipedia article. So in that moderation framework, remove, reduce, inform, when something is reported, there's a policy rubric that says, this thing may violate the policy, and then the enforcement, whether to remove, reduce, or inform, is based on some internal series of thresholds. I am not an employee of these companies. I don't know what those are. So for me, one of the interesting things about the files has been seeing those conversations come to light. And my personal take on it, my interpretation, has been largely that you have people trying to decide, within the rubric of this policy, what they should do. So there were a couple of policies that I think are relevant to this conversation and to what's just been said. The first, on the subject of the Hunter Biden laptop, was the creation of a policy following a lot of what happened with the GRU.

So when we talk about Russian interference, I'll connect it to Russian interference for you. You and I spoke back in 2019 about the work that I did on a particular data set for the Internet Research Agency. So that is the sort of troll factory. When people think about social media interference and they think about trolls or bots, the Internet Research Agency is what they're thinking of. But there was another component to Russian interference in the election, which was Russian military intelligence hacking the DNC and the Clinton campaign and then releasing files at opportune times, for example, to distract or change the public conversation, to make the media cover these files. I think the first tranche, if I'm not mistaken, was dropped the day that the Access Hollywood Pussygate tape came out, right? So the media is talking about Pussygate; all of a sudden, here's this tranche of secret documents, and the media conversation changes. So in response to things like this, and to hacked materials more broadly, the platform implements a hacked materials policy that says: despite the fact that journalists may have a particular point of view about how to treat hacked materials, the platform does not necessarily have to share that point of view, because sometimes hacked materials turn out to be sex pictures or nudes that are sitting on your phone, or a variety of other types of private conversations that get dropped. So this policy covers things beyond the contents of a wayward laptop from a presidential son. And so again, they're not writing the policy for Hunter Biden's laptop. They've written the policy, and then you see in the conversation them deciding whether and how to enforce it. And this is where the conversations with the FBI come into play.

Again, my personal take: I felt like the enforcement on the Hunter Biden laptop by Twitter was quite foolish. I thought, this is one of those, like, the horse has left the barn. You're creating more problems for yourself by trying to censor, particularly an article as opposed to the contents of the laptop itself, right? That's one thing. You can enforce your policy on hacked material by taking down the nudes that were going up and saying that violates our terms of service, without saying that the New York Post article digesting the contents of the laptop also violates the terms of service. This is where you see some of the debates about the enforcement there. But the...

Actually, just to linger on that distinction, if I'm not mistaken, this was true when I last looked, but perhaps something has come out since: Biden and his team never asked for the story to be suppressed on Twitter. Weren't they just asking for the contents of the laptop, like nude photos of Hunter Biden, to be taken down?

So, my understanding from when that Twitter files thread went out: I and others went to the Internet Archive to go see what the substance of those tweets had been, and they were in fact nudes. Does that mean that they were all nudes? No, because again, we have a very particular, filtered, anecdotal view of what happened with regard to those requests. We're told the Trump campaign requested takedowns, sorry, the Trump administration requested takedowns, the Biden campaign requested takedowns, and then we have a list of like four or five different tweets. And so that, again, is where, depending on your framing and your perception, this was either egregious jawboning or somebody trying to get nudes taken down off a platform. But from what I have seen, it was the latter.

But don't we think that the scandal was the fact that Twitter locked out the New York Post?

Yeah, and I'm not in any way saying that I thought that that was a good decision. That was what I meant when I said that the suppression of the article was bad. Facebook did something different. I don't know if you remember what Facebook did at the time, but Facebook actually used reduce, and Facebook said, we are going to throttle the distribution of the story while we try to figure out what is going on here. Now, the question of, is that throttling censorship? Is a subsequent label censorship? This is where we've really moved very, very far in our use of that term in the context of social media moderation. My personal feeling, very strongly, is that it was political. The first "labeling is censorship" articles began when Twitter began to fact-check tweets by President Trump. It did not take them down. It did not throttle them. It put up a label, a fact check. I think that's counterspeech and contextualization.

This is my personal view on it, but we began to see, again, a flattening of the conversation, where remove, reduce, and inform were all contextualized as egregious overreach and censorship. Where I come down on a lot of these questions is: I recognize the complaint, I acknowledge that things were not handled well, and I ask, what do you want done instead? If you do not want the label, if you do not want the reduce, and if you definitely don't want the takedown, then is the alternative simply a viral free-for-all at all times, with every unverified rumor going viral and the public being left to sort it out? I'm very curious about that, particularly because journalism is supposed to be about informing the public, with a recognition that journalists themselves serve a filtering function, serve a fact-checking function. We can debate whether that's partisan or biased or this or that, but there is, I think, a core belief at the center of the profession that there is such a thing as the best information that we have in this moment, and how do we convey that in a particular information environment? That's where a lot of my work has been, but I'll stop talking there, because I think that the complexities of content moderation are too often viewed as right versus left, take down versus leave up. They're really filtered through the context of the American culture war, and this is a global platform trying to figure out, what the hell do you do when Modi's government requests a takedown? These are the policies that have to be made here.
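As a reader's aid, the remove / reduce / inform framework Renee describes above can be pictured as a tiered enforcement function. The thresholds here are invented for illustration; as she notes, the platforms' real internal thresholds are not public:

```python
from enum import Enum

class Action(Enum):
    REMOVE = "remove"  # take the content down entirely
    REDUCE = "reduce"  # leave it up, but limit its distribution
    INFORM = "inform"  # leave it up, attach a label, fact check, or interstitial
    NONE = "none"      # no enforcement

def enforce(violation_severity: float) -> Action:
    """Map a policy-violation assessment to a tiered action (hypothetical cutoffs)."""
    if violation_severity >= 0.9:
        return Action.REMOVE
    if violation_severity >= 0.6:
        return Action.REDUCE
    if violation_severity >= 0.3:
        return Action.INFORM
    return Action.NONE

print(enforce(0.7))  # Action.REDUCE: throttled, the way Facebook handled the Post story
```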

Yeah, you just capitulate and hope no one notices. All right, so I just want to add something to what you said, Renee, because people are acting like they want everything to just rip however the algorithm sees fit, and like any curation is nefarious. And yet we know we have an algorithm, or a set of algorithms, that preferentially boosts misleading and injurious information. So the truth is always playing catch-up to the most salacious lies, and if that's going to be the status quo, there's no way you build a healthy society and a healthy politics on top of that. So I think anyone who thinks about it for five seconds knows that they don't want that, and therefore you have to get your hand on the wheel, at least a little bit. And whether that hand is some other algorithm or the actual conscious curation of monkeys, you need to intrude on what we currently have built. And it comes back to how transparent those intrusions are, and then what people make of those efforts based on our divided politics and our tribalism.

And I think that the transparency piece is the common ground, the one area where we can actually move forward. All the platforms have transparency reports; most of them are aggregated stats, and they're not particularly interesting. But Google has an interesting practice: Google actually will say, here's a government takedown. Here's approximately the request. Here's what we received. Here's what they asked us to do, and then here's what we did. They're very terse one-sentence, two-sentence summaries, but I really love that. I think of that as a best practice.
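The Google-style entry being endorsed here, "here's the request, here's what we did," amounts to a very small public record. Here is a sketch of the minimum such an entry might contain; the fields are a guess for illustration, not Google's or Lumen's actual schema:

```python
from dataclasses import dataclass

@dataclass
class TransparencyEntry:
    """One public line item in a takedown transparency log (illustrative fields)."""
    requester: str        # e.g., "government of country X", "DMCA claimant"
    request_summary: str  # one or two sentences describing what was asked
    action_taken: str     # "removed", "reduced", "labeled", or "declined"

entry = TransparencyEntry(
    requester="government of country X",
    request_summary="Requested removal of 12 posts alleged to violate local law.",
    action_taken="geo-restricted in country X; not removed globally",
)
print(entry)
```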

There's the Lumen database, which does this for DMCA takedowns, which are usually companies, sometimes others, requesting takedowns related to copyright violations. Again: here is the request, here's what we did. And I think that is an optimal path forward, because you cannot have a wholly moderation-free environment. Every algorithm, just speaking of curation, has a weighting in some regard. There is no such thing as neutral. Even reverse chronological is a particular value judgment, because they're weighting it by time. And you can see this actually quite clearly now on Twitter. If you look at the For You page, I think they're calling it, For You versus Following, you see different types of things. You can go and you can look at a chronological feed. You will see that For You is often bait.

It's the most outrage-inducing; you're going to go click into this, you're going to go fight with that person. That's great for the platform. The chronological feed is not necessarily as engaging. It's not necessarily going to keep you there, but it is a different mechanism for surfacing information. What we're ultimately talking about here is incentives. It is a system of incentives, it is a system of judgments, and that is true in algorithmic curation as well as content moderation. I do think that the public does not actually understand the extent to which an algorithm deciding to curate and surface something shapes their behavior, shapes what they do next. This is a thing that I feel like I'm trying to scream from the rooftops: it's not just about, is this person being censored, is that person being censored? It's actually, what is being amplified? And that is potentially the far more interesting question as we think about how to build a system that vaguely mimics a public square.
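Her point that even reverse chronological is a value judgment is easy to see in miniature. In this toy example with invented posts, the only difference between a For You-style feed and a chronological one is which signal the ranking weights:

```python
# Toy feed: each post has an age in hours and a predicted-engagement score.
# Both orderings below are "algorithms"; they just weight different signals.
posts = [
    {"id": "calm_update", "age_hours": 1, "engagement": 0.2},
    {"id": "outrage_bait", "age_hours": 20, "engagement": 0.9},
    {"id": "friend_photo", "age_hours": 5, "engagement": 0.5},
]

chronological = sorted(posts, key=lambda p: p["age_hours"])           # weights time only
for_you = sorted(posts, key=lambda p: p["engagement"], reverse=True)  # weights engagement only

print([p["id"] for p in chronological])  # ['calm_update', 'friend_photo', 'outrage_bait']
print([p["id"] for p in for_you])        # ['outrage_bait', 'friend_photo', 'calm_update']
```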

I've run the simplest algorithm over here, which is to delete my Twitter account, and it's impossible to exaggerate the effect it has had on my mind and life not to be on Twitter. I recommend it to anybody. When I have checked back on Twitter just to prepare for this conversation, I am just fucking aghast that I spent that much time there. It's a mortifying glance backward over the previous 12 years, even the good stuff. It's for the same reason I'm not on TikTok now or any of these other platforms: it would just be a time incinerator. When I look back at my engagement with Twitter, it's amazing to me. So there's something pathological about, I think, every variant on offer. And that's not to say that it would be impossible to build a social media network to everyone's benefit, but Twitter isn't it. And it's just very interesting to have unplugged. I've done podcasts about Facebook without ever being on Facebook, because it's of enormous importance to society, both what it does appropriately and what it does badly, and hence this conversation. Michael, do you have anything you want to insert at this point?

Well, I would say the three things I raised, the first of which was the need for transparency in content moderation, because there is some amount of censorship going on of justifiable opinions and accurate information. That's kind of the big social media thing. The other two I mentioned, I think, really have more to do with the FBI, and whether we think it's an apolitical law enforcement organization. The third is around this inflation of the Russia threat, which is not specific at all to social media, but I think is extremely important, because we all know it's a terribly dangerous thing to underestimate a threat. But in fact, exaggerating a threat has very serious problems associated with it too, including the ability to abuse that, which we saw in terms of deplatforming and deamplifying people that were innocent. In other words, calling them Russian-influenced as opposed to Russian. I think that we need to get to the bottom of the FBI issue and the treatment of the Hunter Biden laptop, and also, I think, have a real honest conversation about this issue of Russia threat inflation. But I'm not sure. I think if we all agree on transparency, that's great. I just think we should acknowledge that, for example, a lot of the misinformation is coming from the sources you might think of as the people that are not the sources of misinformation, and that sometimes it's innocent. We all thought that if you got the vaccine, you were either not going to get sick or you were not going to be able to transmit it. Those two things both turned out to be wrong.

It seems to me that having a conversation about those edges of science is exactly what you would do on something that we call a platform. And so I think it resolves a little bit if you say, look, if you're a platform, you have this incredible privilege, which is that you're going to have this limited liability, basically. But the flip side is that those platforms are also curating. And so you get yourself in a funny position, which is, okay, well then, how do you resolve that? And it seems to me this is how you have to resolve it: if you're going to have this amazing privilege of being a platform, rather than a media content producer, then you must be transparent about how you're making those decisions. And there must be a place for people to appeal if they're being censored or just throttled or reduced, or even if there's an interstitial. I have an ongoing conflict with Facebook about the cause of high-intensity fires in California. I have the top scientists in California saying high-intensity fires are caused by fuel wood load rather than temperature changes. I am not allowed to express that on Facebook. I have been so severely throttled by Facebook that it's basically a dead platform to me. And when I asked, when I demanded an appeal of the fact check, they said, go talk to the person that censored you. It pisses me off, and it needs to be resolved.

Whereas if you say that it's due to Jewish space lasers, you might have an open lane on Facebook. So you've got some nice followers. I want to drill down on a couple of points you just made. It seemed to me that the story was changing in real time as the Twitter files were dropping. Initially, the story seemed to be that the meddling on the part of Twitter's ultra-woke employees was just horrific and horrifically biased. But then it seemed to shift, and it became more of a picture of Twitter resisting an intrusive FBI, and actually resisting fairly professionally. Or at least they eventually caved, I guess, but they seem to have surprised some of you in how much they in fact did resist. And so, viewing it from the outside, it seemed like the goalposts kind of moved a little bit. Like first, we're just meant to be astounded by what the back room of Slack looks like at Twitter. But now we're seeing that no, actually, they were quite tied in knots over following their own policies and were just really getting worn down by the FBI's requests. How do you view it there? And then I guess the other piece I would add is, here we're obviously talking about Trump's FBI, right?

Run by a Trump appointee. So that does kind of muddy the picture of frank political bias being the reason to suppress

or pre-bunk a Hunter Biden laptop scandal. Yeah, I definitely think that my perception of what was going on changed over time. Of course, we were each only responsible for the threads that we wrote. So by the time I came in, it looked like Yoel Roth was doing more pushing back. At the same time, on both of the two issues that I looked at and was involved in, the decision to deplatform Trump and the decision to bounce, which is the technical word, the New York Post account, in both cases Yoel and his team had decided that there was no violation, and then they reversed themselves. Now, to some extent, you go, well, I think Renee articulated this a little bit before, the rules are evolving over time as they deal with real-world cases. So you can't be too much of a stickler of, well, we wrote the rules and we can't change them. But there wasn't transparency about how that was happening, hence another reason for transparency. And in the case of the Hunter Biden laptop, I find Jim Baker's behavior extremely suspicious. He is clearly a deep anti-Trumper. He is the person that Hillary Clinton's attorney, Michael Sussmann, came to, to share false information about an alleged Russian bank that was supposedly communicating with the Trump campaign, triggering the investigation. He then goes to Twitter and appears to have played a major role in reversing the initial conclusion by Yoel Roth's team that the laptop had not violated any of Twitter's terms of service and that the laptop appeared to be authentic

rather than a product of Russian disinformation.

And then again, you get to a point with this stuff where you kind of go, I've done all I can in terms of going through the data. I've made as strong a statement as I can about this appearing to be a pattern of behavior, and appearing to be organized. And now I just think it's in the hands of Congress, which needs to get to the bottom of what was going on over there, and they may never do it. But there was stuff going on, Sam, that was weird. Like, why in the world did the Aspen Institute do a tabletop exercise with the national security reporters from the New York Times and Washington Post and the safety officers from Facebook and Twitter to talk about a potential Russian hack-and-leak operation involving Hunter Biden? Why did they do that in August and September? Keep in mind that the FBI had the Hunter Biden laptop since December 2019, and that they were listening to Giuliani's calls. I find the whole thing extremely suspicious. Now, of course, Hunter Biden was in the news, and that was what Trump got impeached over, yes. And it may be that I am, again, engaging in overly eager pattern-matching, trying to recognize a pattern here that's not there, and that's possible. But I do think there's something very strange going on that we haven't gotten to the bottom of, and we should. And the fact that Trump had his appointee there doesn't mean that there wasn't potentially some effort by both existing and former intelligence community operatives to basically engage in a PR campaign, or what we used to call psyops, or what now gets called influence operations, to basically pre-bunk the Hunter Biden laptop, so that I personally, and my entire liberal Democratic family, and all of the Democrats I know, thought that it was Russian disinformation.

I mean, I didn't even really know that it wasn't until pretty recently. I just assumed it was Russian disinformation; I didn't take it seriously. So partly I kind of go, I know how I experienced that episode, which was to go, I don't know, it sounds like it was probably Russian disinformation. And it wasn't, and they knew at the time that it wasn't.

And that seems to me quite important. Let's just close the loop on that if we can. Now, this laptop has been studied, certainly by many, many people right of center, for many months.

Is there anything there that was important? Oh my gosh.

Yeah. Is there anything that ties back to his father that is a big deal? Yes. Like what? Oh. I'm still in a bubble. I still haven't heard this story apart from a single line in an email saying, I gotta give 10% to the big guy.

And we assume the big guy is Joe Biden. Yes.

Oh no. No. Significant quantities of information. I mean, you have a photograph of them with Kazakh businessmen who were part of the natural gas pipeline to Ukraine. You have Joe Biden leaving his son a voicemail message saying that he thinks the New York Times piece about Hunter's business dealings turned out okay. You have Tony Bobulinski, who was a partner of Hunter Biden's, and nobody has questioned his credibility, nobody has challenged anything he said, saying that he met, early on in the relationship, with Joe Biden and Hunter Biden, where Joe made it very clear that he was part of this business operation. So again, I don't even know. I voted for Clinton and I voted for Biden. And I don't know. I have problems with Trump, as I know you do.

I know that all three of you do as well.

I love Trump. I love Trump, I don't know why you got this idea. You must be a victim of misinformation. Sam's been really, really subtle about his political views about Trump.

I've got my MAGA hat here somewhere. Let me get it.

He's really, really hard to read on the subject. But nonetheless, as a voting citizen, there is no reason that there should have been that level of suppression. I mean, we should have known right away. The laptop was verifiable. The FBI could have verified it within minutes or hours. It took Peter Schweizer a few days. It probably would have taken the Washington Post or the New York Times even less time, if they had put people on it, to verify it. You just had to cross-check it with public records. And anyway, we now know that it is totally intact, that it hasn't been tampered with. So for me, I kind of go, it is what it is. I mean, it is what it looks like: Hunter Biden was a grifter selling his family's name, selling access to his father, bringing people into the White House, bringing his father to dinners with Ukrainian businessmen, having photographs taken with Kazakh businessmen, all about trading influence to broker some pretty big deals. And I work on energy, that is my main focus. These are huge natural gas deals, gas being kind of the most important fuel in the world right now.

This is not subtle stuff either. And especially if he's in bed with the Chinese, with guys that are tied to Chinese intelligence. Yeah, I think that's very, very concerning. That should be very concerning to anybody

no matter your politics.

It should be very... Yeah, let me just add a footnote here, because I think my views on this topic are much misunderstood. I totally agree with everything you just said, even though I'm hearing some of those details for the first time. And I welcome an investigation into all of that. And I think we should have understood all of that. The thing that I resist is the idea that we ever should have been hostage to Rudy Giuliani's timetable, where this thing gets released a couple of weeks before an election, as an October surprise, when there's not enough time to get to the bottom of it, to find out just what is innocuous and what is super important, just so that it can have its attendant effect of obliterating the political discussion and driving the polls for Biden downward. My high-level view of it is unchanged, which is that there's really nothing that could be on that laptop signifying Joe Biden's venality and corruption that would overwhelm my impression that Trump and Trump's family are 10 times more corrupt. Both of these men have been living in public view for practically as long as I've been alive. We know a lot about how they live. If Biden were rolling around in a Maserati and had 25 homes, well, okay, then I think something really interesting might be going on, and let's get to the bottom of it. But we know how Joe Biden lives compared to Trump. Trump's corruption leaks out of him with every utterance, and his whole presidency looks like a grift.

So I just think that if you're going to balance the scales between corrupt politics and nepotism and grifting and bad incentives, and just keep heaping it on both sides, one marked Biden, the other marked Trump, I know how they balance. And still, I agree with you that we should know everything. It's just that we couldn't know everything 10 days out from the election with the clock ticking. That was always my concern. And no, I don't support Twitter's decision to knock the New York Post off the platform. Where I come down is that it made sense for respectable journalists to avert their eyes from the story until after the election, given that we knew what happened with Hillary's emails

in 2016 and how decisive that seemed to be. Well, at some point, other stories are important in the world, but there were so many things that we saw at Twitter that just, culturally, I think would be extremely interesting. Maybe Renee will think this is dog bites man, but for a lot of normal people who are vaguely aware of this platform called Twitter and understand that it helps shape public discourse, they would be absolutely shocked by the extent to which even the public conversations in Slack at Twitter sound like a gender studies classroom at Oberlin College. You know, I went looking just for names of people that were prominent, to see what was said. And I went looking for J.K. Rowling, and you see people in public Slack threads talking about how her tweets fall under the company's dehumanization policy, or how she's back on her transphobic bullshit and she violated the community guidelines on the platform, and let's figure out how to band together to get her kicked off. And this is the kind of stuff that's happening just publicly at the company.

That doesn't surprise me at all, I must say. I don't like it at all, but that's what I would expect was going on at Twitter.

That's sort of where I would come down on it too. That's not a space that I personally work in, and most of my engagements with Twitter in a professional capacity over the years actually focused on the integrity team, primarily around state-actor and inauthentic networks. And that was at a global level. Again, we were looking at Saudi Arabia and Egypt targeting Iran, not just the American culture wars. But people are very free, and they discuss things in Slack. And this, again, just to circle it back to transparency, is where you would want to be able to see why somebody is actioned, if they are actioned in a particular way. And I do think that moving forward on transparency in enforcement is, in fact, the most critical thing to restoring public trust. I think the appeals process as well is important. We've seen Facebook try to do this with the Oversight Board, with varying degrees of success, and that's a whole other podcast, I think. But that combination of here's a policy, here's an enforcement, here's how we can look at how the policy was enforced, means people can weigh in on whether they were enforced against fairly. I think, again, what you're getting at, though, is that policies are written by people and they reflect a particular set of values. And so, per what Sam just said, I am not surprised.

That's my point. And also, I think that the question of whether chatter in a Slack translates to an enforcement is really the key connect-the-dots piece that we need to see there. Not just that they're talking about it in Slack, but does Ms. Rowling, or anybody else, then in turn get a strike of some sort? And that, I think, is the piece where we need to have a little bit more dot-connecting between here are some employees griping in a Slack channel and an actual action. And again, it depends also on which Slack channel it is. Is it just some random employees having a conversation? A lot of these Slack channels are water-cooler-chat-type stuff,

versus the enforcement team doing it. That's my point. But I think what's relevant is that institutions are people. If 99.9%, or whatever the percentage is, of Twitter employees are not just liberal but pretty hard-line leftists in their views, or at least the ones who are speaking publicly, and I think Matt Taibbi pulled the donations, it shouldn't surprise us then that they are labeling things as mis- or disinformation or hate speech or whatever label they want to put on it, things that are just normal, frankly, or heterodox, or vaguely conservative. And I do think that that is scandalous. But Barry, we knew. Who was it? Was it Meghan Murphy? Yeah, when she said men aren't women though, in 2018. A lifetime ban for that.

So we knew just how insane it was. She's back now though. Yeah, along with many other people. Although not Kanye, that was a surprising ban. Can you really ban swastikas? That seems strange.

But that's also subjectivity in action, right? You're saying the same thing. That is also the subjectivity of who is in charge making the determination, I think. And so again, I think this is where, interestingly, you have seen the rise of alt platforms, right? We haven't talked about that at all. When you are moderated against in some way, and I am not saying this is right or wrong, I hope that people can distinguish between descriptive and normative here, when that happens, you do see a migration to Telegram, to Parler, to Truth Social, to Gab. There is an entire constellation of platforms that are designed to continue to give people platforms, particularly people who want to say certain things that a mainstream platform has decided are out of the bounds of polite conversation under its moderation frameworks. We can argue about whether platforms have the right to set those moderation frameworks, right? I would argue yes, and I would continue to argue that, by the way, even with the change in ownership, just to be clear. I believe that not every platform has to have some kind of government-decreed standard of niceness or norms, and that, in fact, we have seen over the years a variety of different tolerance levels take shape.

You can have nudity on Twitter; you can't have it on Facebook. There's a huge spectrum there. I think that's actually good, right? Because then audiences can choose where they want to go. So the question is, again, if you don't agree with the policy, users can, of course, complain about the policy. If you've been enforced against on the policy, there are other places to go. Is it egregious that Twitter has enforced a policy that it has articulated? That's where I start to say, okay, again,

the question is, what do you want the policy to be?

But I'm kind of trying to step back and say: on the platform where the majority of journalists, politicians, and policymakers happen to be, and it's not TikTok, it's not Facebook, it's not Instagram, it's here, we're concerned about living in a democracy where there is rampant actual misinformation. Isn't it a very dangerous thing, then, to take views that are normal, views that fall well within the 40-yard lines of acceptable political discourse, and say that they are outside of them, and, in fact, label them as actual misinformation? That's what I think is important and relevant. I don't want to live in a world where someone who's critical of lockdowns, or someone who says, hey, wait a minute, maybe it was a bad thing for schools to be closed for a few years, let's talk about it, is regarded as a heretic. That's bad for the country. That's all I'm trying to say.

I agree. No, and I completely agree. And I feel like I'm the token platform excuser here; I'm really just trying to add context on what happened, and why, over a period of time. There was an attempt to differentiate between expressing a point of view versus targeting an individual with a point of view. And that's where you start to see some of the hate speech language focus very specifically on whether it targets an individual, and enforcing on that differently, again, theoretically, than simply expressing a point of view that is debating a particular ideology around gender or race or what have you. So there was an attempt to say the harm is caused when you target a person, leading a person to feel a certain way, as opposed to expressing a general point of view. This is how the policy language was actually written. Again, the enforcement is the part where the rubber meets the road: you see moderators who don't really understand cultural context, or are politically biased in their own way, or are more inclined to take a hard-line view on one position or another in line with their own feelings on the matter. That's where you see the differentiation between policy and enforcement come out in some pretty unfortunate ways. I want to just mention the misinformation thing, though, because that policy is not the same as the misinformation policy. And this is where, again, per your point about normal people, unless you've been sitting there the way we do, where we just go through every one of these policies... you know, my team has done work on elections, we've done work on COVID misinformation.

In response, we are regularly assailed as some sort of evil censors in collusion with the government to silence the speech of millions, which is just absolute horseshit. But again, perception and reality are not the same thing here. People don't necessarily know what they've been enforced on. Sometimes it shows up in your inbox and you can maybe intuit what the offense was, but that's not made transparent to the public. And this, again, gets at that question of, when somebody is enforced against, what is the violation, and putting it out there in a way that the public can see it as well. When that change is made, you'll see some complaints around, well, this should have been private, do I necessarily want everyone to see this? But I do think overall it is beneficial for everyone, for the platform, the user, the broader public, to understand why an enforcement action took place and under what policy. Misinformation, and I've been saying this for several years now, as I've done this work for a long time, my own evolution on it is that misinformation has had egregious scope creep as a term. And I think a lot of that was media coverage. I found it very frustrating. Again, I think with Joe Bernstein's big disinfo piece, there were some things where I was like, he's absolutely right.

There were others where I was like, no, you know. But what started to confront people, and what became very clear to me during COVID, actually, during the work that we were doing trying to understand COVID, was that we were looking at rumors, not at misinformation. And I know this sounds like some niche academic thing, but I think it's actually important, because rumors mean you have unofficial information circulating. This is just what people think about the world in an environment where the truth cannot be known. And so the policies that say this is true, this is false, just completely misinterpret what is actually happening in those moments. And so the question, getting back to the core thing that I'm going to harp on, is: okay, so what should happen then, right? If the problem is a time differential between something going viral and us being able to know the truth, then do we let it rip? And again, with some of the COVID stuff, we're finding out information now, and there were pivotal moments in 2020, particularly pre-vaccine, where the question was, should I be taking this particular drug or doing this particular thing? And you're not going to know the answer to that until some set of studies has been done. So you have this offset, and what winds up happening is it becomes very much a function of influence and trust, where if you trust the speaker, you're inclined to believe them and follow along in their particular view of the world. Is that just where we are, right? Are we just in this universe where people who trust this group of people are going to do this thing, and people who trust that group of people are going to do that thing? And the best-case knowledge and facts are lost in that divide, around the incentive structures of the influence ecosystem: somebody saying, I know the truth, you have to listen to me, that other group is lying to you. Because that's where we've really gone.

So even as misinformation is the wrong frame entirely, I think this dynamic of rumors and trust and curation is the main overarching question that plagues us. We're focused so much on the moderation piece at the end, not so much on how people are forming opinions, who they're trusting, and what information they're getting in that moment. And that, I think, is actually the much more important question as we think going forward

about how people collectively make sense of the world. Let's close on the topic of COVID, because I think that sharpens up the problems we've been talking about. We've been in the midst of a global public health emergency, of what scope and scale is debatable, because people have very strongly differing opinions on this topic based on how our conversation has balkanized them in the way that Renee just described. So there are people who, from the spring of 2020 onward, thought that COVID was a non-issue, less of an issue than the ordinary flu, but that the vaccines we were handed were terrifying and would kill millions of people. And then obviously there were people who were very worried about COVID and thought lockdowns and school closures and everything else were justified, and took more or less the CDC's opinion all the way down the line. And then there are many people in the middle. And yes, in the narrow case, Twitter throttled the account of Jay Bhattacharya, whom Barry just had on her podcast, or just interviewed on Zoom. And I agree that is a bizarre outcome, but you do a little digging and you see that it was not an accident that Jay was perceived as a fringe figure at that point. I don't know at what point he was throttled on Twitter, but at the end of March 2020 he wrote a Wall Street Journal piece with a co-author where he was frankly pretty skeptical about the coming death toll attributable to COVID. He had his reasons, but to my eye, his assumptions, and certainly the slant of that piece, have not aged well. I mean, he seemed to think that there was more or less equal probability that something like 20,000 people would die in the US versus the far range of 2 million. Now, 2 million was a much better guess than 20,000 even at that point.

I mean, it wasn't even close. And however you run the numbers, 20,000 is off by an order of magnitude at least, more like 50x, whereas 2 million would just be twice too high. Again, people will debate the mortality that we've suffered due to COVID. All this is to say that Twitter and every other social media platform, I think, rightly viewed themselves as having a public responsibility to curate information somehow, because there was just a deluge of rumors and lies and hysteria that was clearly going to translate into an unwillingness to follow any precautions that were being recommended, much less take vaccines when they became available. And if you believe the current numbers, and again, half the people in the country won't believe anything said on this topic, some hundreds of thousands of people probably died due to vaccine hesitancy. And that was a foreseeable outcome, provided that you believe the vaccines did anything at all that was useful for people. What we're suffering is a pervasive inability to even talk about facts or acknowledge facts in an environment where everything becomes do your own research. Barry, I guess I'll go to Barry first. Do you disagree that Twitter and the other social media platforms had some responsibility to try to bias things toward the best information available, insofar as they could figure out

what that information was? Of course. They, along with the government, have a responsibility to the public to try to save people's lives. Anyone who's honest will say that these platforms shouldn't just let actual misinformation and lies rip, and that they should have some moderation policies. But I want to go back, if I could step back for a second. When you're saying Jay Bhattacharya was perceived as a fringe figure, the reason he was perceived that way was a concerted effort on the part of people like Fauci and Francis Collins to make him and the other signers of the Great Barrington Declaration into fringe figures. I was one of the people at the time who believed that they were fringe figures

and who believed... Yes, yes, yes. Well, let me just clarify what I actually am saying. I don't think I read his Wall Street Journal piece at the time, but I just read it, and I can look at the date it was published, and I know what I would have thought about it then. It was not plausible to think, given what was happening in Italy especially at that point, that we might only suffer 20 or 40,000 deaths from COVID in the United States. That was just the bell curve of possibility he drew, between 20,000 and two million. There are many people who will never listen to another word I speak on any topic ever again, because I had Nick Christakis on my podcast claiming that we would have a million people die from COVID in the US, which, if you believe what the CDC or any other mainstream institution is saying, is about what we've had. So his estimate looks pretty good. And yet he was for school closures, he was for a lot of things that don't look so good in hindsight, but he was motivated by an expectation that we were going to lose a million people. At precisely that moment, Jay was motivated, it seems, by an expectation that we were probably going to lose far fewer than that. And so, to my eye, he was quite wrong to expect that at that point, because I know what it was like to talk to people who were skeptical that COVID even was a thing at the end of March, when Italy was crashing.
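[Editor's note: the asymmetry Sam is pointing at is simple arithmetic. A quick sketch, assuming the roughly one million actual US deaths cited in this conversation:]

```python
# Rough arithmetic behind the point about the 20,000-to-2-million range,
# assuming ~1,000,000 actual US COVID deaths (the figure cited above).
actual = 1_000_000
low_end, high_end = 20_000, 2_000_000

print(actual / low_end)   # 50.0 -> the low end was off by a factor of ~50
print(high_end / actual)  # 2.0  -> the high end was off by a factor of ~2
```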

And it looked crazy then and looks crazier now. But we can debate who was right or wrong about the number of deaths, and then we can also debate who was right or wrong in the main about whether or not mass lockdowns were the right policy. And I think if you actually go and read the Great Barrington Declaration, in large part that view of things has been totally vindicated. When you look at the follow-on effects of a lot of these policies, the idea that there should have been absolute protection and lockdown for vulnerable people, but the ability for, say, children to go to school, that seems really sensible now. Back in the height of that extremely heated moment, I remember listening to you, and I thought that that was crazy. I really, really did. I thought that that perspective was crazy. But now, with the humility of the past more than two years, I can say, hey, wait, let's go and revisit that. First of all, why did I think that that was heretical and irresponsible? What new information has come out that has had me revisit my view? And what was the broader context at the time that had me thinking people like Martin Kulldorff or Jay Bhattacharya, and I think there was also a Dr. Gupta who wrote it, were off their rockers? And I think that's in part why the Twitter story has been important: trying to understand the combination of people like Fauci and Collins, and people should go look at the FOIA emails that came out about the way that they actively tried to spin the Great Barrington Declaration as fringe, as people that didn't have the proper credentials, and also looking at the way that places like Twitter framed that debate.

The reason this is important to me is that if you were someone who was basically told you were a grandma killer, and then it turns out your view of things was kind of vindicated, it's very easy to fall down the rabbit hole. And this is what I see happening over and over and over again. It could be on COVID, it could be on puberty blockers for kids, it could be on a million topics where people are told, you're a conspiracy theorist for believing that. Then the thing turns out to be true. And in the meantime, that person has moved 10 steps further into insanity. And they believe that not only is the truthful thing true, in other words, that vaccines don't stop the transmission of COVID, but they have moved on to believing that the vaccines are a scheme by Bill Gates to put a microchip in your brain because of the great reset. To me, that is the heart of it, and I didn't articulate that quite properly, but you guys know what I'm getting at. The reason that I think this conversation is so important is that I think it is essential that we keep the parameters of healthy debate as wide as possible, so that people don't go off into the Gabs and Parlers and Truth Socials and fall off into the deep end, which is what I think is happening more and more often as we see the Overton window being narrowed to a sliver. So that if you don't agree with a laundry list, one that grows by the day, of what's politically acceptable, then all of a sudden you're like, well, maybe those are my people over there,

and I don't want people going down that rabbit hole.

Yeah, Renee. I think, I mean, you're getting at something that I think is... Sorry if none of that made sense. No, it does. I was pregnant during 2020, right when COVID first started, and I also had two other kids, and my kid was taken out of school when schools shut down in San Francisco, and I was very happy about that in March of 2020. I'm not going to lie, because people were dying. The visuals beginning to come out of China were horrific. And so what you're getting at really is this question of how do you know what to trust, particularly as you're articulating something where you feel increased confidence in it, looking back in hindsight. So just to level-set really quickly on why the policy was what it was, as I've done on all these issues: in 2019, the platforms began to crack down on anti-vaccine misinformation, and what I mean when I say that is actual misinformation in the strict sense of the word, information that we know to be false. This was primarily around the vaccines-cause-autism trope, and it was very specifically because of two measles outbreaks, one in Brooklyn and one in Samoa. The one in Samoa killed about 85 people, and the platforms were really struggling with the fact that anti-vaccine activists were actively using the platforms to undermine the drive to get MMR rates up in Samoa, which had dropped because of a situation where some nurses had misadministered a vaccine, they'd mixed it with the wrong thing, a child had died, and then anti-vaccine activists had piled into that moment to try to undermine confidence in vaccination, eventually leading to this outbreak. So that is some of the backstory to where those policies came from: a desire to try to stamp out the kind of stuff that was leading to deaths, and of course there were congressional hearings and letters going around saying, why is your platform hurting kids, right? So that's the policy. Then, as COVID breaks, the policy was to try to boost the World Health Organization and the CDC. And I had very, very mixed feelings about this, because candidly, I think the World Health Organization and the CDC produce garbage content that is never going to go viral and not really going to be read. It misunderstands how institutions fail to communicate, again, a whole other podcast. But as COVID was happening, it was really not the institutions who were covering themselves in glory and communicating the most up-to-date information at the time. In fact, they were largely reticent, and you had these other voices, particularly frontline doctors, who were coming out of nowhere to talk about what they were seeing. And what you start to see is Twitter assigning blue checks to those people to try to raise their credibility on the platform, recognizing that everybody all of a sudden is looking for information and the institutions are not providing it.

And so I spent a whole lot of early 2020 writing the story of why institutions were so bad at communicating in this environment. Again, the stakes for me as a pregnant person, which means a depressed immune system and reduced lung capacity, were acute; the stats were just abysmal. Women who were getting COVID were not having good outcomes. And I felt acutely like I didn't know where to get information from. But we're all sitting there on Twitter, and that doesn't mean that if I pick somebody because they seem to be saying something that I like, I've necessarily made a good choice. And that, I think, is again this tension: when you have such a proliferation of information, how do you decide what to trust in that moment? More importantly, what does a platform decide to amplify in that moment? Without any good means of addressing or solving this, what they did, very simply, was boost the institutions, which had, theoretically, a good track record over time. Does that mean they will continue to? No, it does not. But in terms of what they did, that was the thinking behind it. So again, I was, I think, what became known as a, quote unquote, re-opener mom, because I was the person filing FOIAs for my kid's school district, saying, why the hell are we a year and a half in with no plan for anything?

So I get it. I really, really do. But when I get to, what is the appropriate way to frame that debate, how do you hear from both sides, if you will, or from divergent perspectives, in a way where we have a memory of those conversations after the fact? That memory really collapses. What you see on Twitter, or Facebook for that matter, 30 minutes later the entire conversation's changed. We're on to the next controversy, we're on to the next drama, and you don't see anybody being held to account. We talk a lot about how media has to reckon with its communication in this regard or that regard. You don't see that same degree of scrutiny applied to the past communication of influencers very often. And that's where I do feel like our ability to think about who performed well is entirely a function of after-the-fact analysis. And what we're confronted by is a very fast information environment in which somebody has to make a decision, or some algorithm has to make a decision, about what to boost in that moment. And that is, in some ways, the central tension and the central problem that's going to face us for every one of these issues going forward. How do you decide, in that moment, what a reliable source actually looks like?


Michael, jump in here. Yeah. I mean, Sam, I guess my one question is, I'm not sure I'm understanding what your point is about Jay Bhattacharya predicting only 20,000 deaths.

What was the point you wanted to make about that? Well, at the point he wrote that article, it was, at least in my mind, absolutely obvious that we were going to have hundreds of thousands of deaths if you just scaled what was happening elsewhere, Italy most importantly, at that point. It would have been a miracle to have only 20,000 deaths over the course of this pandemic in the United States, given how it was exploding. Now, I would completely agree with Jay that there was a lot we didn't know, and there were lots of studies yet to be done, and we couldn't really estimate the fatality rate at that point, because we just didn't know how many people were infected. But again, there was a range of expectations, and pretty much that week, I think, Nicholas Christakis got on my podcast and said, I think we're probably going to have a million dead over the course of the next year or two. That's not verbatim, but that was the gist of what he said. It was not crazy for him to say that at that point, given what we were seeing. He wasn't pretending to be Nostradamus. He was extrapolating from facts on the ground. And Jay was taking a very different line. But obviously he's a serious epidemiologist, albeit an outlier on that particular question at that point, and to treat him like a kook looks ridiculous. And to Barry's point, many of the specific policies that we implemented in response to those expectations haven't aged well. I would totally agree that the school closure piece looks fairly masochistic in hindsight.

But the problem I'm seeing now is that many people seem to have been right about certain things for the wrong reasons, and we're drawing the wrong lessons from that. Here's an example that I don't think will come to pass, but it's a hypothetical: if it turns out that ivermectin really is a panacea with respect to COVID, Brett Weinstein was still wrong to have asserted confidently, a year and a half ago, that it was, because there was no good reason to believe it a year and a half ago. And there was no good reason to believe, six months or a year ago, that the vaccines were going to kill millions of people.

I guess I'm trying to understand what the implications of that are. In other words, beyond thinking that Jay and Brett were wrong, are you suggesting there's an implication for social media policy, or for how we think about these things?

Yeah. I don't know what Twitter's decision-making process was like on these specific points, but I do think they have a responsibility to try to signal-boost what's considered valid mainstream information about how dangerous this epidemic is and what we should do in response to it, and to dial down misinformation or erroneous information or disinformation or some species of falsehood. And I think we do think that, otherwise it just

becomes a fun house of lies. Yeah, but Sam, what about the case I gave where Facebook is dialing down accurate information about the vaccine because they view it as an obstacle to overcoming

vaccine hesitancy? I think those kinds of noble lies are almost always... I can't think of a case where they've proved to be a good idea. They're so corrosive of trust. I think it's an awful intuition, but an understandable one as you continue to raise the stakes, and there are two variables that explain this. There are the stakes, just how many people you think are going to die in this case, and then there's just how crazy the population is, and so how much you have to treat people like children. And in the presence of Trumpistan, certainly within Trumpistan and anywhere near its borders, you have tens of millions of children. QAnon is insane, and everything leaking out of that part of our society is insane. You've got Jewish space lasers. You've got people who can't be talked to as rational actors. So the question is, how do you navigate around that when it's an emergency? Now, again, COVID was much less of an emergency than it might have been, and that's great.

That's a good thing. I remain worried about what we're going to do in the face of a much worse emergency, because I think we've had a dress rehearsal for something, and it was one pratfall after the next. And I agree with many of the people who are deeply skeptical and critical of our institutions' responses to COVID. I think they're right to be. I mean, I think it was awful to see how we responded.

And the noble lies were a symptom of the panic in the face of this problem. Yeah. And by the way, just to be fair to Jay and to provide a little correction, because I was a little surprised to hear that he estimated 20,000: I just looked at his Wall Street Journal article, and that's not really what he says.

It's not that he estimated 20,000. He put a range, right? But if you read the article, the slant of that article was, and at one point toward the end of the article it's very clear, that they think this is overblown, right? They don't think it's at all likely we're going to have a million people dead. I mean, price in the fact that we did have a million people dead: if you thought that Jay believed that we were very likely going to have a million people

die from COVID, you don't write an article like that. No, I hear what you're saying, although I think I'm also pointing to... you get to this thing where you go, yeah, you should not suppress accurate information. We're all against that. We also think that the process of discourse means we might be wrong, and so there's epistemic humility engaged in this. I mean, look, excess deaths in Britain in 2022 were the highest in 50 years, outside of the pandemic. They don't know what's going on. There are real questions about it. If you go read the BBC on it, they say, we don't think it's from the vaccines, but it also appears to be disproportionately affecting people's hearts. And so you get some reasonable debates. And when you start to get into a thing of, well, everybody has to get the vaccine, everybody has to get the vaccine because there's an overarching reasoning, you start to lose sight of, I think, the high levels of uncertainty.

So yeah, right now we're at a million-death toll from COVID. How do you factor in the fact that a significant number of those people would have died within six to 12 months from non-COVID causes? This is not a one or a zero. This is not a black or white issue. The ethics and the facts are all mixed up together. And so, again, I would keep coming back to transparency, because I kind of go, if the platforms are going to engage in that kind of activity, they need to say what it is. If you're going to have Section 230, you need to have an obligation to be transparent about those content moderation decisions. And there needs to be some place where people can make appeals and argue with Facebook and argue with Twitter. I also want to make a comment about this very fascinating conversation between Barry and Renee, about this issue of, well, some people can go to Truth Social or some people can go to this or that platform. I find myself, after this whole experience, feeling kind of sad that you're off Twitter; my friend Claire Lehmann is off Twitter; I have conservative friends who have left Twitter for totally different reasons, or opposite reasons, or the same reasons. And I'm sort of grieving the loss of the original dream here, which was a single town square where you really were actually able to have proper debates.

And so on the one hand, I'm totally distressed that that's gone, and it makes me sad. In my darker moments, I kind of go, God, we're just such a shitty species. For 10 minutes, we had this grand experiment, one that lasted 10 years, where everybody was on the same platform, and we couldn't do it. We couldn't stand it. We just couldn't stand being around each other. We're too narcissistic. We're too blinded by our own arrogance and certainty. And we're just not able to do it. And that bums me out. On the other hand, I kind of go, let a thousand platforms bloom. Maybe Tumblr will do what its owner says he was considering doing, which is creating a bit of a rival to Twitter, and having something like that. But we may not be able to do it. And I think that's maybe a new reality.

I mean, I've been enjoying Substack. I've been enjoying writing long-form and giving away some of it, and using email and websites again. But I do kind of go, boy, is it really that impossible to set this up? If you had transparency, couldn't you actually allow these things to occur? So I know these were meant to be our final wrap-up remarks, but I do want to hold out a candle for the possibility of us all being on the same platform, and being able to argue, and there being some way to resolve conflicts like the one that I'm in with Facebook. Because Facebook's response, as Renee and I were discussing, was just to not feed news to people anymore. If you saw this Wall Street Journal piece, and Renee tipped me off on it, full credit to Renee on this, Mark Zuckerberg just kind of throws up his hands and goes, I can't deal with it. It's just going to be wedding photos and baby photos, and that's it. But I look at Facebook's plummeting value and kind of go, it also abandoned the news stream. And I just think that makes it a much poorer, thinner,

weaker platform than what it was. Yeah. Well, I share your despair, but I can tell you, when you yank the cord on social media, life does get better. And I don't live with that despair on a day-to-day basis anymore. I don't know what the solution is for social media. Ironically, I think it's probably much more aggressive curation. Once you prove you're a Pizzagate guy, I don't want to talk to you anymore. And I want a platform where I don't have to see your troll army in my face every time I attempt to have

a conversation. Do we need to talk to Alex Jones? I don't. No, no. But one thing I just want to pick up on from what Sam was saying before: I don't know, maybe I'm naive or just too much of an optimist, but I don't think the lesson of the past few years is that people need to be treated pedantically or like children. I think the lesson of the past few years is that you've got to treat people like adults, and give them information like adults, and news like adults. And when you try to simplify it and propagandize, people are going to see through it, and they're going to go and do their, quote, own research. And it's going to lead to a place of unbelievably diminished trust. To me, that's one of the big takeaways of the past few years, whether you're talking about legacy media institutions or universities or the sensemaking institutions of American life. I think most people are not Pizzagate QAnon caricatures.

I think most people can make informed decisions given sophisticated information. I think one of the reasons there has been diminished trust in our public health organizations is that it was very clear we weren't being given all of the facts.

Treating them as adults isn't the same thing as giving them the tools to asymmetrically boost misleading information. And that's what our status quo has been. We've given them a fire hose of nonsense to aim anywhere they please, and now we're dealing with the aftermath of that. And I just simply disagree: doing your own research is, in the best of times, an invitation to waste your time and to come up with bad information. I have a PhD in neuroscience, and I am not even the right guy to get into the weeds of epidemiology and immunology and virology and all the things you would need to get into the weeds of to assess the safety of the mRNA vaccines. The problem has been that COVID has been a moving target all this time. What was rational to think two years ago may be less rational now, and vice versa. And we needed institutions that we could trust, that not only seemed trustworthy but were in fact trustworthy, with all of their experts lined up in a row, having the hard conversations hour by hour, to deliver us good information unbesmirched by bad incentives. There are obviously bad incentives, and it should make everyone uncomfortable that we've got a pharmaceutical industry that has windfall profits at moments like this, and we can't figure out whether we can trust their take on the safety of medication. I get all of that. But the idea that Jack and Jill over there, without any relevant training, are going to do their own research and get to the bottom of it just by listening to Brett Weinstein's podcast is...

No, of course not. I'm talking about what's going to prevent people

from going down the Maajid Nawaz route. This is the question, I think, for people who look at mis- and disinformation research, for lack of a better term right now. There's always been that trade-off, where, as you're saying, if you do stifle a particular point of view and it moves to an alt platform, then what you see is that the audience that follows tends to be the more extreme audience, which then gets more entrenched and continues to stay in that space, and sees less and less of what we might call, I don't know if mainstream is the word, a broader spectrum of types of content, as opposed to when you really go down a rabbit hole. So there's that trade-off. The flip side of that is, does exposure to it move what we might call questioning or normie audiences in a particular direction? There's not a lot of good research on that, and this gets at that question of unaccountable private power: we actually can't see very much. We can't see at a systemic level what people do next after engaging with this particular piece of content, this audience, this influencer, or whatever. There's so little understanding of how influence actually works on social media, but that's a broader problem. This is where the attempt to strike a balance or come to some trade-off just leaves everybody generally dissatisfied with where we are. In prior media environments, when we had a higher degree of trust in government, what's really interesting is that you did see the government playing the role of trying to explain to the public what was happening. Now we're in this really terrible place where half the population doesn't trust the government at any given point in time. People don't trust the platforms.

The platforms didn't want to be the arbiter of truth, but nobody trusts the government or the media to be the arbiter of truth either. So you wind up in this environment where there's actually nobody that a sufficient number of people are going to trust. And so we get down to questions like, are there ways to use design to surface the entirety of a debate? On Twitter, you see some very small silo of something. Whereas if you go to Wikipedia, you can read a much longer, richer discussion of it. You can click in and you can see the edit history. You can see the fights in the edit history. There's something there that's just a richer way of presenting the information, and you can actually see over time who has turned out to be more right or incorrect. And I think that we're expecting these public squares to be the be-all and end-all of sense-making, and they're just not cut out for that. They are just not designed for it. And so there's a kind of unrealistic-expectation component to this as well.

Well, it goes both ways, for all parties. Again, I keep coming back to transparency as the solution, but it's problematic, because it's not just that... Sometimes I hear you guys and it sounds a little bit like, well, we can kind of go figure out that this person has been spreading more misinformation. But a lot of the time, it's like Jay Bhattacharya: when he wrote that Wall Street Journal op-ed, he wasn't trying to spread disinformation. Of course not. And in fact, you even kind of go, well, I'm not an epidemiologist. Well, you're not an epidemiologist, but the question of whether or not to return kids to school is not an epidemiological question, right? It's actually a question of whether... Well, it is in part.

It's in part, maybe it's informed by... I mean, if in fact they are super spreaders,

then it's an epidemiological question. Yeah, it's informed by it. I mean, it's a little bit like how climate science informs climate policy, but you're dealing with trade-offs, like all these things in the real world. And so when you get to a decision that's been made by Facebook, where it says, we've assessed all the trade-offs and we're going to promote vaccines, then what is the mechanism by which Facebook has to deal with the fact that, well, maybe kids under five shouldn't get it, or maybe adolescent boys shouldn't get it? It's in a difficult position. I think transparency, again, opens that up, where the transparency has an impact on the censors within, or, I mean, the content moderators, I know Renee thinks censors sounds pejorative, on the content moderators themselves, knowing that they have to justify it. That Facebook executive who was explaining to the White House that Facebook was doing everything it could to promote vaccines can then be held accountable publicly for it. And so it's not just good actors and bad actors, it's not just deliberate and intentional. It's also what we knew then versus what we know now, and having some sort of visibility into that inside these powerful platforms. For me, I feel passionate about the transparency thing, because I do think it potentially gets us closer to having, if not a single town square, more town squares. I think all of us would agree in the abstract that we would like to be confronted with disconfirmatory information that challenges our beliefs. I think people in the abstract believe that.

We also know that it's dopamine-depleting to be wrong, that it's a bummer to be disproved about something. My wife just did it to me last night. I didn't like it. I didn't like it at all. And you try, but it took a while for me to go, yeah, okay. And so, I like the design conversation. I think that helps. I think the transparency helps. But I think what we saw here is the end of an era. I think Elon marks the end of an era. I think what we saw with the Twitter files was people definitely trying to deal with a bad situation. I think we also saw actors behaving badly, abusing their power. I kept coming out of those San Francisco offices, looking at what the FBI was doing, looking at what Facebook was doing, and I just kept coming back to abuse of power: these powers were being abused. And that's the wisdom of the founding fathers, competing powers, because you don't trust anybody.

And that system doesn't exist

to this day, and we need it.

All right. Well, I know we're far past Barry's hard out. I'm going to use it as an excuse to give us all a hard out, because I want Barry to go and get famous. She's going on Real Time. You know, I'm not going to get the makeup that is required now. Not that she needs the makeup, but she needs to be there on time. Barry, Michael, Renee, thank you so much for your time. I think this was a very valuable conversation, and just the beginning of one; this problem is not going away. So

thank you for being here. Thanks so much. Thanks for setting it up, Sam. Thanks for having me.
