Project Veritas Hidden Video: Twitter Employees Admit to Political Censorship

Former and current Twitter employees are caught on camera admitting they censor political views they disagree with — without the censored users even realizing it.


In the second undercover video in three days, Project Veritas has continued to expose Twitter as part of the “American Pravda.” While the first video showed a senior engineer at Twitter saying that the company is “more than happy to help the Department of Justice with their little investigation” into President Trump, this video helps explain why that may be.

The new hidden camera video — published Thursday — features “nine current and former Twitter employees” admitting to “steps the social media giant is taking to censor political content that they don’t like,” according to Project Veritas.

The video opens with a montage of damning quotes before showing those quotes in context. Far from being less damning when seen in context, they are more so. The video — showing current and former Twitter employees admitting to the practice — focuses on something called “shadow banning.” Imagine if Twitter wanted to silence you. If the company banned you outright — suspended or deleted your account — it would be obvious, and you might take to another platform to denounce the company for censorship. But what if it simply pressed the digital “mute button” on your tweets? Your account would still be “active” and you could still post, but no one except you would ever see those posts.


Shadow banning and outright banning seem to be the tools of choice for silencing conservative voices — especially those that support President Trump and his policies.

Olinda Hassan is a policy manager in Twitter’s Trust and Safety department, which she describes as “controversial.” Her team makes the rules and regulations for the platform’s millions of users. They are the gatekeepers. As Hassan explained, Twitter is “working on” a way to silence certain people and ideas on the platform. “Yeah, it’s something we’re working on — where we’re trying to get the shi**y people not to show up,” she told the undercover journalist, adding, “It’s a product thing we’re working on.”

To narrow down exactly who those “shi**y people” are that Twitter is trying to keep from “showing up” in your timeline, the Project Veritas video shows Mo Norai, a former content review agent at Twitter, saying, “Let’s say if it was a pro-Trump thing and I’m anti-Trump, I was like, ‘I banned his whole account.’ It goes to you, and then it’s at your discretion. And if you’re anti-Trump, you’re like, ‘Oh, you know what? Mo was right, f*** it, let it go.’”

Norai went on to say that “discretion,” which he described as “I guess how you felt about a particular matter,” plays a huge role in what content gets banned at Twitter. As an example, Norai said, “If they [a user] said, ‘This is pro-Trump, I don’t want it because it offends me,’” the next step would be, “I say, ‘I banned this whole thing’ and it goes over here and you’re like, ‘Oh, you know what? I don’t like it, too. You know what? Mo’s right. Let it go.’”

So, based on a Twitter employee not liking something because it’s pro-Trump, an entire account can be banned. How’s that inclusive environment working out for you, Twitter?

That is because, Norai said, during his time at Twitter, left-leaning posts that were tagged as possibly offensive were allowed to remain. “It would come through checked and then I would be like, ‘You know what? This is okay. Let it go.’”

Twitter has a “lot of unwritten rules,” Norai said. Those rules largely dealt with what content was acceptable and what content was not. Here is a clue: Conservative content was removed while liberal content was allowed to stay.

Pranay Singh is a Direct Messaging engineer at Twitter. He said that the suspension of WikiLeaks founder Julian Assange’s Twitter account may have been because of “the U.S. government pressuring” Twitter. He said, “They do that.” In fact, he said it happens “all the f***ing time.” In Assange’s case, he said the U.S. government doesn’t like “people messing with their politics, and [Assange] has sh*t on a lot of people.”

As for the tactic of shadow banning a user, Abhinav Vadrevu, a former software engineer at Twitter, said, “One strategy is to shadow ban so that you have ultimate control.” He added, “The idea of a shadow ban is that you ban someone, but they don’t know they’ve been banned because they keep posting, but no one sees their content.” On the psychological side of the equation, this creates a situation where the users just think their posts — their ideas — aren’t appealing to anyone. “So they just think no one is engaging with their content when in reality, no one is seeing it,” Vadrevu said.
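Vadrevu’s description boils down to a simple visibility rule: the banned author still sees his own posts, while every other viewer silently does not. The following sketch, written in Python purely for illustration, shows that rule in the abstract. Every name in it — the shadow_banned set, the visible_posts function, the post fields — is hypothetical and is not drawn from Twitter’s actual systems.

# Illustrative sketch only: a hypothetical timeline filter showing how a
# "shadow ban" could work in principle. None of these names or structures
# come from Twitter's codebase.

shadow_banned = {"user_123"}  # hypothetical set of silenced account IDs

def visible_posts(posts, viewer_id):
    """Return the posts a given viewer is allowed to see.

    A shadow-banned author still sees their own posts, so nothing looks
    wrong from their side, but every other viewer gets a timeline with
    those posts silently removed.
    """
    visible = []
    for post in posts:
        author = post["author_id"]
        if author in shadow_banned and author != viewer_id:
            continue  # silently drop the post for everyone but its author
        visible.append(post)
    return visible

posts = [
    {"author_id": "user_123", "text": "Nobody ever replies to me..."},
    {"author_id": "user_456", "text": "Hello, world."},
]

print(visible_posts(posts, viewer_id="user_123"))  # the author sees both posts
print(visible_posts(posts, viewer_id="user_789"))  # everyone else sees only one

The effect Vadrevu describes follows directly: the filtered user keeps posting into a void while the platform never has to tell him anything changed.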

Vadrevu admitted that the practice “is risky” because “people will figure that sh** out.” He also said that it would cause “bad press” and that “it’s like, unethical in some way, you know? So, I don’t know.” Clearly.

Another former Twitter engineer, Conrado Miranda, told an undercover journalist that shadow banning as a way of political censorship is “a thing.” It happens at Twitter. As he explained the process, “we have a bunch of filters removing some tweets” and “kicking out some of them.”

Singh helped explain the types of tweets that are likely not to make the cut. “Just go to a random (Trump) tweet and just look at the followers,” he said. Those followers will “all be like guns, God, ’Merica, like and with the American flag and like, the cross.” He said the way to get rid of those users — all of whom he assumes are bots, not real users, because, “Like who says that? Who talks like that?” — is to “just delete them.” But since “there are hundreds of thousands of them” and that volume can’t be handled by people, “you got to, like, write algorithms to do it for you.”
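What Singh is describing is essentially bulk keyword matching with no human review. The toy sketch below shows that kind of automation in miniature, assuming a hypothetical word list and account format; the names, the FLAGGED_WORDS set, and the sample data are invented for illustration and do not represent any real Twitter algorithm.

# Illustrative sketch of keyword-based bulk flagging with no human review.
# All names and data here are hypothetical.

FLAGGED_WORDS = {"guns", "god", "america", "merica", "cross", "flag"}

def flag_accounts(accounts):
    """Return the IDs of accounts whose bio contains any flagged word."""
    flagged = []
    for account in accounts:
        words = set(account["bio"].lower().replace(",", " ").split())
        if words & FLAGGED_WORDS:
            flagged.append(account["id"])
    return flagged

followers = [
    {"id": "a1", "bio": "God, guns, and the American flag"},
    {"id": "a2", "bio": "Photographer and coffee enthusiast"},
]

print(flag_accounts(followers))  # ['a1'] -- flagged purely by word match

A few lines of code like this can sweep up hundreds of thousands of accounts on nothing more than the words in a profile, which is exactly the shortcut Singh describes.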

So, Twitter’s response to posts and users it doesn’t agree with is to assume they are not real and turn a few lines of computer code loose to delete them without any human verification process. It appears that posts that are anti-gun, anti-God, anti-America, with rainbow flags and pentagrams would escape that vetting process, even if they were posted by bots. Smooth, Twitter, smooth.

Perhaps most shocking is the statement by Steven Pierre, a software engineer at Twitter. Speaking on hidden camera, he said that Twitter is developing a way to automate the whole process of what gets seen and what doesn’t. “Every single conversation is going to be rated by a machine” that will decide whether the conversation is “positive” or “negative.” If it’s negative, “They may have a point, but it will just, like, vanish,” he said. When asked whether this would “ban certain mindsets,” he said no. “It’s going to ban, like, a way of talking.” Twitter, where never is heard a discouraging word — or an honest word.
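Pierre is describing an automated pipeline: a machine scores every conversation, and anything rated “negative” simply never surfaces. The sketch below shows that pipeline in its crudest possible form, assuming a toy word-count scorer; the scoring logic, word list, and function names are stand-ins for illustration only, not any classifier Twitter actually uses.

# Illustrative sketch of an automated rate-and-suppress pipeline.
# The scoring function is a toy stand-in, not a real classifier.

NEGATIVE_WORDS = {"hate", "stupid", "wrong", "disgrace"}

def score_conversation(text):
    """Toy sentiment score: the more negative words, the lower the score."""
    words = text.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") in NEGATIVE_WORDS)
    return -hits

def rank_conversations(conversations):
    """Keep only conversations the machine rates as non-negative."""
    return [c for c in conversations if score_conversation(c) >= 0]

conversations = [
    "You make a fair point, but this policy is wrong and a disgrace.",
    "Great thread, thanks for sharing!",
]

print(rank_conversations(conversations))  # the critical reply simply vanishes

Note what happens to the first conversation: it “may have a point,” as Pierre put it, but because the machine scores it negative, it just disappears.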

If that isn’t Pravda, nothing is. Between filtering, banning, shadow banning, and manipulating what users see, Twitter is dangerously close to a thought-control platform. In its first video on Twitter, released Tuesday, Project Veritas showed Clay Haynes, a senior network security engineer at Twitter, saying, “I don’t like being part of the machine that is contributing to America’s downfall,” obviously referring to the fact that President Trump uses Twitter for its intended purpose: mass communication. If he meant that, he should quit his job, because Twitter is clearly part of that machine.

Photo: simonmayer/iStock Editorial/Getty Images Plus