Facebook has a long and checkered past concerning the way the company decides what a user sees in his or her timeline. Now, the social media giant is changing the formula again, and this time the change will affect whether users see articles shared by their friends. The method by which Facebook will do it involves another controversial issue that has dogged the company: data-mining.
Facebook has built a multi-billion dollar empire by both providing a service its users want and mining the data of those users for the purpose of advertising sales. The method by which Zuckerberg’s company provides the content that keeps its users coming back is a proprietary algorithm which Slate’s senior technology writer, Will Oremus, described in an article in January 2016:
Every time you open Facebook, one of the world’s most influential, controversial, and misunderstood algorithms springs into action. It scans and collects everything posted in the past week by each of your friends, everyone you follow, each group you belong to, and every Facebook page you’ve liked. For the average Facebook user, that’s more than 1,500 posts. If you have several hundred friends, it could be as many as 10,000. Then, according to a closely guarded and constantly shifting formula, Facebook’s news feed algorithm ranks them all, in what it believes to be the precise order of how likely you are to find each post worthwhile.
Oremus wrote that the algorithm has had such tremendous impact on the world — and the way it consumes information — that it has:
propell[ed] startups like BuzzFeed and Vox to national prominence while 100-year-old newspapers wither and die. It fueled the stratospheric rise of billion-dollar companies like Zynga and LivingSocial — only to suck the helium from them a year or two later with a few adjustments to its code, leaving behind empty-pocketed investors and laid-off workers. Facebook’s news feed algorithm can be tweaked to make us happy or sad; it can expose us to new and challenging ideas or insulate us in ideological bubbles.
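While the real formula is proprietary and constantly shifting, the general shape Oremus describes, collecting every candidate post, scoring each one, and sorting the results, can be illustrated with a minimal sketch. The TypeScript below is purely hypothetical: the Post fields, the relevanceScore weights, and the buildFeed function are invented stand-ins for illustration, not Facebook’s actual signals or code.

```typescript
// Hypothetical sketch only; not Facebook's code. It illustrates the general
// "collect everything, then rank it" structure described above.

interface Post {
  id: string;
  authorIsCloseFriend: boolean; // assumed affinity signal
  likeCount: number;            // assumed engagement signal
  commentCount: number;         // assumed engagement signal
  hoursOld: number;             // assumed recency signal
}

// Assumed scoring function: combines a few plausible signals into one number.
// The real formula is closely guarded and constantly changing.
function relevanceScore(post: Post): number {
  const affinity = post.authorIsCloseFriend ? 2.0 : 1.0;
  const engagement = Math.log1p(post.likeCount + 2 * post.commentCount);
  const recency = 1 / (1 + post.hoursOld / 24); // newer posts score higher
  return affinity * engagement * recency;
}

// Rank every candidate post gathered from friends, follows, groups, and
// liked pages, and return them best-first: the order a feed would show them.
function buildFeed(candidates: Post[]): Post[] {
  return [...candidates].sort((a, b) => relevanceScore(b) - relevanceScore(a));
}
```

Everything contentious about the news feed lives in that scoring step: change the weights, and a different set of posts rises to the top of users’ timelines.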
Facebook has changed the formula many times in its history, correspondingly changing what users get fed in their timelines. Now, the company is doing it again. SlashGear reported last week that Facebook is “making another change to your News Feed, and this one involves which articles you will see.” The ostensible reason for this change is that users often “like” and “share” a posted article without reading it, and conversely spend time reading articles they do not “like” or “share.” Facebook plans to take that into account and push the articles that actually get read, regardless of whether they get “liked” or “shared.”
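If that is roughly how the ranking works, folding in a reading-time signal is simply a matter of reweighting the score. The sketch below is again hypothetical, with invented field names and weights; it only shows how an article that people actually read could outrank one that merely collects “likes” and “shares.”

```typescript
// Hypothetical sketch only; not Facebook's code. Shows a score in which time
// spent reading an article outweighs likes and shares.

interface ArticleStats {
  averageReadSeconds: number; // assumed: how long readers stay on the article
  likes: number;
  shares: number;
}

// Assumed weights: reading time dominates, reactions barely move the score,
// matching the reported intent to surface read-but-unliked articles.
function articleScore(stats: ArticleStats): number {
  const readSignal = Math.min(stats.averageReadSeconds, 300) / 300;  // cap at 5 minutes
  const reactSignal = Math.log1p(stats.likes + stats.shares) / 100;  // deliberately weak
  return 0.9 * readSignal + 0.1 * reactSignal;
}

// A long read with no reactions now outranks a much-shared article nobody reads.
console.log(articleScore({ averageReadSeconds: 240, likes: 0, shares: 0 }));     // ~0.72
console.log(articleScore({ averageReadSeconds: 10, likes: 5000, shares: 800 })); // ~0.04
```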
Facebook has often been caught using its power to promote some posts, articles, and comments, while leaving others to languish on deserted digital islands. Any online search of “Facebook censors” will return more content than can be sorted and read. Sometimes the social media empire goes even further, removing or blocking posts, articles, and comments altogether.
In 2012, TechCrunch reported that “Robert Scoble, the well-known tech startup enthusiast, went to post a comment on a Facebook post written by Carnegie Mellon student (and TechCrunch commenter extraordinaire) Max Woolf about the nature of today’s tech blogging scene.” Instead of seeing his comment appear as usual, Scoble received the following message:
This comment seems irrelevant or inappropriate and can’t be posted. To avoid having comments blocked, please make sure they contribute to the post in a positive way.
As TechCrunch pointed out, “Of course, what makes a comment ‘positive’ or ‘negative’ is a very subjective thing.” What is not subjective is that Facebook censored Scoble’s comment. And this is not an anomaly.
Last summer — at the height of the scandal surrounding Planned Parenthood’s sale of baby parts — The Federalist reported that Russell Moore, a top official with the Southern Baptist Convention, was blocked when he attempted to post the video to Facebook. It appears that Zuckerberg and company wanted to do their part to keep the video exposing Planned Parenthood’s illegal sale of human body parts from going viral. On the upside, even with Facebook’s considerable digital prowess, the effort failed, and the video has been seen by millions.
This writer had his own experience with this phenomenon. Late last year, I wrote an article on Microsoft’s decision to convert the Windows operating system into a suite of malware tools. When I attempted to share the article, entitled “Windows 10 Is Malware; Deletes Users’ Programs,” on Facebook, I received the following notice:
Sorry, this action isn’t available right now
As a security precaution you can’t take this action because your computer may be infected with a virus or a malicious browser extension.
You can learn more about malicious software here.
If you think you’re seeing this by mistake, please let us know.
As for my computer or browser being infected, no chance. I run a very secure Linux distribution and the Firefox browser with extra security measures in place. Just to be sure, though, I ran ClamAV and came up clean.
Whether this was censorship or just a really bad attempt at security by Facebook is not easy to say. In case it was the latter, here’s a tip for the folks at Facebook: Malware writers don’t usually include the word “malware” in either the title or the URL when sending out links.
Oddly, though I was blocked from posting any links for more than two days, I was still able to include that and other links in comments.
Facebook’s censorship activities have even spawned a sort of cottage industry: Facebook groups exist for the sole purpose of discussing the issue, and major media outlets have taken notice. In February 2016, CNN Money reported on OnlineCensorship.org, a project formed to address the issue. According to the article, when one Facebook user attempted to post a link to an interview with the survivor of a drone attack, he received the following notification: “The content you’re trying to share includes a link that our security systems detected to be unsafe.”
OnlineCensorship.org allows people to post screenshots of these notifications. Between November 2015 — when the site launched — and February 2016, the site received more than 200 submissions, according to CNN Money.
And while Facebook continues to censor users’ posts, comments, and timelines, it doesn’t want those users to censor themselves. In 2013, Facebook conducted a study to determine why users practice “self-censorship.” This happens when a user types a post or comment, but then doesn’t publish it. Slate reported,
Unfortunately, the code in your browser that powers Facebook still knows what you typed — even if you decide not to publish it. It turns out that the things you explicitly choose not to share aren’t entirely private.
Facebook calls these unposted thoughts “self-censorship,” and insights into how it collects these nonposts can be found in a recent paper written by two Facebookers. Sauvik Das, a Ph.D. student at Carnegie Mellon and summer software engineer intern at Facebook, and Adam Kramer, a Facebook data scientist, have put online an article presenting their study of the self-censorship behavior collected from 5 million English-speaking Facebook users. (The paper was also published at the International Conference on Weblogs and Social Media.) It reveals a lot about how Facebook monitors our unshared thoughts and what it thinks about them.
The study examined aborted status updates, posts on other people’s timelines, and comments on others’ posts. To collect the text you type, Facebook sends code to your browser. That code automatically analyzes what you type into any text box and reports metadata back to Facebook.
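The mechanics of that kind of collection are not exotic. The hypothetical TypeScript below, in which the endpoint URL, the selectors, and the five-character threshold are all invented for illustration, shows how a page script could notice that a draft was typed but never posted and report only that fact, not the text itself, back to a server.

```typescript
// Hypothetical sketch only; not Facebook's code. A page script watches text
// boxes, notes that something was typed but never posted, and reports only
// metadata (no keystrokes or draft text) when the user leaves the page.

const METRICS_ENDPOINT = "https://example.com/self-censorship-metrics"; // invented URL

let typedSomething = false;
let posted = false;

// Watch every comment/status box on the page for typing.
document.querySelectorAll<HTMLTextAreaElement>("textarea").forEach((box) => {
  box.addEventListener("input", () => {
    if (box.value.length > 5) {
      typedSomething = true; // the user composed something substantial
    }
  });
});

// Mark the draft as published if a post button is actually clicked.
document.querySelectorAll<HTMLButtonElement>("button.post").forEach((btn) => {
  btn.addEventListener("click", () => {
    posted = true;
  });
});

// When the user navigates away, report whether a draft was abandoned.
window.addEventListener("pagehide", () => {
  if (typedSomething && !posted) {
    navigator.sendBeacon(
      METRICS_ENDPOINT,
      JSON.stringify({ event: "self_censored_draft", timestamp: Date.now() })
    );
  }
});
```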
Because Facebook runs scripts in your browser, the company can easily track what you write on the page, whether you post it or not. Its tracking also reaches beyond its own pages: the ubiquitous “Like” button embedded on most websites lets Facebook follow Internet users around the Web, even those who do not have a Facebook account. As this writer said in a previous article:
After coming under fire for using persistent cookies (small files loaded on users’ computers to maintain certain settings and allow tracking), Facebook introduced the “Tracking Pixel.” It is a 1×1 GIF file, invisible to the naked eye in most cases, which allows the company to track users even after they leave the site. Since users who do not even have a Facebook account and have not agreed to Facebook’s privacy policy are tracked as well, such tracking cannot be consensual. Because the pixel is invisible, very few users (whether they have an account or not) could even be aware of it.
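For readers wondering how a single invisible image can report anything, the sketch below shows the basic mechanism: loading the image is itself an HTTP request to the tracker’s server, and the address of the page being read rides along as query parameters. The domain and parameter names here are invented for illustration; this is not Facebook’s actual pixel code.

```typescript
// Hypothetical sketch only; not Facebook's code. Demonstrates the basic
// mechanics of a 1x1 "tracking pixel" embedded on a third-party page.

const PIXEL_HOST = "https://tracker.example.com/pixel.gif"; // invented URL

function firePixel(): void {
  const img = new Image(1, 1); // a 1x1 image, effectively invisible
  const params = new URLSearchParams({
    page: window.location.href, // which page the visitor is reading
    ref: document.referrer,     // where the visitor came from
    ts: Date.now().toString(),  // when the visit happened
  });
  img.src = `${PIXEL_HOST}?${params.toString()}`; // the request itself is the report
  img.style.position = "absolute";
  img.style.left = "-9999px"; // keep it out of sight
  document.body.appendChild(img);
}

// A publisher embedding the tracker's snippet would call this on page load.
window.addEventListener("load", firePixel);
```

Because the request goes to the tracker’s own domain, any cookies that domain has previously set travel with it, which is how visits to unrelated sites can be tied back to a single profile, account or no account.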
Now that Facebook has announced the new changes to how it will feed articles to users, there are concerns about both censorship (Facebook will have even more control over content) and data-mining (Facebook will use its surveillance capabilities to see which articles users read and how long they stay on them, and will rate articles accordingly, regardless of whether a user “likes” or “shares” them).
Facebook is no stranger to social engineering. Last summer, when the Supreme Court overstepped its constitutional boundaries to overrule state laws and even state constitutions by legalizing same-sex marriage, the company used the occasion to conduct a psychological experiment on users via the “Celebrate Pride” rainbow filter for profile pictures. It was a huge success.
So, “like” Facebook or hate it, the fact remains that by deciding which articles millions of people see, the company will continue to increase its ability to influence the way society thinks about any number of topics. That is a very frightening thought.