Democracy c/o Facebook. Part 1

December 7, 2018

Beginning as a modest online platform for college students to network, Facebook is today one of the biggest tech companies in history, and one of the most powerful. The social media giant is an information gold-mine for social engineering, political propaganda, market surveys and product placement, and has lately been accused of swaying elections and working to subvert democracies. Siddhartha Dasgupta writes about Facebook and what it means for our democracies and societies.


Recently, the New York Times published an investigative report on Facebook and its role in influencing world politics today. The report is titled “Delay, Deny and Deflect: How Facebook’s Leaders Fought Through Crisis”. It claims that top officers, including CEO Mark Zuckerberg and Chief Operating Officer Sheryl Sandberg, knew about the Russian misinformation campaign during Trump’s election, and in fact hired a Republican research firm to discredit critics of Facebook. They are also alleged to have lobbied Jewish think-tanks to launch malicious campaigns describing Facebook’s critics as “anti-Semitic”.


After the Times exposé, Zuckerberg said Facebook would cut ties with the public affairs management firm named in the report, but denied any prior knowledge of Russian meddling in the American elections. The investigation reportedly took more than six months and five reporters, along with a research team that dug deep into Facebook’s internal workings, speaking to current and former employees and other stakeholders.


In 2005, Facebook crossed 3 million users. The company’s vision then was to become a network for college students. In the words of Zuckerberg himself, “I wanted to make it interesting enough so that people would want to use the site, and want to put their information out”. They operated out of a small apartment in Palo Alto with around 20 employees. From the beginning, the declared aim was to “connect the people”, to build a world that is “more open and connected”.

Starting off merely as an online platform for people to share status updates and photos and form groups, Facebook today is one of the largest corporations in history, and stands accused of tampering with democracies across the world. Have they been doing this consciously, with full intent? That is a matter for proper investigation. But it can safely be said that even if Facebook did not actively set out to become the biggest ever experiment in human engineering, its makers were smugly unconcerned about the possible impacts of their work. “If you are building a product that people love, you can make a lot of mistakes,” Zuckerberg said in an interview many years back.

Facebook, in its early days, was indeed a paragon of the technology optimism of the 21st century. “Of course technology always made life better, and it always will” is the kind of belief that marks this optimism, masking the deep cultural and social impacts of new technology. Technologies like Facebook, it was believed, were there to make the world a better place, and its makers advertised it as such from the beginning. “It was not as if Mark would make us shout out our mission in a morning assembly. We would say it to each other when Mark wasn’t around. We truly believed in that mission,” said one of the employees.


Mark Zuckerberg in the first Facebook Headquarters in Palo Alto, California. 2005.


There were, however, some inside the company who were not so sure. “Some of us had an early understanding that we were creating in some ways a digital nation state. This was the greatest experiment in free speech in human history,” said one former employee. “The day we crossed 1 billion users for the first time, I said to myself, Mark is not going to stop unless he has reached everyone!”, said another. A handful of twenty-somethings from elite universities like Harvard and Stanford ended up being responsible for the future of the entire planet, but they were not concerned about the implications of that kind of a role. Today more than 2 billion people use Facebook every month, the result of truly exponential growth.


In trying to unpack this fairytale growth story, we come across certain crucial stages in the making of Facebook as it stands now. The first major stage was the so-called “Newsfeed”: the seemingly never-ending stream of stories, pictures and updates shared by friends and advertisers – a kind of news curated at the most personal level. It is our own individual news channel. But the news in a user’s newsfeed is not random. Facebook has its own self-learning algorithms – computer versions of mathematical formulas – for generating this feed, which constantly track our clicks and refashion the feed to serve more and more content that fits our choices and opinions. Subsequent clicks on such content tell the algorithm to produce still more of the same, creating a self-driving, never-ending loop of specialized propaganda. “It’s your personalized newspaper. It’s the New York Times of You. It’s Channel You. It is your customized, optimized vision of the world,” said one former employee in an interview. This principle of showing users the world the way they want to see it is at the core of the idea of Facebook, and the reason for its success. “That’s the key sauce. That’s why we are worth billions of dollars,” added the employee.
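The self-reinforcing loop described above can be pictured in a few lines of code. This is only a toy sketch, not Facebook’s actual ranking system; the topic names, item structure and simple click-counting score are invented for illustration.

```python
from collections import defaultdict

def rank_feed(items, click_history):
    """Score each candidate story by how often the user has clicked
    on its topic before, so past clicks shape what surfaces next:
    the self-driving loop described in the text."""
    topic_weight = defaultdict(int)
    for topic in click_history:
        topic_weight[topic] += 1
    # Stories on the most-clicked topics float to the top of the feed.
    return sorted(items, key=lambda item: topic_weight[item["topic"]],
                  reverse=True)

# A user who has clicked mostly on one topic...
history = ["politics", "politics", "sports"]
candidates = [
    {"id": 1, "topic": "cooking"},
    {"id": 2, "topic": "politics"},
    {"id": 3, "topic": "sports"},
]
feed = rank_feed(candidates, history)
# ...sees that topic first; each further click on it reinforces
# the weight, narrowing the feed ever more toward the same content.
print([item["topic"] for item in feed])  # ['politics', 'sports', 'cooking']
```

The key point is that the feedback runs in one direction only: every click makes similar content more likely to appear, and nothing in the loop itself pushes back toward diversity.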


The next key step was the introduction of the “Like” button. The Like button works on the principle of registering the user’s personal data with each click, creating a trove of individual information on every Facebook user across the world. “The Like button acted as a social lubricant. And of course, it was also driving this fly-wheel of engagement. People felt like they were heard on the platform, whenever they [liked or] shared something, and it became a driving force for the product,” said Soleio Cuervo, Facebook’s ex-product designer. The information collected through the Like button indicated the causes, the products, the emotions, the political interests, and also the people that users cared about. This kind of information is a gold-mine for social engineering, political propaganda, market surveys and product placement. And not surprisingly, these are exactly the allegations raised against Facebook today, through the ongoing inquiries into its role in influencing and controlling elections in the US.
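What each click contributes to that trove can be sketched crudely: every Like adds one more data point to a per-user tally of interests, and in aggregate the tally becomes the profile that advertisers and campaigners pay for. The post tags and categories below are hypothetical, chosen only to illustrate the aggregation.

```python
from collections import Counter

def build_profile(liked_posts):
    """Aggregate the tags of everything a user has liked into an
    interest profile; each click adds one more data point."""
    profile = Counter()
    for post in liked_posts:
        profile.update(post["tags"])
    return profile

likes = [
    {"id": 101, "tags": ["running", "outdoors"]},
    {"id": 102, "tags": ["politics", "outdoors"]},
    {"id": 103, "tags": ["outdoors"]},
]
profile = build_profile(likes)
# The dominant tags reveal what the user cares about most.
print(profile.most_common(1))  # [('outdoors', 3)]
```

Individually, each Like reveals little; accumulated over years and billions of users, these tallies become exactly the kind of fine-grained interest map the article describes.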



As the newsfeed grew exponentially in popularity, the actual media corporations had no option but to take it seriously. Soon, every small and big media outlet began relying heavily on the Facebook newsfeed to put out their stories. Facebook became the largest news distributor in the world, although Mark Zuckerberg has maintained throughout that Facebook “is a technology company, not a media company”. Facebook took, in its own words, a “libertarian approach” to the question of editorial responsibility, announcing that any content that does not directly call for violence is permissible, and that Facebook does not share responsibility for the validity of the information users share on its platform. “We said if you are going to incite violence, that’s clearly out of bounds… but we were going to allow people right up to the edge,” said an ex-Public Policy Director for Facebook. Asked whether any Facebook employee or executive felt a moral obligation, or a concern that this could lead to lies being spread on an organised mass scale, obfuscating the idea of “truth” itself, Facebook’s defense has been: “We relied on public common sense and decency”.


Occupied Tahrir Square. Cairo, Egypt. 2011.


This kind of technological optimism seemed to pay off for the first time in 2011. In Cairo, Egypt, a city square was occupied by millions of Egyptians demanding the removal of the dictator President Mubarak. The Arab Spring had come to Egypt. And the movement started with a Facebook page, run by Wael Ghoneim, a Google employee from Egypt, who began documenting abuses by the Hosni Mubarak regime. “In just 3 days, over a hundred thousand people joined the page. Throughout the next few months, the page started growing until it became something like what happened in Tunisia,” said Ghoneim. As the protests swelled, Ghoneim put out a Facebook status calling for a revolution, and asked people to gather at Tahrir Square in the heart of Cairo. The rest is history. 18 days later, Hosni Mubarak stepped down. Many called the Tahrir Square victory the “Facebook Revolution”. Wael Ghoneim became a well-known proponent of using Facebook for social revolution. The technological optimism seemed to be coming true. In a CNN interview, Ghoneim was asked, “Tunisia, now Egypt. What do you think is next?”. “Ask Facebook,” he replied, going on to add, “I want to meet Mark Zuckerberg one day. I want to thank him.” “We are re-wiring society from the ground up,” Zuckerberg would claim proudly on public platforms during those days. Activists and civil society leaders across the world took up the task of transforming society through Facebook.


But this was short-lived, and the unravelling began at the same place: Cairo. The so-called Egyptian Arab Spring – a euphemism for a Western-style democratic revolution replacing Middle-Eastern authoritarianism – soon gave way to sharp social polarization between the Muslim Brotherhood, which wanted to institute a hardline Islamic regime, and the pro-democracy forces. What once looked like a revolution degenerated into pitched, bloody street fights between various sections of society, soon to be taken over by warlords, imperial powers, and terrorist forces like ISIS, which subsequently spread across a Middle East that has been in perpetual genocidal war ever since. And it was Facebook’s newsfeed that again played a key social role in all of this. Fundamentalist, hate-spewing messages struck a chord with users’ tribal emotions, creating what Facebook considers “engaging content” as people hit the “Like” or other emotion buttons on such posts, or shared them. Numerous fake accounts came up in no time, fueled by business money, creating groups and pages that amplified such hate messages. The Facebook algorithm, seeing the engagement levels, would push such messages further onto the newsfeeds of users it thought were looking for such content, creating an overnight explosion in hateful propaganda. Wael Ghoneim himself was targeted. Fake statements were issued in his name to create public outrage, and he received regular death threats from groups with hundreds of thousands of followers. “I was extremely naive in a way that I don’t like now, when I thought of Facebook as a tool for liberation,” he said more recently. Ghoneim claims he met with senior Facebook executives in Silicon Valley, explained the situation to them, and tried to convince them of the massive, serious unintended consequences, for the entire planet, of the tool they had created. But he alleges that he was largely ignored.
“These tools are just enablers for whoever. They don’t separate between good and bad; all they look at is engagement metrics,” he said. Reportedly, a large number of senior government officials from Middle-Eastern countries regularly sent early warnings to Facebook about the situation that was emerging: Facebook had suddenly become the public sphere in that part of the world. But no action was taken; not even any concrete responses were received. According to several critics, Facebook was always severely understaffed. Even after repeated alarms, Facebook refused to recruit the number of people required to maintain the kind of control that was needed, in order to cut costs and keep profits soaring. In a way, Facebook was never designed to police the content coming from all the corners of the world it had reached; no one in the company had any idea of the social and cultural complexities of these places, nor did they really care.


Wael Ghoneim


The other issue that began haunting Facebook around the same period was privacy. “Privacy is our primary concern… We are not going to share people’s information, except with anyone that they want it to be shared,” its CEO had repeated on multiple occasions. But behind the scenes, as alleged by several former employees of the company, Chief Operating Officer Sheryl Sandberg, one of the chief architects of Facebook’s business model, began exploring new ways to record people’s personal information, specifically out of commercial interest, to pump up a revenue curve that was beginning to settle into steady, slower growth. Sandberg had earlier worked in the Treasury Department of the Clinton administration, for Larry Summers, and at Google.


Around four months before listing itself as a public company in 2012, Facebook for the first time announced its relationship with data broker companies, as revealed by one ex-employee. These are companies that buy data about users: their shopping preferences, material needs, residential locations, traffic patterns, and so on. It is alleged that Facebook even collects and keeps the last known location of its users, inferred from their activity on Facebook. This data was in demand with advertising companies for their ad campaigns. Facebook’s revenues shot up. The company’s Director of Public Policy even resigned in protest.


Sheryl Sandberg


In 2011, Max Schrems, a law student in Austria, filed the first known privacy lawsuit against Facebook. He filed 22 complaints with the data protection agency of Ireland, where Facebook had its international headquarters. “They had a small office with around 20 people on the top of a supermarket, in a small town of 5,000 people. And this was the agency supposed to regulate not just Facebook, but also Google and LinkedIn,” said Schrems in an interview. The agency took no action on his complaints, and Facebook dismissed his claims. However, US investigators soon discovered that Facebook was not only collecting but in fact also selling users’ personal data to so-called third-party users – companies that made games and apps for the platform – without anyone’s knowledge. Incidentally, Cambridge Analytica, the data analytics firm recently embroiled in controversy over swaying the Indian election in favour of specific political parties, is one such third-party user that made a fortune out of Facebook data. Facebook itself was not keeping track of what these third-party users were doing with the data. There was essentially no control: anyone who could develop an “app” could potentially have access to such data, including agencies that aimed at controlling or influencing elections. Facebook, meanwhile, repeatedly violated terms set by the US Federal Trade Commission and continued with its business model. Sandy Parakilas, who had joined the company as Platform Operations Manager, found himself in charge of dealing with the enormous number of privacy issues on the platform barely nine months into the job. There was no team in place, and no senior officer in charge of an issue that was the company’s “prime concern”.
Parakilas claims to have provided senior executives in the company, including “some who are in the top 5”, with a list of potential malicious uses of the enormous trove of user data Facebook controlled. But all he got was a muted response. “I felt they were not concerned about the vulnerabilities the company was creating. Their main concern was more revenue growth,” he said. Facebook dismissed his claims as well. A year later, Parakilas quit Facebook out of frustration. On May 18, 2012, Facebook listed itself on the NASDAQ stock exchange in New York, raising 18 billion dollars in the largest technology IPO in US history. Mark Zuckerberg’s estimated worth touched 15 billion dollars. Facebook would go on to acquire WhatsApp and Instagram and become one of the most valuable companies in today’s world, while also emerging as one of the chief weapons of mass manipulation of people at an enormous scale – influencing their market choices, political choices, and their worldview.


Data Protection Commission, Ireland. The agency entrusted with regulating companies including Facebook, Google, LinkedIn. Source: Facebook


Soon, however, the company that had enjoyed unconditional support from the US government and investigative agencies throughout its history brought the problem home. It was alleged that Russian hackers, patronised by the Russian government, manipulated public opinion during the 2016 American election that brought Donald Trump to power. One of the agencies blamed was the Internet Research Agency (IRA), a troll factory in St. Petersburg with a hundred-odd operatives who generate fake propaganda based on such large-scale personal data. The IRA had also allegedly played an instrumental role earlier, using social media against the anti-Russian government in Ukraine. “You come to work, and there is a pile of many, many SIM cards, and a mobile phone. You need an account to register for various social media sites. You take the photo of a random person, choose a random last name, and start posting links to news on different groups,” said one IRA operative. “Our main goal was to paint the Ukrainian Government in a bad light through propaganda,” he added. The IRA’s propaganda worked in Ukraine. Extreme Russian nationalism was sparked, and fake news about “cruel Ukrainians killing Russian speakers” was distributed en masse. Hired actors were used to create fake news videos. By this time Facebook had added the option of promoting one’s stories for money, becoming quite literally a propaganda weapon for war. Mass riots could be sparked almost at will. Users could be constantly experimented upon without their knowledge: which kinds of rumours do they believe, which kinds do they not, and so on. With such a stage set, provoking someone into violent action is just a question of one small final push.
In the case of Ukraine, the threat was so grave that even the top adviser to the Ukrainian President lodged a complaint with Facebook on these issues, but got no response other than that Facebook is an “open platform” and that “anyone could post anything”, even from fake accounts. Facebook again denied any such conversation with the Ukrainian official. By now, however, organised disinformation campaigns in countries across the world had become a regular feature on Facebook. During the 2016 elections, while CEO Zuckerberg vaguely critiqued right-wing forces in public fora, his own company was allegedly working hand in glove with Russian operatives to ensure the victory of Donald Trump.


Many argue that it is impossible to govern or regulate Facebook. Users have no control over what the company can do, or is doing, with their personal data, and most of that data they have handed over voluntarily. “A lot of times people are just too careful. I think it’s more useful to make things happen, and then apologize later, than to make sure that you dot all your “i”s now, and don’t get stuff done,” Zuckerberg said at one of his early public meetings with a group of university students, much before Facebook became the giant that it is today.


To be continued in Part 2…


[This report is based heavily on the recent documentary called The Facebook Dilemma created by PBS/Frontline, recent investigations carried out by the New York Times, and interviews carried out by Democracy Now’s Amy Goodman, and by Vox Strikethrough.]
