
Weaponizing Facebook: How the Social Media Giant Manipulates Your Mind

 

Excerpted from the short ebook Weaponizing Facebook: How Companies Like Cambridge Analytica Are Manipulating Your Mind, and What You Can Do About It, available from Amazon for just a few dollars.


 

In 2014, a scientific paper was quietly published in the Proceedings of the National Academy of Sciences that has profound implications for human culture in the 21st century. Titled “Experimental evidence of massive-scale emotional contagion through social networks”, the study by Cornell University researchers was framed simply as an investigation of whether “emotional states can be transferred to others via emotional contagion” through exposure on social networks, “leading people to experience the same emotions without their awareness”.

The extraordinary aspect of this study was that it was conducted on Facebook – with the company’s express permission – on a vast scale, without the knowledge or consent of participants. 689,000 users had their Facebook news feeds purposefully manipulated: posts by their friends were filtered, with one part of the experiment reducing the amount of “positive emotional content” shown to some users, while another reduced the amount of “negative emotional content” shown to others. Those users’ own posts were then monitored, to see whether they posted in a manner similar to what they had been seeing in their feed – i.e. whether those presented with negative emotional content went on to post negative emotional content themselves, and vice versa. They did.
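
To make the mechanics a little more concrete, here is a minimal, purely illustrative sketch in Python of the kind of valence-based feed filtering the study describes. It is not the researchers’ actual code: the word lists, the sentiment_score function and the filter_feed helper are assumptions made purely for illustration, standing in for the trained classifiers and vastly richer data a platform like Facebook would actually use.

```python
# Purely illustrative sketch of valence-based feed filtering; not the study's actual code.
# The word lists and scoring below are stand-ins for the trained classifiers a real system would use.

POSITIVE_WORDS = {"happy", "great", "love", "wonderful"}
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible"}

def sentiment_score(text: str) -> int:
    """Crude valence score: +1 for each positive word, -1 for each negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)

def filter_feed(posts: list[str], suppress: str) -> list[str]:
    """Return the feed with posts of the given valence ('positive' or 'negative') withheld."""
    if suppress == "positive":
        return [p for p in posts if sentiment_score(p) <= 0]
    return [p for p in posts if sentiment_score(p) >= 0]

# Example: one experimental group sees a feed with positive posts withheld.
feed = ["I love this wonderful day", "Feeling sad and awful today", "Posting a neutral update"]
print(filter_feed(feed, suppress="positive"))  # only the negative and neutral posts remain
```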

The ethical problems with such a study seem obvious. Hundreds of thousands of users, without their knowledge or permission, were intentionally influenced to feel negative emotions. Did doing so lead anybody to self-harm? Did their own ‘emotional contagion’ get passed on to family and friends, having negative, possibly harmful effects on them as well? Should a corporation like Facebook be allowed to treat its users like lab rats, intentionally making them less happy through subconscious prodding?

When mainstream media began reporting on the study, Facebook’s PR department reacted with a distinct yawn, trotting out the usual platitudes and saying that it was an important aid “to improve our services and to make the content people see on Facebook as relevant and engaging as possible”. It seemed they really weren’t too concerned; it wasn’t until after months of sustained criticism that the mega-corporation offered a mea culpa, saying “there are things we should have done differently”.

And life went on. People kept signing up to Facebook, the company kept pumping out profits for shareholders, and nothing really changed.

But we all should have seen this moment for what it really was: a warning that we weren’t Facebook’s users. Facebook’s real users were marketers and advertising companies. Us? We were actually the product.

1984, in 2018

Until the advent of social media, advertisers were restricted to a ‘one-size-fits-all’ broadcast model: they had to produce ‘generalized’ advertisements that would appeal to the largest possible audience, and place them where they thought the most eyeballs would catch them (prime-time TV, major newspapers), hoping to reach just a percentage of that mass audience. But the arrival of smartphones and social media changed the game completely:

It took only a few years for billions of people to have an all-purpose content delivery system easily accessible sixteen hours or more a day. This turned media into a battle to hold users’ attention as long as possible. And it left Facebook and Google with a prohibitive advantage over traditional media: with their vast reservoirs of real-time data on two billion individuals, they could personalize the content seen by every user. That made it much easier to monopolize user attention on smartphones and made the platforms uniquely attractive to advertisers. Why pay a newspaper in the hopes of catching the attention of a certain portion of its audience, when you can pay Facebook to reach exactly those people and no one else?

…Whenever you log into Facebook, there are millions of posts the platform could show you. The key to its business model is the use of algorithms, driven by individual user data, to show you stuff you’re more likely to react to… Algorithms appear value neutral, but the platforms’ algorithms are actually designed with a specific value in mind: maximum share of attention, which optimizes profits. They do this by sucking up and analyzing your data, using it to predict what will cause you to react most strongly, and then giving you more of that.

What the Cornell study showed us is that we are now – in the modern age of ubiquitous information-gathering through social media – hugely vulnerable to manipulation by anybody who has access to our data and is able to influence what is seen in our social feeds. The biggest players in that game are obviously the giant social media companies themselves, especially Facebook, with over 2 billion users and complete power both to watch what we are doing and then to use that information to manipulate us.

But Facebook is not the only player – it is simply at the top of the food chain. A major part of Facebook’s core business model is selling access to their data analysis of users, and space on your newsfeed, to other companies willing to pay. Both Facebook and their advertising partners are right now working together to profit from you, using your data not only to inform their decisions, but also as a ‘weapon’ against you.

We know this from Facebook’s own internal documents. On May 1st, 2017, The Australian published a 23-page internal memo leaked from Facebook, written by two of their senior Australian executives, which outlined one of the social media giant’s techniques with advertising partners:

Facebook is using sophisticated algorithms to identify and exploit Australians as young as 14, by allowing advertisers to target them at their most vulnerable, including when they feel “worthless” and “insecure”, secret internal documents reveal.

So not only are advertisers being provided ‘simple’ information on your interests – for example, whether you might be a wine connoisseur, or a keen golfer – so that they can target their products to people with those particular interests…but Facebook are also selling your emotional state to advertisers, so that they know when you might be vulnerable to their marketing.

The leaked document has scary parallels to the Cornell University study: it says that Facebook can monitor the feeds of young people and identify when they are stressed, anxious or nervous, when they feel ‘useless’ or like a ‘failure’, and when they need “a confidence boost”. Which is where the advertising partner comes in, ‘helping’ the teenager out with a sales pitch for their product.

After this news broke, Facebook once again issued a boilerplate corporate-damage-control press release, saying they would work to “understand the process failure” – but refused to comment on whether this was standard practice for Facebook globally. And life went on…again.

What price did Facebook pay for this shocking practice?

Nothing. Nada. Zilch.

As futurist Mark Pesce wrote later in 2017:

Presumably everything is going on at Facebook as before this revelation, with no indication that any of its business practices have changed. Never transparent about how it applies the information supplied by its two billion monthly users, Facebook had been caught red-handed: exploiting the weak spots of teenagers in their moments of greatest vulnerability, watching, waiting then delivering a targeted message at the moment of maximum impact. In another context, this could be read as something akin to torture or brainwashing. For Facebook, it was just business as usual.

The revelation forces us to confront some unpleasant thoughts about how the world works in 2017, and where things appear to be headed. As problematic as Facebook has become, it represents only one component of a much broader shift into a new human connectivity that is both omnipresent (consider the smartphone) and hypermediated – passing through and massaged by layer upon layer of machinery carefully hidden from view.

Three aspects of this ‘brave new world’ of human connectivity mentioned by Pesce are particularly worth our attention:

  • Mass surveillance and profiling of the population by giant social media corporations, employing machine learning to measure and model our hopes, fears, emotional state, ideologies and so on.
  • The rise of ‘neuromarketing’ in partnership with social media corporations: advertising based on an individual’s personal data, aiming to target us with highly tailored messages that nudge us to act in ways serving the ends of the neuromarketers, often without our conscious awareness; and
  • The use of lies and outright propaganda – what has become commonly known as ‘fake news’ – that prey on our deepest fears in order to virally spread neuromarketing campaigns through social media systems.

Destroying Society

Perhaps we shouldn’t be surprised, really, that Facebook might use our data against us for their own profit. Corporations are not our friends; we are a commodity to them. Access to information gives power to those who have it – and it is fairly common for individuals who are suddenly handed massive power not to use it all that wisely.

In 2010 it was reported that, in Facebook’s early days as its user numbers began growing exponentially, the company’s founder Mark Zuckerberg had sent the following IMs to a friend:

Zuck: Yeah so if you ever need info about anyone at Harvard, just ask.

Zuck: I have over 4,000 emails, pictures, addresses, SNS

[Redacted Friend’s Name]: What? How’d you manage that one?

Zuck: People just submitted it. I don’t know why.

Zuck: They “trust me”… Dumb f**ks.

Eight years later, it seems more and more that – rather than being the remarks of a precocious, somewhat obnoxious 19-year-old – the attitude behind these IMs might form the core of Facebook’s business model: collect as much information as possible from people, and use it for profit and power.

It’s worth noting that the techniques used to support this core business model have recently been decried by former Facebook leaders themselves. In November 2017, Chamath Palihapitiya, former vice-president of user growth at Facebook, said he felt “tremendous guilt” at helping to engineer systems that “are ripping apart the social fabric of how society works.” Discussing methods used by Facebook to keep users engaged on the site for longer, and make them return often – because the more you are on Facebook, the more they learn about you, and the more your eyes are looking at advertisements – Palihapitiya said “the short-term, dopamine-driven feedback loops that we have created are destroying how society works: no civil discourse, no cooperation, misinformation, mistruth.”

Palihapitiya said he doesn’t allow his children to use “this sh*t”, and urged people to take a “hard break from some of these tools and the things that you rely on”:

Everybody has to soul-search a little bit more…because your behaviors, you don’t realize it, but you are being programmed. It was unintentional, but now you gotta decide how much you’re willing to give up, how much of your intellectual independence.

Bad actors can now manipulate large swaths of people to do anything you want. And we compound the problem. We curate our lives around this perceived sense of perfection, because we get rewarded in these short-term signals — hearts, likes, thumbs up — and we conflate that with value and we conflate it with truth. And instead, what it is is fake, brittle popularity that’s short-term and leaves you even more, admit it, vacant and empty before you did it…

…Think about that, compounded by 2 billion people.

These astounding comments came just a month after Facebook’s founding president, Sean Parker, confessed that one of the main goals during the early years at the company was to get you on Facebook, keep you on Facebook as long as possible, and make you come back as often as possible. The key objective in planning meetings, he said, was: “How do we consume as much of your time and conscious attention as possible?” It was this question that led to the creation of the ‘Like’ button, which Parker said was meant to give users “a little dopamine hit” to encourage them to post again in future. “It’s a social-validation feedback loop,” Parker noted. “Exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.”

But Parker has since had second thoughts about the machine he helped create – he now claims to be a “conscientious objector” to social media, due to the dangers he sees in how easily people can be manipulated through these platforms without even knowing it:

All of us are jacked into this system. All of our minds can be hijacked. Our choices are not as free as we think they are.

And while companies selling products to us by exploiting our social media data might be bad enough, a company the size of Facebook – with a user base consisting of more than one quarter of the entire Earth’s population – offers even bigger opportunities to unscrupulous people: the possibility of subverting democracy itself, and in effect becoming the ‘power behind the throne’, by influencing elections.

And that’s exactly what appears to have happened in the 2016 elections…

Read the rest of the book:

Weaponizing Facebook: How Companies Like Cambridge Analytica Are Manipulating Your Mind, and What You Can Do About It
