Too little, too late

Much has been written in recent days about social media companies like Facebook and Twitter suspending Donald Trump's accounts. The coverage is mixed in tone, but some of it calls the suspensions 'too little, too late'. I'd like to add my voice to that opinion, and to go further: an independent inquiry is needed into how Trump ascended to become President of the United States and what role the social media companies played in this.

I have a hypothesis that he would never have got this far, or caused this much damage, without Facebook and especially Twitter. In this article, I examine that hypothesis and offer some evidence.

Arguably there are other factors at play here, the main one being the desperation of the Republican Party to get back into the White House, but I still believe the social networks were the key that brought it all together.


There have always been people like Trump and there always will be. One might call them 'the crazy person on the bus', but in days of old the average crazy person on the bus did not have access to targeted propaganda machines, large budgets, or a political party so desperate for power that it would support anybody, even somebody it did not fully understand.

In the past such people might have grumbled at home, or created fringe parties that went nowhere (think the BNP in the UK or, for much of its history, UKIP), but they would not have been able to gather the momentum that Trump has.

The Social Networks

I'd like to start by looking at the social networks and the circumstances that have shaped them since they launched in 2004 and 2006. I'm really talking here about Facebook and Twitter respectively.

Arguably, they were founded without negative intentions, but over the years they have evolved into unregulated and significantly dangerous propaganda machines. Even if you think that is an extreme statement, I'd hope we can all agree that their funding model (targeted advertising) places any role they might play in regulating content and protecting people's data in conflict with their commercial interests.

Let's look at the specific attributes which make them dangerous.

Funding model

They get their revenue from advertising, and a tremendous amount of it: the online ads market in Australia alone was worth A$6.4bn in 2018. There is nothing inherently wrong with this, but the reason the market is so big is the skill the social platforms have developed in using the data they hold on their users to:

  1. maximise the time users spend on their sites (see The Social Dilemma)
  2. very specifically target ads and content to relevant subsets of the population

So they are incentivised to collect more and more data about their users, and to use that data to keep those users on their sites for longer and longer. They are pretty good at this as well: one study found that the average daily time spent on social media rose 90% between 2012 and 2019, to more than 150 minutes per day.

They are not publishers

There has been a lot written about Section 230 of the Communications Decency Act recently, mostly because of failed attempts to attach changes to this law to various budget resolutions.

Among other things, this law shields companies like Twitter and Facebook from liability for content posted by their users. It is one of the rules that has allowed them to flourish, and you could argue that it was (and still is) needed for communication platforms to exist at all. But it has a nasty side-effect: it means that, unlike newspapers or other content providers, they face no consequences if their users post inaccurate, unpleasant or illegal material.


Reach

Many people use both Facebook and Twitter (more than 3 billion in total), but the sheer number of users is not what I mean by reach.

To increase engagement with their systems (and therefore maximise their advertising revenue), the social networks have built algorithms which attempt to keep people on their sites for longer by showing them material that will keep them engaged, reading and clicking.

Social networks like Facebook can construct these algorithms and models because they use the data people give them to build detailed profiles of those people: what they like and dislike, who they spend time with, what their activities and hobbies are, what they read and watch, and what social structures they are part of. Combining these profiles with information about what users click on versus ignore in their feeds gives the platforms the ability to build powerful recommendation models, which use what they know about the user base to show each user content that will keep them engaged.
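To make the feedback loop concrete, here is a deliberately tiny Python sketch. It is a hypothetical illustration, not any platform's actual code: items are scored against a user's inferred interest profile, the best match is shown, and every click reinforces the profile, narrowing what gets recommended next.

```python
# Hypothetical sketch of an engagement-driven recommender feedback loop.
# All names, topics and weights are illustrative only.

def score(profile, item):
    """Score an item by how strongly its topics match the user's profile."""
    return sum(profile.get(topic, 0.0) for topic in item["topics"])

def recommend(profile, items, k=1):
    """Return the k items that best match the current profile."""
    return sorted(items, key=lambda it: score(profile, it), reverse=True)[:k]

def record_click(profile, item, lr=0.5):
    """A click reinforces the clicked item's topics in the profile."""
    for topic in item["topics"]:
        profile[topic] = profile.get(topic, 0.0) + lr

items = [
    {"id": "news-a", "topics": ["news"]},
    {"id": "news-b", "topics": ["news"]},
    {"id": "hobby",  "topics": ["hobbies"]},
]

# A mild initial interest in news...
profile = {"news": 1.0}
for _ in range(3):
    shown = recommend(profile, items)[0]
    record_click(profile, shown)  # the user clicks what they are shown

# ...and after a few clicks the 'news' weight has grown, while the
# hobby item was never surfaced at all: a small-scale echo chamber.
print(shown["id"], profile)
```

The point of the sketch is the loop, not the scoring: because the model only learns from what it chose to show, each click makes the next recommendation more like the last one.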

These models are examples of what Cathy O'Neil calls 'Weapons of Math Destruction': models which have the following three attributes:

  1. Scale - they impact large numbers of people
  2. Opacity - there is no transparency in how the model works
  3. Damage - they cause real harm (see below)

Such models are typically self-reinforcing, something Facebook itself has acknowledged: its own research found that 64% of the time a user joins an extremist group, it is because the platform recommended it. The result is fringe material being amplified and bouncing around an echo chamber: the more people read it, the more of the same they are shown.

Further, such models are open to being gamed by people who want to manipulate what users of the platforms see. Take Russia Today: it has become a master of hosting clickbait, which causes systems like YouTube to treat its content as highly valuable, and all of a sudden you've got Russia's take on western issues being shown to people in automatically curated lists of videos.

Putting it all together

As with many systemic issues, there was no one factor that allowed Trump to reach the Presidency. It was a collection of factors:

  • Trump himself
  • The desperate Republican Party
  • A collection of sociopaths, with agendas and resources (money) to back him
  • Platforms which can target particular demographics and target messages to them (the social networks) and which do not have any accountability for the content which is created and shared by their users

Without any one of these factors we would not have ended up in this situation, but in my view the most influential has been the social networks. Without them, he would never have reached the audience he did.

So, what next?

Some people have argued that the social networks should simply not allow the kind of content that stokes right-wing extremism. These arguments often fall foul (in my view incorrectly) of free-speech concerns. So I think this needs to be looked at from a different angle: how do we regulate the way the data collected by these platforms is used to target the messages seen by the public? Once that is reformed and regulated, I think we'll be in a better place.

Ultimately, I believe there needs to be an independent inquiry into how we ended up here and the role the social networks played. Once that is done, we need to take its lessons and enact appropriate regulation to ensure this does not happen again.

We cannot trust the social platforms to do this themselves; we know from their funding models that they are conflicted. It must be done from the outside, and done soon.

-- Richard, Jan 2021