It is easier to control guns than thoughts
Australian Financial Review
The Christchurch massacre has given fresh impetus to a long-running push to dramatically increase social media companies’ obligations for the content they host. In Australia, the federal government has proposed a taskforce to examine how quickly and effectively social media companies deal with violent images.
To be clear, this push goes far beyond preventing terrorists from having a platform to spread their ideas. ‘Regulate the internet’ is touted as the solution to the radicalisation of disaffected youth, the dissemination of hate speech online, sexist and derogatory abuse of women on social media, and even the spread of ‘fake news’.
It’s worth noting that each of those things is also a problem offline, though there is a valid argument that the greater reach and anonymity of the internet have intensified them. However, the premise that these are fundamentally new challenges of the digital era is false; it allows social media platforms to be scapegoated for deeper problems in society.
But more importantly, making Facebook, Google and Twitter morally (or even legally) culpable on the basis that they failed to prevent the bad behaviour of a minority of users is wrong, both as a matter of principle and as a matter of practice.
First, it lets the actual perpetrators of these heinous acts — and those who cheer on their efforts — off the hook.
It’s not Facebook’s fault that the Christchurch killer chose to livestream his appalling massacre. It’s not Twitter’s fault that people tweet death threats to any woman who has the temerity to appear on a public affairs television show.
Social media simply allows people to interact with one another. It is basically raw, infinite humanity. And much like all places where people have congregated throughout history, some of those interactions are positive, some are negative and some are criminal.
The harmful acts are solely the responsibility of the perpetrators. The difficulty of identifying and prosecuting those who act anonymously online is no excuse for redirecting anger at those crimes towards the visible faces of the platforms.
There can be no moral equivalence between creating a space that can be misused to commit violence (or not policing this space quickly enough) and actually committing violence.
It makes no more sense than blaming Telstra or Optus for the ability of people to make threatening phone calls. Or blaming the local council if someone is yelling racial slurs in a local park.
We can expect social media platforms, like phone carriers, to work with police to bring criminals to justice. At a minimum, they owe it to their users to enforce their terms of service and ban people who are misusing their platform.
But if we want the digital public square to be a place of freedom and global connectivity — and there is no question that we should want this — we cannot expect platforms to pre-vet all content before publication.
We cannot expect them to prevent users from sharing objectionable content at all.
And what’s more, we should be wary of government intervention that forces more onerous content-vetting obligations on Facebook, Twitter and Google in the misguided belief that it will curb their power over news and information.
On the contrary, imposing a general requirement on social media platforms to curate the online space — so it is not objectionable to us — will actually give them even more power.
By expecting Facebook to stop the spread of fake news by fact-checking a user’s news feed, we give Facebook the power to subjectively determine truth. By asking YouTube to pre-vet content, we give it the power to determine what thoughts are and aren’t acceptable.
It’s easy to say ‘we should ban hate speech’, but far harder to define what that means. Most people agree that Fraser Anning’s comments are unacceptable, but those seeking to draw a line from white supremacist statements to legitimate policy debates about immigration show just how ambiguous supposed ‘hate speech’ can be.
Though it may be something of a truism, New Zealand will no doubt find it easier to control access to guns than to control ideas — which is what those proposing regulation of the internet are really trying to do. They are setting social media platforms a historically impossible task.
And we no more want Google, Twitter and Facebook to play Big Brother than we want the government to do so.
The answer is not to treat the online space as a new and dangerous innovation that must be brought under government control, but to recognise it as an extension, and a new iteration, of the spaces where people have always interacted.
This is not to minimise the very real problems on social media but to understand there is only so much we can do — and perhaps even less that we should do. We don’t need new laws; we just need to enforce the ones we have now.
Simon Cowan is Research Director at the Centre for Independent Studies.