

    Those Problematic Comments on Facebook—Who Bears Responsibility for Them?

    • 09.12.2021
    • By Hugh Stephens

    Unless you have been living in a cave, you will be well aware of Facebook’s current travails, fed by whistle-blower Frances Haugen’s explosive testimony about how Facebook researched but ignored findings suggesting that the company’s algorithms were harming individual users by promoting content that kept them engaged—but at a cost to their mental wellbeing. In other cases, Facebook promoted user “engagement” over such basic considerations as factual accuracy, impact on community health (COVID misinformation), public safety (the January 6 attack on the US Capitol) and avoidance of sexual exploitation. Facebook’s use of self-reinforcing algorithms to maintain a closed loop of content and to create “echo chambers” for users, especially users addicted to fringe conspiracy theories and other non-mainstream views, is one problem, as I wrote about recently. Its spotty record in content moderation is another. Part of the blame for this lies with a pernicious and widely abused piece of legislation, Section 230 of the 1996 Communications Decency Act. While this legislation may have been well intentioned at the outset, over the years it has been interpreted by US courts as providing internet platforms with blanket immunity from liability for any user-generated content they distribute, including user comments on posts. Digital platforms like Facebook have acted accordingly, namely by doing as little content moderation as they can get away with.

    While Facebook is not liable for whatever content is posted in user comments, in an interesting wrinkle coming out of a defamation case in Australia, parties who posted content to Facebook are having to defend themselves against liability for comments made by others about that content. So far this has been limited to news publishers who used Facebook to reach a wider audience for their content. Traditionally, publishers have been expected to moderate the content they publish. Among other things, they are expected to avoid publishing libellous comments from users (such as in “Letters to the Editor” columns) by exercising editorial oversight. Likewise, news outlets moderate the content of their own websites, including comments posted by readers. But what about comments posted by readers to Facebook? They could be seen as similar to a “letters” feature in a newspaper, but carried (distributed) by Facebook rather than by the publishers themselves, either in their paper or on their own websites. Yet Facebook is off the hook.

    The defamation case in question involves a suit by an Australian citizen, Dylan Voller, against two Australian media outlets, The Australian and Sky News, for comments made on their Facebook pages by other Facebook users. The Australian court held that the media outlets were liable for the comments left by readers in response to content that the outlets had posted to Facebook. Whether the comments themselves were defamatory has not yet been decided, as the ruling focused on who was liable for them. The Rupert Murdoch-owned outlets had argued in their defence that they were neither able to moderate the comments on Facebook’s platform nor switch them off, because these functions were controlled by the platform. At the time, Facebook did not allow comments to be switched off in Australia. Why? The comment feature encourages user engagement, and this helps build Facebook’s bottom line. After the initial Australian court ruling in 2019, Facebook finally agreed to allow news outlets to delete comments but insisted that they could only be disabled one post at a time. Eventually, earlier this year, Facebook changed its policy and announced that all users would have the right to control who can comment on their posts.

    In Canada, the CBC has now shut down all Facebook comments on its news posts. Initially, it did so for just a month, citing the “vitriol and harassment” its journalists face on social media, but it has since made that decision permanent. The CBC notes that:

    We know certain stories will draw out obnoxious and hateful comments. The truth is we spend a considerable amount of attention and resources attempting to moderate our Facebook posts. It takes a mental toll on our staff, who must wade into the muck in an effort to keep the conversation healthy for others. It is not sustainable.

    In explaining its decision to make the disabling of comments permanent, the CBC continued:

    we were seeing an inordinate amount of hate, abuse, misogyny and threats in the comments under our stories. Our story subjects were attacked. Other commenters were attacked. Our journalists were attacked. Misinformation and disinformation were rife.

    As a result of this toxicity, we were posting fewer of our stories and videos to Facebook, knowing our story subjects would face certain abuse and particular communities would be targeted. When we did share stories to Facebook, we spent a huge amount of time and effort cleaning up the sludge.

    This is distressing, but it is the reality. There seems to be something about the internet that encourages toxicity in public discourse amongst a small but vocal minority, who seem to have nothing better to do than resort to hateful, racist, misogynistic comments and personal attacks, something that would never be tolerated in the offline world. There, people engaging in that kind of behaviour would either not have a platform to propagate their bile or would be shut down, pronto. Is it the anonymity of the internet, or the knowledge that small, marginal voices will be amplified and given greater credibility by the social media platforms they inhabit, that encourages this behaviour?

    One cannot blame Facebook for the dark side of human nature, but one can reasonably expect it to step up, own what it has created and address the problem. Just as news publishers in the past kept the crazies out of the limelight, should we not expect the world’s largest social media platform to exercise greater editorial oversight over what it distributes? Note that in the case of the CBC, and of the media outlets in Australia, the publisher of the material that drew the negative comments on Facebook had to take on the task of moderating them. In the US at least, Facebook can hide behind Section 230’s immunity from civil liability.

    In an article examining the “Unintended Economic Implications of Limited Liability Regimes for Online Platforms”, German commentator Stefan Herwig notes that Facebook does not have to factor in “economic externalities” because it has been able to offload onto others the costs of its algorithmic amplification and content moderation policies. Those others include “journalistic research corrective agencies” that deal with disinformation, or police investigative agencies that deal with hate speech or terrorist propaganda. However, the principle should be the same as with environmental contamination: the polluter should pay. Facebook does, of course, undertake some content moderation—of content that violates its terms of service and content that potentially violates criminal law, an area where no platform immunity exists. But it does so at minimum cost, automating where it can and outsourcing to the lowest-cost (often offshore) provider where it cannot. The result is that automated systems often get it wrong, either blocking content that should not be blocked, as in this recent case highlighted in the Canadian media, or failing to block content that should be blocked. One could argue that Facebook is damned if it does and damned if it doesn’t, but it is hard to feel sympathy for a trillion-dollar company that dominates social media and has made scale its prime asset. If scale brings with it additional costs in terms of hiring real people to perform the essential task of keeping content within socially and legally acceptable bounds, that is part of the price that has to be paid.

    While deeper user engagement is good for business, Facebook may find that it has embarked on an increasingly risky path. If reputable organizations grow reluctant to post content to the platform because of the proliferation of irresponsible, vindictive and defamatory comments, this will eventually hurt the company’s bottom line. One way to “encourage” Facebook (and other platforms) to take a more active role in moderation would be to modify the blanket civil immunity provided by Section 230, requiring platforms to accept more responsibility in cases where users or society are being subjected to real harm from the dissemination of damaging user-generated content.

    Supporters of Section 230, like Prof. Eric Goldman of Santa Clara University School of Law, claim that eliminating platform liability immunity would curtail “niche non-majoritarian voices”. Obviously not all non-majoritarian voices are down in the gutter, but some are, and giving them a platform to spread their poison serves no useful purpose. (See “Thank You Professor! Explaining Section 230 to Canadians”.) Short of fixing Section 230 in the US—and enacting measures in other countries to hold the platforms responsible for content they distribute and profit from—the only viable solution seems to be to switch off user-generated comments, because the bad outweighs the good, especially when the burden of legal responsibility is placed on the party posting the content rather than on the platform or on the party making the defamatory comments.

    There seems to be something wrong with this picture with respect to where the burden lies.

    This article was first published on HughStephensBlog.