

    Big Tech’s Culture of Carelessness

    • 24.02.2021
    • By CreativeFuture

    Coinciding with the Big Tech CEOs getting chewed out by Democrats and Republicans alike late last year in one of several Section 230 hearings, Yaël Eisenstat – ex-CIA officer, past White House Advisor, and Facebook’s former head of global elections integrity operations – rendered a diagnosis of what has gone wrong.

    “[A]t the heart of the problems with big tech,” she wrote in an article for The Information, “is an issue unlikely to be addressed: company culture.”

    Eisenstat’s prognosis was not good. “The very culture that propelled these companies to unprecedented success is at odds with the way they would need to operate if they truly want to fix some of the negative societal impacts fueling the ‘techlash,’” she continued.

    Or, to put it bluntly: prioritizing growth, speed, competitive advantage, and profits above all else is in their DNA, no matter the societal costs.

    In recent weeks, we have learned all too well what those societal costs can be, as years of divisiveness and toxic rhetoric online spilled over into real life, and real mayhem in America.

    Political views aside, speech that incites violence, whether online or IRL, is not acceptable. Incitement is not protected under the First Amendment, and the internet platforms have simply not done enough to curb the user behavior that fuels it.

    Of course, the platforms would beg to differ.

    Companies like Facebook and Google knew from their inception that, with modest exceptions, they would not be held accountable for the user-generated content they distributed. Although not Congress’ intention, two “safe harbors” – specifically those found in Section 512 of the Digital Millennium Copyright Act (DMCA) and Section 230 of the Communications Decency Act (CDA) – have been interpreted by courts in a manner that effectively immunizes platforms from liability for enabling illegal activity by their users.

    There’s nothing inherently wrong with legal safe harbors. Properly constructed, they can promote growth in socially beneficial sectors by providing businesses certainty that they will not be subject to unreasonable litigation risks – so long as they take certain statutorily defined measures to prevent harm.

    But that’s where things have broken down. The courts have ruled that Sections 512 and 230 shield platforms whether they take measures to prevent harm or not.

    When it comes to the distribution of pirated content, for example, the courts say that all the platforms need do is take down the infringing material once they are notified by the copyright holder – after the harm has already occurred. And if the infringing material is reposted repeatedly – which is incredibly easy to do – they don’t need to take it down until the copyright holder notifies them again.

    And when it comes to other violations of law by their users (such as, say, selling counterfeit products or disseminating malware), the platforms do not need to do a thing as long as the violations are not felonies. They can allow the illegal behavior to continue and even profit from the advertising revenues, subscription fees, and the collection of valuable data generated by the illegal activity.

    As a result, platforms are incentivized to get as much user-generated content up as fast as possible – and to avoid concerning themselves much, if at all, with what that content is or where it comes from.

    Mitigating potential harm from the content they carry only slows them down, costs them money, and deprives them of potential revenue. Unlike virtually every other business, Facebook, Google, and other internet companies face almost no legal risk for ignoring criminal behavior on their services – so why on earth would they bother?

    The platform companies respond little to conscience or societal pressure. Only meaningful legal risk gets their attention and incentivizes them to consider the harmful social side effects of their business models.

    Thankfully, Congress has, at long last, decided that it has had enough of this attitude on the part of the platforms. Lawmakers seem intent on fixing the interpretations of Sections 512 and 230 that have allowed the digital platforms to grow unfettered while dividing us, misinforming us, surveilling us, and stealing from us.

    While Section 230 reform will snag the lion’s share of the headlines following the Capitol Building riot, major changes to the DMCA will also be under review in 2021.

    Additionally, with the recent passage of two powerful new copyright provisions, creatives will have improved opportunities to protect their work in the internet era. The Protecting Lawful Streaming Act (PLSA) makes the massively harmful, large-scale illegal streaming of copyrighted material subject to felony prosecution for the first time. And the Copyright Alternative in Small-Claims Enforcement (CASE) Act establishes a small-claims tribunal where creative individuals and small businesses can protect themselves from infringement outside of federal court.

    While creatives are grateful for these important legislative changes, there is still a long way to go to make internet companies more accountable for how their platforms are used. Until Congress requires the platforms to change their culture of carelessness from the ground up, abuses will only grow. For now, it is Americans who are suffering while these large tech platforms flourish.

    This article was originally published by CreativeFuture.