
Should User-Generated Content (UGC) be Exempt from Law and Regulation? Should Internet Platforms Bear any Responsibility for UGC they Distribute?

Should user-generated content (UGC) on social media platforms be exempt from regulation and the rule of law simply because it is user-generated? Should social media platforms be given a pass when it comes to any responsibility for the UGC that they distribute? That seems to be the message from those busy attacking Canadian Heritage Minister Steven Guilbeault for proposing legislative amendments to broadcasting regulation (Bill C-10), and for promising future legislation that will require the platforms to control “online harms”, another form of UGC. Bill C-10 (in its current form) would subject the platforms that host UGC (YouTube, owned by Google, being the prime example) to some regulation with respect to that content. The “online harms” legislation has yet to be introduced, although Guilbeault has made clear it is coming this spring. (“Online harms” refers to child sexual exploitation, hate speech, revenge porn and incitement to violence.) That Bill’s exact provisions remain to be determined.

Bill C-10

With respect to Bill C-10, the issue is whether the online platforms will be considered “broadcasters” when disseminating video content posted by users. If so, and if Guilbeault’s proposed legislation is enacted, that content would be subject to “discoverability” criteria established by Canada’s broadcast and telecommunications regulator, the CRTC (“the Commission”) to ensure that Canadian content is promoted. The legislation has run into a buzz-saw of opposition from various quarters and has quickly become politicized. Guilbeault has been accused of wanting to “censor” the internet.

Strangely, considering the focus of the criticism, the primary objective of the Bill is not to regulate user-generated content. Rather, it is to bring online streaming services under the purview of the broadcasting regulator to ensure that Canadian content is promoted and made “discoverable”, among other obligations.

Should there be a UGC Carve-out?

The original version of the Bill included an explicit carve-out for user-generated content in order to reassure consumers that they were not being targeted, but once review by Parliamentary Committee began it was quickly realized this would create a massive loophole that could be exploited by the social media platforms. They could have used the UGC exception to avoid obligations being imposed on other streaming services, such as Spotify for example, with respect to Canadian music. An amendment was therefore proposed dropping the explicit exclusion for UGC. This prompted critics to charge the government with interfering with free speech and dictating what Canadians can post on social media. This is total hyperbole and the critics from the main opposition party, the Conservatives, are surely aware of this, but politics is politics.

Intense Criticism

It has not helped that Guilbeault has struggled to explain clearly the intent of the legislation, which is targeted at the platforms, not consumers. Some of the criticisms have come close to becoming a personal vendetta, with Michael Geist of the University of Ottawa leading the charge, accusing Guilbeault of giving “disastrous” interviews that should lead to him being fired. Geist has been on a campaign for weeks to discredit the legislation, Guilbeault, and the government’s agenda to confront the web giants, publishing almost daily attacks on his blog. Geist is particularly unhappy that Guilbeault and the Heritage Ministry have been given the file to run with rather than the usually more Silicon Valley-friendly Ministry of Innovation, Science and Economic Development. In other words, the “culture” mavens seem to have priority over the techies who guard “industry” interests.

What’s the Real Issue?

With regard to the policy intent of Bill C-10 (amendments to the Broadcasting Act), one can legitimately question whether Canadian content “discoverability” requirements are needed, or indeed whether streaming services should be treated as broadcasters. Even the whole question of Canadian content quotas can be debated. I, for one, remain to be convinced that enhanced discoverability requirements are needed to get Canadians to watch more Canadian content (Cancon). And then there is the question as to what constitutes Cancon, but that is another entire topic. But just to give one example of the arcane rules that govern Canadian content, a project produced by Netflix with a Canadian story, Canadian actors and Canadian writers will not qualify as Cancon if it is fully financed by Netflix. Why? Somehow, the money is not “Canadian” enough. Go figure. Since establishing its Canadian operation in 2017, Netflix has spent over $2.5 billion on production in Canada, but much of that does not count toward content quotas. (An earlier blog I wrote on this topic, “Netflix in Canada: Let No Good Deed Go Unpunished”, explains how difficult it is for companies like Netflix to qualify.)

In my view, the answer to getting Canadians to watch more Cancon is to produce more good quality Canadian content. (Schitt’s Creek is a prime example of successful Canadian programming that does not need to be “discovered”). However, putting the Cancon question aside for a moment, the issue is now whether a level playing field will be established for all streaming services. If discoverability requirements are going to be applied to streaming services, then social media platforms should not be given a pass simply because they host user-generated content.

Is UGC Sacrosanct?

There is nothing sacrosanct about UGC that puts it into a separate universe. For the most part, it should be left alone as it forms part of the free expression of society, but where and when it crosses the line of the law, or falls into an area subject to regulation, there is no reason why UGC should be treated differently from any other content. The killer of 51 people at two mosques in Christchurch, New Zealand, live-streamed the shootings on Facebook. That live-stream was 100% UGC. Some critics claim that subjecting UGC appearing on YouTube to CRTC oversight will impair free speech rights and would be contrary to the Canadian Charter of Rights and Freedoms. This, despite an opinion from the Department of Justice, backed up by testimony from the Minister of Justice (himself a distinguished legal scholar), explicitly dismissing claims that any provisions of the Bill would violate Charter freedoms.

Why Include Youtube?

Why extend the content discoverability requirements to YouTube? Because YouTube is a major distributor of music and video, and in fact acts as an online broadcaster—although the content is user-generated. (There are more than 35 million YouTube channels, most of them with an admittedly small following.) According to a Ryerson University study (quoted in the Toronto Star), 160,000 Canadians post content on YouTube, with 40,000 of them earning revenue. Would subjecting this “broadcast content” to discoverability requirements be an impairment of free speech rights? Why would it be? Nothing is censored, nothing is “taken down” or “buried”. Users are free to post what they wish. Indeed, that is part of the problem. Sometimes what they post is illegal, infringing or libellous.

The fact that content is user-generated is no reason to exempt it from regulation deemed to be in the public interest (although there may be different viewpoints as to what constitutes the public interest). Where it falls within regulation, user-generated content—especially when produced for commercial purposes such as ad-supported YouTube channels—should not have an unfair advantage over other forms of content.

Net Neutrality

Another argument against applying any regulation to the distribution of UGC is that CRTC oversight will undermine net neutrality. Vocal C-10 critic Michael Geist claims that Guilbeault’s bill shows the Canadian government has abandoned its support for this principle. This is an old canard regularly trotted out by opponents of any internet regulation. By Geist’s own admission, net neutrality requires ISPs to avoid practices that would unfairly give preference to certain content over others through discriminatory charges. In particular they are required to not favour content in which they have a financial interest over other content that may compete with it. Net neutrality is founded in the common carrier concept that emerged from the telegraph era when companies like Western Union prevented competing news services from using their telegraph system to file competing news stories. The principle is the same today. But net neutrality has never meant that there should be no regulation of internet content. The best example of the need for regulation is the question of “online harms”, the next Guilbeault shoe set to drop.

Expected “Online Harms” Legislation

Right now, Bill C-10 is the target of the critics, but I am sure that when the “online harms” legislation is tabled (shortly), we will hear the same complaints about how it interferes with freedom of expression on the internet. This raises yet again the fundamental question as to whether government has any role in regulating what appears on social media. The answer, surely, must be yes—subject to the normal protections regarding freedom of expression. In Canada this is done through the Charter of Rights and Freedoms. Section 2(b) of the Charter protects, “freedom of thought, belief, opinion and expression, including freedom of the press and other media of communication”. But that is subject to limitations. The Canadian government’s explanation of the Charter says, right up front with respect to the freedoms that it guarantees, “The rights and freedoms in the Charter are not absolute. They can be limited to protect other rights or important national values. For example, freedom of expression may be limited by laws against hate propaganda or child pornography.”

That is apparently what the online harms legislation will do. Michael Geist doesn’t like that legislation either. In an opinion piece in Maclean’s (once described as Canada’s national news magazine), Dr. Geist attacked the online harms legislation because it will likely include a mechanism to block illegal content hosted by websites outside Canada that are beyond the reach of Canadian law. According to him, this would “dispense with net neutrality”. If net neutrality means protecting the rights of offshore websites to disseminate hate speech, material that sexually exploits children, and incitement to violence and terrorism (most of it UGC, by the way), then net neutrality is not worth protecting. But of course, this has nothing to do with the meaning of net neutrality. Net neutrality as a huge umbrella protecting everything on the internet exists only in the minds of the cyber-libertarian claque.

An “Internet Firewall”?

Disabling access by consumers to illegal content hosted offshore is not some Orwellian plot. It is a reasonable application of the law to rogue sites that thumb their nose at national legislation because they are hosted somewhere in cyberspace. Opponents of any form of site-blocking claim that it creates an “internet firewall”, with obvious comparisons to the “Great Firewall of China”. What China is doing to limit access to online content by Chinese citizens parallels other censorship and behaviour control measures instituted by the authorities in China. But China is China. Canada is Canada. To equate targeted blocking of content that is illegal under the Canadian criminal code with the kind of thought control techniques exercised by the Communist Party in China is fanciful. Another potential use for targeted site-blocking, subject to all the requirements of due process—application, hearing, appeal, etc.—is to disable access by consumers to offshore sites hosting illegal, pirated, copyright-infringing content. See my recent blog “Site-blocking for ‘Online Harms’ is Coming to Canada: Similar Measures to Fight Copyright Infringement Should Follow”.

Expeditious Takedown

In the same op-ed, Prof. Geist also objects to the fact that platforms will likely be required to take down illegal content within 24 hours. This is similar to Australian legislation passed after the Christchurch killings that requires platforms to “expeditiously” take down “abhorrent violent material” when notified to do so. Geist claims this approach substitutes speed for due process. But sometimes speed is precisely what is needed when the harm is so egregious that action must be taken immediately. One would expect the platforms to exercise their own oversight in such cases, but experience has shown that they often will not act unless required to do so.

Holding Big Tech Accountable

At the end of the day, the key question comes down to whether UGC has some special place as a form of speech that cannot be regulated or subjected to lawful oversight, and to what extent the social media platforms that host and thrive on UGC should bear any responsibility for the content they allow to be posted. For far too long, the platforms have hidden behind the pretence that they are just neutral “bulletin boards” with no responsibility to vet what goes up on those boards. They employ terms such as “net neutrality” and “freedom of speech” to duck any responsibility for offensive and illegal content that they are happy to monetize—and on occasion even encourage. Some of this is copyright-infringing content, which is why I am writing about UGC on this copyright blog. By sprinkling magic dust on UGC to make it “different”, the big tech platforms have tried to duck their share of responsibility for allowing and exploiting infringing content, shifting all the burden to the users they enable.

One thing is certain. Change is coming. Platforms are increasingly being held to account for the content they carry, in Australia, the EU and Canada. In the US, Section 230 of the 1996 Communications Decency Act, the “get out of jail free” card that the internet platforms have used for years to avoid any responsibility for online content that they host and distribute, is coming under serious scrutiny. Those opposed to any change in the status quo are fighting a furious rear-guard action, invoking hallowed and sacrosanct concepts such as free expression (the First Amendment in the US, the Charter in Canada), net neutrality, lack of due process, and so on, all in a vain attempt to head off any restrictions on big tech and any effort to hold it more accountable.

Conclusion: UGC Must Comply with Laws and Regulation

I cannot predict at this stage what the final shape of Bill C-10 will look like, or whether Steven Guilbeault will be able to withstand the furious attacks by opponents seeking to strip user-generated content (UGC) out of the legislation. As for the online harms legislation, we will have to wait to see how it deals with harmful and illegal content on the internet, much of it generated by users. If it requires platforms to expeditiously take down harmful material, that will be a good thing. If it provides a mechanism to prevent consumers from accessing purveyors of illegal content who avoid Canadian law by locating their servers offshore, that would also be a good outcome.

With regard to C-10, although you can question the necessity for bringing streaming services under the broadcasting regulator and applying Canadian content and discoverability requirements to them, if that is the policy direction, then there is no reason to give Youtube a pass simply because it commercializes user-generated content. Laws and regulation must apply to UGC, subject to constitutional limitations, just as they do to other forms of content. To act otherwise creates a massive loophole that undermines policy delivery, is unfair to other content services, and tilts the playing field by impeding fair market competition.

This article was originally published in Hugh Stephens Blog.


Fair Use ‘Abuse’ is No Excuse for YouTube to Hold Back Piracy Tools

Big Tech companies have a knack for wielding the “free speech” argument as a shield to avoid tackling the systemic criminal behavior on their platforms. This tendency bleeds into the realms of copyright disputes and content protection, where Big Tech’s deep-pocketed, anti-copyright advocacy network often vilifies efforts to fight piracy online as dire threats to our cherished internet liberties.

This tactic, already deeply misleading, reaches new levels of bad faith when these groups turn to “fair use,” which they repeatedly weaponize to make baseless attacks on automated content filtering solutions such as YouTube’s Content ID and Content Verification Program.

At a Senate DMCA hearing in June of last year, for instance, Public Knowledge’s Meredith Rose argued that automated filtering systems chill free speech “all the time” by giving users too much power to take down works that qualify as fair use, such as commentary or parody. And in December, the EFF – a notorious Big Tech bulldog – echoed that claim even more forcefully, arguing that one of YouTube’s automated tools “discourages fair use and dictates what we see online.”

Perhaps the cruelest irony facing creatives who have had their work pirated on YouTube and are left with no recourse is that YouTube has the tools to find and fight infringement. But YouTube just won’t give them to most individuals. And the purported concerns over “fair use” are a big reason why.

The Suite

In fact, YouTube has an entire suite of content protection tools, including Content ID and the Content Verification Program (CVP), which automatically detect unauthorized works on the platform and give rightsholders options to remove or monetize them. CVP is less sophisticated than Content ID but gives users the ability to quickly root out unlicensed uploads of their works and file takedown notices in large batches.

Then there is Copyright Match, which uses the same matching technology as Content ID but is only useful for finding full uploads of the user’s original videos – and YouTube will only allow use of Copyright Match if the creatives have previously uploaded a full version of their copyrighted content to the platform themselves.

Finally, there is YouTube’s “default” copyright protection “tool,” a cumbersome Copyright Takedown Webform that must be filled out anew for each and every new alleged copyright violation. When the same pirated work pops up elsewhere on YouTube, even after an earlier version was taken down, the form must be filled out all over again. It’s like Groundhog Day.

Alas, most creatives are relegated to using the webform since YouTube is notoriously stingy about who receives access to its higher-level offerings. In the case of Content ID – by far the most powerful tool of them all – this withholding makes sense, as its complicated dashboard is designed for large-scale copyright owners (such as movie studios and music labels) who may have to manage thousands of titles across hundreds of international territories.

But the very effective Content Verification Program seems tailor-made for creatives with fewer copyrights and less complicated management scenarios. CVP has been proven to be a boon to creatives who own their copyrighted work outright and simply want an automated tool for finding – and easily removing – their content from the platform. Expanding access to CVP could be enormously beneficial to thousands of creative individuals and small businesses with smaller but frequently pirated catalogues.

Unfortunately, very few creatives are granted access to CVP, and YouTube provides no set guidelines explaining who gets it and who doesn’t – other than a hazy suggestion that “If you often need to remove content and have previously submitted many valid takedown requests, you may be eligible for our Content Verification Program.”

But even when creatives meet both these vague criteria, they are routinely denied not only CVP but even the significantly less effective Copyright Match. That leaves them the (non-)choice of hunting down and removing every last unauthorized upload on their own, one Copyright Takedown Webform submission at a time, and doing it over and over and over.

The Myth of Fair Use Abuse

Why does YouTube not offer the appropriate existing content protection tools to more creatives? Shouldn’t every rightsholder have the chance to quickly and easily protect their own work from being exploited by others on a platform worth many billions of dollars?

EFF would like you to think this is the reason: Giving copyright owners free access to reasonable content protection tools would deprive users of their free speech rights because copyright owners would file copyright claims en masse on content that actually qualifies as fair use.

Rebecca Tushnet, an outspoken copyright critic from Harvard, provided a particularly succinct summary of this deceptive viewpoint in her written testimony for a February 2020 DMCA Senate hearing: “Automated systems [like Content ID] don’t respect fair use and other limits on copyright,” she wrote, “harming the creators copyright is supposed to serve.”

So that’s Orwellian reasoning. The Tushnet contingent believes that the most effective tools we have for stopping the theft of creative works on the world’s biggest video platform are actually harming creatives. And their reasoning, in part, involves a purported plague of fair use abuse.

But there is no plague. And you don’t have to take our word for it, because Google (YouTube’s parent company) agrees. At yet another DMCA hearing, in December 2020, Katherine Oyama, Google’s Global Head of IP Policy, testified that “less than 1% of all Content ID claims made in the first half of 2020 were disputed.” And within that tiny amount, according to Google’s own YouTube data, “60% resolve in favor of the uploader.” Do the math and that leaves 40% of this dispute total resolving in outcomes that do not favor uploaders, some of which, yes, surely involve the denial of those uploaders’ fair use claims.

To summarize: Only one percent of all Content ID claims are disputed to begin with, and within that one percent, only 40% of the disputes, fair use or otherwise, are resolved against the uploader – that is, the copyright claim stands.

To simplify: That’s four-tenths of one percent of all infringement claims that could potentially be wrongful fair use takedowns.
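(For anyone who wants to check that arithmetic, here is a minimal Python sketch using the figures quoted above; the percentages come from the cited testimony and are taken as given rather than independently verified.)

```python
# Back-of-the-envelope check of the Content ID dispute figures quoted above.
# These percentages are the ones cited in the testimony; treat them as
# illustrative inputs, not independently verified data.

disputed_share = 0.01               # "less than 1% of all Content ID claims ... were disputed"
resolved_for_uploader = 0.60        # "60% resolve in favor of the uploader"
resolved_against_uploader = 1 - resolved_for_uploader   # the remaining 40%

# Upper bound on claims that could be wrongful fair-use takedowns:
potential_wrongful_share = disputed_share * resolved_against_uploader
print(f"{potential_wrongful_share:.1%} of all Content ID claims")
# -> 0.4% of all Content ID claims
```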

Doesn’t sound like Content ID is suppressing our internet freedoms “all the time”… does it?

One Big Problem

Of course, even a broken clock is right twice a day. The anti-copyright cohort is correct that there are instances where the use of automated copyright protection services leads to some legitimate fair uses being wrongly taken down. But, as demonstrated above, the volume of these takedowns is microscopic in relation to the overwhelming amount of piracy that could be, and already is, reversed by tools like Content ID and the Content Verification Program.

Furthermore, in those rare instances where a user can legitimately contest a takedown on fair use grounds, YouTube has an appeals system that works to resolve false or invalid claims and restore the affected content as quickly as possible. Yes, even in the tiny fraction of a percent of cases where a takedown claim is invalid, the perceived threat can be mitigated by YouTube’s built-in resolution system.

So it’s time to get to the truth – the risk that YouTube’s content filtering tools will inadvertently take down content backed by a legitimate fair use claim is minuscule. And the harms to creatives that are prevented by the use of filtering tools are vastly larger than the inconveniences imposed on a very small number of creatives by the use of these tools. In short, fair use abuse is no excuse to bar quality content filtering tools, and access to these tools should be in the hands of more creatives.


YouTube Bares its Anti-Creative Roots

In case you had any doubt how YouTube feels about artists, the company made its views clear in November with a sneaky alteration to how it monetizes videos. If you were among those who always thought that YouTube feels nothing for artists, this news proved you were 100% correct!

“Starting today we’ll begin slowly rolling out ads on a limited number of videos from channels not in [the YouTube Partner Program],” YouTube announced in a blog post summarizing a recent raft of changes to its terms of service. “This means as a creator that’s not in YPP, you may see ads on some of your videos.”

Here is what this means: A creator who is interested in sharing ad revenue with YouTube must be enrolled in the YouTube Partner Program (YPP). It used to be the case that if you were a creator who wanted to keep your videos free of ads on the site, you could simply not enroll in YPP. Well, not anymore! Now, YouTube has decreed that it can run ads on all videos – however, if “you’re not currently in YPP,” the post continued, “you won’t receive a share of the revenue from these ads”.

Unsurprisingly, this brazen cash grab sent shockwaves through the creative community. One YouTuber described the change as the greediest move he’d ever seen, noting that “[i]f you’re a small channel, struggling to grow and haven’t yet gotten monetization, YouTube will run ads now and take 100% of the profit from your work.”

So, why not just join the YouTube Partner Program and start collecting that sweet, sweet ad cash? Easier said than done.

To qualify for YPP, your channel must have logged 4,000 hours of watch time in the prior 12 months and have at least 1,000 subscribers. YouTube bragged recently that the number of members who reached this elite threshold doubled in 2020 – but they didn’t actually disclose what that number is. That’s probably because all evidence points to it being pretty tiny.

Last year, the social media data firm Social Blade counted more than 37 million YouTube channels with at least five subscribers. Twenty million of them had between 10 and 100 subscribers, and 12 million had between 100 and 1,000 subscribers – which means that, in the best-case scenario, tens of millions of YouTube channels fail to meet the 1,000-subscriber threshold required to enroll in YPP. That’s tens of millions of channels that YouTube has now given itself blanket permission to profit from without paying them a penny.
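(A minimal sketch, taking the Social Blade bucket counts above at face value, shows how those figures add up to “tens of millions”:)

```python
# Rough tally of the Social Blade channel counts cited above (2020-era
# figures quoted in this article; treated here as illustrative).

channels_10_to_100_subs = 20_000_000     # "Twenty million of them had between 10 and 100 subscribers"
channels_100_to_1000_subs = 12_000_000   # "12 million had between 100 and 1,000 subscribers"

# Channels in these two buckets cannot clear YPP's 1,000-subscriber floor,
# regardless of how much watch time they accumulate:
below_subscriber_floor = channels_10_to_100_subs + channels_100_to_1000_subs
print(f"{below_subscriber_floor:,} channels under the 1,000-subscriber threshold")
# -> 32,000,000 channels, i.e. "tens of millions"
```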

Of course, YouTube is a private company. It has every right to monetize whatever lawful content it wants to on its platform. But YouTube also likes to portray itself as a creator-friendly platform that pays content creators fairly (never mind the fact that it pays between just three-tenths and one-half of one cent per video view and that some of its most popular channels may still be generating less than $17,000 per year) – and this move is anything but that. On paper, YouTube has less incentive than ever to bring more channels into its revenue-sharing partner program. Why accept more members into YPP when it can now monetize any video it wants to, from the tens of millions of channels outside of the program, and keep 100% of the money?
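(To put the per-view figures in that parenthetical into perspective, here is a quick, purely illustrative calculation of how many views per year those rates imply for a channel earning $17,000 annually.)

```python
# Illustrative arithmetic only: how many annual views the quoted per-view
# payout rates imply for $17,000 in yearly earnings.

low_rate, high_rate = 0.003, 0.005   # three-tenths to one-half of one cent per view, in dollars
annual_earnings = 17_000             # dollars per year

views_at_high_rate = annual_earnings / high_rate
views_at_low_rate = annual_earnings / low_rate
print(f"{views_at_high_rate:,.0f} to {views_at_low_rate:,.0f} views per year")
# -> roughly 3,400,000 to 5,666,667 views per year
```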

But the ramifications this monetization change could have for creator livelihoods are only the beginning of why it is troubling.

YPP exists in the first place because YouTube had a big problem. It was monetizing horrible videos involving hate speech, extremism, violence, and other harms. It was not only profiting from some of the most toxic content imaginable but pairing some of its most prized advertisers with said content. By YouTube’s own admission, YPP’s stringent eligibility requirements are a way of “strengthening our requirements for monetization so spammers, impersonators, and other bad actors can’t hurt our ecosystem or take advantage of [users], while continuing to reward those who make our platform great.” Noble words that, in retrospect, ring completely hollow. Now that YouTube has codified its ability to monetize any video, whether it’s in the program or not, how is it going to avoid backsliding, running ads against sketchy and dangerous content?

We might know the answer to that question if YouTube offered any transparency whatsoever about this latest decision – but as per its custom, the details of the monetization update are incredibly vague and create many more questions than answers. How is YouTube determining which non-YPP videos get monetized while excluding potentially harmful videos? What is the engine of the moderation process for non-YPP videos? Humans? AI? The update says a “limited number” of non-YPP videos are being monetized – but what is a “limited number” when we’re talking billions of hours of content? Millions of videos being monetized without permission? Tens of millions?

Then again, at least YouTube has provided some kind of warning here. Sure, it is tucked deeply away in a dusty corner of the site’s terms of service – but it is there, clearly signaling to any potential uploader who happens to stumble across it that YouTube may secretly monetize their video and pay them nothing.

Meanwhile, there is a whole other category of creatives who see their works monetized on YouTube all the time without any fair warning: victims of piracy, ranging from filmmakers to musicians to podcast hosts and beyond.

From Content ID to the Content Verification Program, YouTube has the tools to help these creatives find and take down pirated versions of their works. These offerings even give creatives the ability, if they so choose, to share in any revenues generated rather than taking the pirated versions down. But YouTube arbitrarily withholds such tools from most creatives, leaving them to spend endless hours scouring the platform’s ocean of content to find pirated copies of their works, then filling out cumbersome takedown forms to request that each individual instance of infringement be removed… until, of course, the next person posts a pirated copy, and then the process starts all over again.

If YouTube really wanted to “reward those who make our platform great,” it would start by expanding access to its content protection tools and empowering creatives to better protect their own copyrighted works on the site. Then again, YouTube’s own executives have called the platform a “pirate site.” Profiting from other people’s works without permission is in its DNA.

This article was first published in CreativeFuture.


5 Burning Questions for Facebook Chief Product Officer Chris Cox

It’s been a minute since we last heard from Chris Cox, the man who at one time was, as WIRED put it, “effectively in charge of product for four of the six largest social media platforms in the world.”

As Facebook’s longtime chief product officer, Cox has effectively been Mark Zuckerberg’s “chief of staff for executing product strategy” since at least 2014. Zuckerberg might be the “face” of Facebook, but Cox is, per the company’s own employees, its “heart and soul.” Under his steady hand, the platform grew into a global behemoth with more than two billion users and a market value of more than $500 billion. He played a major role in the creation of News Feed and, somewhere along the way, was handed the reins of WhatsApp, Messenger, and Instagram as well.

It is not an exaggeration to say that, by 2019, Cox was the most powerful chief product officer (CPO) in the world. In an era where the line between digital life and real life is almost nonexistent, that made him one of the most powerful people in the world, full stop.

But then something went awry. Zuckerberg announced a shift toward end-to-end encryption and integrated the four apps in Cox’s portfolio under one blanket. Shortly thereafter, in April 2019, Cox left the company. He was reluctant to share the reason for his departure, but we know that Cox was an advocate for limiting toxicity and preserving the safety and well-being of Facebook users. So, it is not difficult to imagine at least one reason why he left: increased encryption, while enhancing privacy, makes it more arduous than ever for Facebook to curb hate speech, human trafficking, conspiracy theorizing, terrorist plotting, and all the other terrible behaviors that have turned it into a democracy-threatening cesspool.

But then, barely a year after his resignation, Cox returned to his CPO job, citing an urge to “roll up my sleeves and dig in to help” in the face of a “public health crisis, an economic crisis, and now a reckoning of racial injustice.” It was a bafflingly quick turnaround. If anything, Facebook had become a far more fraught place to work during Cox’s absence, shouldering more and more of the blame for societal maladies ranging from rampant misinformation, to tech addiction, to election tampering. Antitrust investigations – both at home and abroad – swirled around the company like a tornado, and Facebook’s own employees were in a state of perpetual discontent.

All of these monumental challenges seemed only to motivate Cox even more. “In the past month the world has grown more chaotic and unstable, which has only given me more resolve to help out,” he wrote. Since then, other than an update here and there on his Facebook page, he has not been very visible – which is a shame. As the chief product officer of the world’s biggest social media platform, Cox remains one of the world’s most influential technology executives. We would like to hear more from him. Per his own staff, Cox is “everything Mark wished he could be”, and is renowned for his ability to compellingly explain what the company is up to, for better or for worse.

We can’t seem to get serious answers from Zuck. By contrast, taken at his word, Chris Cox seems to instinctively want to help make things better. So here are five questions we’d like to ask Chris Cox:

1) The election has come and gone and the actual balloting went pretty smoothly, at least relative to the dystopia of chaos and violence that many feared. But with tight results and delayed counting in several states, the election misinformation machine began firing on all cylinders. Facebook’s response to it was limited to adding a notification to posts that included “premature claims” of victory by candidates, and to putting limits on political advertising. Nevertheless, Facebook was rife with misleading and threatening posts and misinformation in the aftermath of Election Day.

Do you, Chris Cox, really feel your company’s response to misinformation on and after the election was sufficient when the stakes were so high for our country? Do you really feel you did enough when a key pillar of our democracy – a free and fair election – was under unprecedented attack?

2) The much-touted Facebook Oversight Board has finally begun reviewing high-impact content moderation claims, including some recent cases involving nudity, incitement of violence, and hate speech. The board writes that once it has ruled on these items, “Facebook will be required to implement our decisions”.

Chris Cox, what will that implementation look like? One of the claims, for example, is over the removal of a video and comments criticizing France’s refusal to authorize certain controversial drugs for the treatment of COVID-19. Let’s say the Board agrees this post should, in fact, be removed – as Facebook has already done based on violation of its Violence and Incitement policy. What happens next? Other than keeping the post down, how do you foresee “implementing” the Board’s decision in future product developments? As the Board makes specific decisions on complaints (some of which may already be moot because they are no longer burning issues), how will Facebook turn the lessons of these cases into actionable, positive, systemic changes at Facebook?

3) You left Facebook following Zuckerberg’s announced plan to focus on an “encrypted, interoperable, messaging network.” You never explained why (so far as we are aware), but some have speculated it is because your work to prevent hate speech, conspiracy theories, human trafficking, and other harms on the platform would be jeopardized by an infrastructure that could make Facebook’s most toxic content invisible – even to Facebook. In any case, you are back in the saddle again, but we haven’t heard how you are squaring your concerns about encryption with your efforts to prevent harmful speech.

So, what has changed? Have you made your peace with encryption? How will it affect your efforts against harms on Facebook’s platforms?

4) In 2008, you served, for a short time, as Facebook’s director of human resources. You later said of the experience that it taught you that “we don’t need innovation in the field of HR and recruiting, we just need competent managers.” Since then, in the wake of scandal after scandal, employee dissent has become commonplace at Facebook.

If you were leading HR today, how would you handle these concerns? What would you do to fix a company culture that isn’t just rankling current Facebook workers but jeopardizing its incoming talent pipeline?

5) And lastly, Chris, a question near and dear to our hearts. Facebook, and your Big Tech peers, report that you have committed substantial resources to meet the biggest headline challenges you face about content moderation. No doubt about it, it is difficult to make fast and fair decisions about highly subjective material without, as Zuckerberg often laments, becoming “arbiters of truth.” Heck, you even spent $130 million establishing the aforementioned independent oversight board to do nothing less than try to answer some of the most difficult questions around freedom of expression online.

Our question is this: if you can put this much time and effort and spending into the difficult content moderation issues, why can’t you commit more resources to cleaning up the easy stuff – i.e., reducing the incidence of piracy on your platform? You are willing to deauthorize groups that engage in bad behavior – why not extend that to the tons of “free movie” groups where the members aren’t just linking to external piracy sites but actively hosting the full uploaded films on your platform? That kind of behavior is awfully obvious and doesn’t require wrestling with “truth.” Piracy is a crime and it is costing the U.S. economy at least $29.2 billion every year. Why is it still on your platform?

This article was first published on CreativeFuture.


Big Tech’s Culture of Carelessness

Coinciding with the Big Tech CEOs getting chewed out by Democrats and Republicans alike late last year in one of several Section 230 hearings, Yaël Eisenstat – ex-CIA officer, past White House advisor, and Facebook’s former head of global elections integrity operations – rendered a diagnosis of what’s gone wrong.

“[A]t the heart of the problems with big tech,” she wrote in an article for The Information, “is an issue unlikely to be addressed: company culture.”

Eisenstat’s prognosis was not good. “The very culture that propelled these companies to unprecedented success is at odds with the way they would need to operate if they truly want to fix some of the negative societal impacts fueling the ‘techlash,’” she continued.

Or, to put it bluntly: prioritizing growth, speed, competitive advantage, and profits above all else is in their DNA, no matter the societal costs.

In recent weeks, we have learned all too well what those societal costs can be, as years of divisiveness and toxic rhetoric online spilled over into real life, and real mayhem in America.

Political views aside, when speech incites violence, whether online or IRL, it is not acceptable. Criminality is not protected under the First Amendment and the internet platforms have simply not done enough to curb the user behavior that incites it.

Of course, the platforms would beg to differ.

Companies like Facebook and Google knew from their inception that, with modest exceptions, they would not be held accountable for the user-generated content they distributed. Although not Congress’ intention, two “safe harbors” — specifically those found in Section 512 of the Digital Millennium Copyright Act (DMCA) and Section 230 of the Communications Decency Act (CDA) – have been interpreted by courts in a manner that effectively immunizes platforms from liability for enabling illegal activity by their users.

There’s nothing inherently wrong with legal safe harbors. Properly constructed, they can promote growth in socially beneficial sectors by providing businesses certainty that they will not be subject to unreasonable litigation risks – so long as they take certain statutorily defined measures to prevent harm.

But that’s where things have broken down. The courts have ruled that Sections 512 and 230 shield platforms whether they take measures to prevent harm or not.

When it comes to the distribution of pirated content, for example, the courts say that all the platforms need do is take down the infringing material once they are notified by the copyright holder – after the harm has already occurred. And if the infringing material is reposted repeatedly – which is incredibly easy to do – they don’t need to take it down until the copyright holder notifies them again.

And when it comes to other violations of law by their users (such as, say, selling counterfeit products or disseminating malware), the platforms do not need to do a thing as long as the violations are not felonies. They can allow the illegal behavior to continue and even profit from the advertising revenues, subscription fees, and the collection of valuable data generated by the illegal activity.

As a result, platforms are incentivized to get as much user-generated content up as fast as possible – and to avoid concerning themselves much, if at all, with what that content is or where it comes from.

Mitigating potential harm from the content they carry only slows them down, costs them money, and deprives them of potential revenue. Unlike every other business, there is almost no legal risk to Facebook, Google, and other internet companies for ignoring criminal behavior on their services – so why on earth would they bother?

The platform companies respond little to conscience or societal pressure. It is only meaningful legal risk that gets their attention, and incentivizes them to consider the harmful social side effects of their business models.

Thankfully, Congress has, at long last, decided that it has had enough of this attitude on the part of the platforms. Lawmakers seem intent on fixing the interpretations of Sections 512 and 230 that have allowed the digital platforms to grow unfettered while they are dividing us, misinforming us, surveilling us, and stealing from us.

While Section 230 reform will snag the lion’s share of the headlines following the Capitol Building riot, major changes to the DMCA will also be under review in 2021.

Additionally, with the recent passage of two powerful new copyright provisions, creatives will have improved opportunities to protect their work in the internet era. The Protecting Lawful Streaming Act (PLSA) makes the massively harmful, large-scale illegal streaming of copyrighted material subject to felony prosecution for the first time. And the Copyright Alternative in Small-Claims Enforcement (CASE) Act establishes a small claims tribunal for creative individuals and small businesses to protect themselves from infringement outside of federal court.

While creatives are grateful for these important legislative changes, there is still a long way to go to make internet companies more accountable for how their platforms are used. Until Congress requires the platforms to change their culture of carelessness from the ground up, abuses will only grow. For now, it is Americans who are suffering whilst these large tech platforms flourish.

This article was originally published in CreativeFuture.


Shame Never Sleeps: Updating The Facebook Timeline Of Scandal and Strife

Day after day after day, Facebook harms our society, our democracy, and our world.  Their role as a haven for piracy is just one small part of their bad business. That’s why we must all demand #PlatformAccountability – Facebook’s abuses must be curbed.

What have they been up to lately?  Well, here are some of their latest bad acts… followed by the catalogue of harms we’ve been keeping on Facebook for the last few years.

Once you’ve read this – if you haven’t signed up to follow CreativeFuture yet, please come on board.  We’ll keep you informed and show you how you can add your voice to the demand for #PlatformAccountability.  If you’re already a friend of CreativeFuture, please share this with your friends!

 

TIMELINE UPDATES

October 23, 2019 – Zuck-Bucks Out of Luck on Capitol Hill

“Zuck-bucks” is our nickname for Facebook’s beleaguered cryptocurrency project, Libra. This terrible plan from a company that has lost America’s trust got hammered today on Capitol Hill, as enraged lawmakers lined up to give none other than Zuckerberg himself the what-for at a hearing assessing the project. Rep. Ayanna Pressley (D-Mass.) put it best when she told the CEO, “Libra is Facebook, and Facebook is you… You’ve proven we cannot trust you with our emails, with our phone numbers, so why should we trust you with our hard-earned money?” We couldn’t have put it better ourselves!

October 31, 2019 – Facebook Finally All Things to All People, Including Slave Traders

You know you’re having a bad year when the world discovers that your photo app with more than a billion monthly active users enables a thriving slave market. Today, a hard-hitting BBC News Arabic report finds that “domestic workers are being illegally bought and sold online in a booming black market” on Facebook’s Instagram app. (And it turns out that listings for slaves have also been promoted in apps “approved and provided by Google Play”.) We knew Quality Control was not these tech companies’ strong suit, but COME ON! At this point, it’s just getting surreal.

November 6, 2019 – Facebook Sued by Their Own State

Okay, here’s how you really know you’re having a bad year: You’re one of the most successful companies in an area of the country – Silicon Valley – that generates massive revenues for the state where it resides. For years, you helped make this state a shining beacon of progress and innovation and wealth, admired around the world. And today, none of it matters. Because now things have gotten so bad that Facebook’s home state of California has sued the company for “failing to comply with lawful subpoenas requesting information about its privacy practices.”

November 25, 2019 – Facebook Hit with Data Scandal No. Eleventy-Billion

We had no choice but to invent an entirely new number to account for Facebook’s unfathomably huge pile of data scandals. And it keeps growing. Today’s news involves a “software development kit” from Facebook called One Audience that gave third-party developers improper access to users’ personal data such as email addresses and usernames. Privacy? At this point, who cares?

December 9, 2019 – Facebook’s Fact-Checking Failures Are Bad for Your Health

Let’s add a new Facebook harm to the list: they may be making us physically sicker. The company’s abject failure to keep misinformation off their platform is actively “harming public health,” reports The Washington Post. Misleading ads on Facebook about medication meant to prevent the transmission of HIV are scaring patients away from taking the preventative drugs they need. The tech giant’s refusal to remove the content, say lesbian, gay, bisexual, and transgender advocates, has created nothing less than a “public-health crisis.”

December 19, 2019 – 267 Million Phone Numbers Exposed

Remember when an event where Facebook exposed millions of people’s records seemed inconceivable? Did you ever think it would become routine? Well, shockingly, it has. Facebook exposed 267 million phone numbers? Meh… that’s just today’s scandal. We’re very tired.

January 9, 2020 – Facebook Can’t Stop Won’t Stop… Letting Politicians Lie in Ads

Pop quiz: You’re a global internet giant who has lost the trust of pretty much everyone because you can’t (won’t?) curb misinformation on your platform. But now you have the opportunity to earn back some of that trust by reducing the spread of misinformation on your platform by politicians. So, what do you do? Why, just the opposite, of course! Today, Facebook announces that they will do nothing to limit the targeting of voters by politicians spreading misinformation. Paired with the previous announcement that they also will be exempting political ads from third-party fact-checking, Facebook has now notched two BIG strikes against the pillars of democracy.

January 11, 2020 – Skilled Workers Embarrassed to Apply at Facebook

All this scandal and strife, as horrible and embarrassing as it is, doesn’t seem to have slowed Facebook’s growth. But here’s something that might: The New York Times reports that tech companies have lost their status as “every student’s dream workplace”. These bad reputations are finally starting to catch up with them.

January 28, 2020 – DOJ Pulls Facebook’s Enemies into Antitrust Probe

It’s been relatively quiet on the antitrust front in recent weeks, but now comes news that one of the probe’s major players, the U.S. Department of Justice, has been setting up interviews with Facebook rivals – looking for insight into “the competitive landscape of the industry, along with their perspectives on and relationship to Facebook.” Just guessing, but we think the feds will hear something like this: “They’re ruthless and rotten to the core – break ‘em up!”

January 31, 2020 – At Last, Facebook Embraces True Purpose – ‘To Piss Off a Lot of People’

Today, from the CEO who recently said “my goal… isn’t to be liked, but to be understood,” comes a brand-new statement of total indifference to the human race. “This is the new approach, and I think it’s going to piss off a lot of people,” Mark Zuckerberg told a crowd at the Silicon Slopes Tech Summit in Utah. He was referring to Facebook’s claim that it will stand up for “free expression” and for encryption, damn the consequences for civilization. But not to worry, Mark – you’ve been pissing people off for years, starting with us.

Keep Hope Alive

As we all know by now, Facebook isn’t going to change – or clean up their act – on their own. For all their big talk about fighting misinformation and corruption, the truth is, they don’t care unless it makes them money. They don’t care how their decisions impact individuals, voters, creatives, or even the truth itself.

They may not care, but we care, and we think you do, too. And so do a lot of people in positions to do something about it, like our elected representatives.

Let’s make sure they know we’re with them, and that we want them to take action. Share this timeline with friends. Send it to your elected official of choice. Sign our petition!

This can be the beginning of the end of Facebook’s abuses… if we all step up.

It’s been long enough. #StandCreative

FULL TIMELINE

March 17, 2018 – Cambridge Analytica 

The Big Bang. Facebook has had its fair share of crises. But nothing – not even enabling abuse by a foreign power so pervasive that it may have swung an election – could ever put a dent in the company’s meteoric ascent… until now. On this day, a bombshell exposé of Facebook’s involvement with an “upstart voter-profiling company” called Cambridge Analytica was published, and the online universe was forever changed.

Aiming to influence the behavior of American voters like never before, the British firm utilized a Russian-built app to harvest the private information of more than 50 million Facebook users without their consent. Facebook knew about Cambridge Analytica’s harvesting for years, yet never felt the need to disclose what it knew to the general public, or apparently the government or anyone else.

CreativeFuture had been ringing the alarm bells for months at this point – but the Cambridge Analytica scandal made millions of people, across all industries, recognize that Facebook’s behavior was out of control.

The Cambridge Analytica story caused Facebook to lose $120 billion in stock market value in a single day, put Facebook in the spotlight of regulators around the world, and led to a precipitous decline in user trust. After years of fawning coverage by the press, something suddenly and significantly changed, as journalists started to seriously scrutinize the social network, uncovering tale after tale of questionable business practices fueled by a grow-at-any-cost mentality.

March 21, 2018 – WhatsApp Founder #DeletesFacebook

In the wake of Cambridge Analytica, the growing “delete Facebook” movement garners one of its most damning members: WhatsApp cofounder Brian Acton, who left his position with the messaging service over a year earlier. In 2014, WhatsApp became one of Facebook’s most important acquisitions, and has since more than tripled its user base to 1.5 billion worldwide. But today, Acton expresses his disgust and tweets, “It is time. #deletefacebook.”

April 4, 2018 – Oops… Actually, it Was 87 Million

Hey, remember just a couple of paragraphs ago when Facebook told everyone that Cambridge Analytica had information on around 50 million users? Today, Mark Zuckerberg tells reporters that figure is actually, probably closer to, oh, 87 million users. You know, only about 40 million more than the original disclosure. “I’m quite confident, given our analysis, that it is not more than 87 [million],” Zuckerberg adds. Well – consider us reassured.

April 10-11, 2018 – Zuck Amuck on Capitol Hill

Washington finally gets Facebook CEO Mark Zuckerberg to come to Capitol Hill for two days of grilling on the Cambridge Analytica incident. Questioned by both the Senate and the House in separate hearings, Zuckerberg remains chillingly cool and collected as he takes questions on topics ranging from regulation, to content moderation, to the future of artificial intelligence. His tone – respectful, self-admonishing, yet entirely confident in Facebook’s ability to fix what ails them – is right out of the company’s PR playbook, setting a discomfortingly nonchalant tone for what’s to come.

May 10, 2018 – The Russian Invasion

Even as the Mueller investigation of Russian involvement in our elections continues, much more blatant Russian meddling has been occurring right out in the open – across the vast, endlessly exploitable expanse that is Facebook. Today, Democrats on the House Intelligence Committee publish more than 3,500 ads from Facebook and Instagram (which is owned by Facebook) linked to the “Internet Research Agency,” a sinister Russian propaganda network.

Designed to divide Americans through targeted misinformation campaigns, the ads took on topics such as Black Lives Matter, immigration, Islam, and guns, disguising themselves through phony events, fake news stories, or provocative memes. “They tear at the parts of the American social fabric that are already worn thin,” writes Wired, “stoking outrage about police brutality or the removal of Confederate statues.” Ah, outrage… the rotten beating heart of the Facebook business model.

June 3, 2018 – Dirty Data Deals with Device Makers

Fresh on the heels of Cambridge Analytica, Facebook finds itself facing another data scandal, with the Times reporting today that the company allowed phone and device makers access to its users’ personal information. “It’s like having door locks installed, only to find out that the locksmith also gave keys to all of his friends so they can come in and rifle through your stuff without having to ask you for permission,” said former FTC chief technologist Ashkan Soltani.

July 4, 2018 – Lynchings in India

As Americans spend the day celebrating their hard-won freedoms in the United States, Facebook continues to show the perils of its “free” business model in other countries. Namely, in India, where Facebook’s shiny, $16 billion toy, WhatsApp, has come under fire for helping spread misinformation leading to brutal killings. In most cases, innocent bystanders were beaten to death by mobs fueled by rumors of child kidnappings, organ harvesting rings, and other lies spread on WhatsApp. “The abuse of [platforms] like WhatsApp for repeated circulation of such provocative content are equally a matter of deep concern,” said India’s Ministry of Electronics and Information Technology in a statement, adding that the messaging service could not “evade accountability” for its role in the proceedings.

July 11, 2018 – Cambridge Analytica Deemed Criminal

Had you forgotten about Cambridge Analytica already? Well, Britain sure hasn’t – today the United Kingdom’s Information Commissioner’s Office announces that, in failing to tell tens of millions of people how Cambridge Analytica harvested their information for use in political campaigns, Facebook broke British law. They fine the company a whopping £500,000, which Zuck could probably find between his couch cushions. But, it’s not so much the payout as it is the precedent – in this ruling, the UK signals to the world that the extent to which Facebook failed to safeguard its users wasn’t just callous – it was illegal.

August 6, 2018 – The Alex Jones Massacre

This one’s really embarrassing. After happily hosting the accounts of InfoWars founder and notorious conspiracy theorist Alex Jones for years, Facebook finally deems it appropriate to ban the man who used his popularity to aggressively frame the Sandy Hook shootings as “completely fake” and publicly vilify the families of its victims, among other crimes against humanity. After an incident in which Jones, with no evidence whatsoever, refers to Robert Mueller as a pedophile on his show, then threatens the Russia investigation special counsel with violence, Facebook decides that enough is enough.

Of course, we all know that Jones has been spewing such vitriol for some time, so why now? Only after increased public pressure forced other companies to ban Alex Jones did Facebook finally decide to stop taking the ad revenue he generates. They ban this racist hate-monger because he has become bad for business – not because it’s the right thing to do.

August 22, 2018 – Pissing Off Apple, Part I

“If you were on the edge of your seat wondering what Facebook’s next major consumer privacy headache would be, the wait is over,” begins today’s TechCrunch story about Onavo Protect. The Facebook-owned app provided data insights that led Facebook to, among other things, purchase WhatsApp for $19 billion. It also, we learn today, was banned from the Apple App Store.

Why? Here’s Apple’s own explanation: “With the latest update to our guidelines, we made it explicitly clear that apps should not collect information about which other apps are installed on a user’s device for the purposes of analytics or advertising/marketing, and must make it clear what user data will be collected and how it will be used.” Will a slap on the wrist from Apple be enough to force Facebook to clean up its act? If you’re still on the edge of your seat, you really haven’t been paying attention.

August 27, 2018 – Too Little, Too Late in Myanmar

After years of helping foment ethnic and religious tensions that led to serious human rights abuses in Myanmar, Facebook finally admits that maybe, just maybe it was “too slow” in preventing the spread of “hate and misinformation” against Rohingya Muslims in the troubled nation. It also bans 20 organizations and individuals guilty of some of the worst offenses in question. It was a U.N. call for Myanmar military leaders to be charged with genocide and other crimes against humanity that finally spurred Facebook into action. The takeaway: Don’t worry, if the use of its platform puts an entire race of people in danger, Facebook will totally hire a few extra content moderators in your country.

September 28, 2018 – Hackers “View As” Much User Data as They Can

Another day, another data breach – as Facebook admits to a cyberattack that exposed the information of nearly 50 million of its users. It’s all good, though – the company would later downgrade this number to a mere 30 million people whose accounts might have been taken over by hackers via the platform’s “View As” feature – a tool that allows users to see their own page as someone else would. “We need to do more to prevent this from happening in the first place,” says Mark Zuckerberg in a follow-up call with reporters, to which everyone responds by writing “no s**t” in their little notepads.

November 14, 2018 – Profits Over People: The Story of Facebook

Yet another riveting New York Times exposé unveils Facebook’s sustained efforts to “delay, deny, and deflect” warning signs of hate speech, bullying, and other toxic content on its platform. Focused on how “bent on growth” Mark Zuckerberg and Sheryl Sandberg really were, the story details some astonishingly cynical measures the pair took to keep Facebook’s problems under wraps, and to warp public sentiment regarding its business practices. These included minimizing the company’s role in Russia’s election meddling even after their own internal investigations showed clear signs of it, and working with a conservative PR firm to attack critics of Facebook – including billionaire George Soros – in far-right media outlets.

December 14, 2018 – Millions of Private Photos Exposed

First, it was 87 million Facebook users whose privacy was compromised. Then, another 50 million. So, it almost seems anticlimactic when it is discovered that 6.8 million more Facebook users had their private photos exposed to third-party apps. Exposing the private photos of millions of people is still a travesty, of course – but the biggest thing about today’s revelation is that, in the wake of the previous two privacy disasters, it hardly registers at all.

January 29, 2019 – Pissing Off Apple, Part II

Today, Facebook causes another problem with Apple via their Research app, which paid teenagers to let the social media giant surveil all of their phone and web activity – 1984 style. Hey, remember how on August 22 Facebook tried something like this once before, with their Onavo Protect app, and then Apple banned it? Well, this time around, Facebook just sidestepped all those pesky App Store restrictions regarding privacy and other annoying things, “purposefully avoiding TestFlight, Apple’s official beta testing system,” writes TechCrunch. That’s a big no-no at Apple, not to mention a sign of Facebook’s increasingly desperate measures to cater to the youth demographic that is leaving Facebook in droves.

March 4, 2019 – User Contact Info Stolen, Shared

We wouldn’t blame you if you forgot about this one, but it’s worth remembering. Today, Facebook is accused of failing to let its users opt out of a feature that lets other people look them up using their phone number and email address. This includes users who would never in a million years add their number to a social media platform, but did so anyway because they were led to believe it was necessary to set up the site’s much-touted two-factor authentication security option. Even while pretending to protect your security, Facebook finds a way to blow it up again.

March 13, 2019 – Dirty Data Deals Deemed Criminal

Those darn dirty data deals from December 2018 just keep coming back to haunt Facebook. (So it goes when you play fast and loose with the personal information of a 2-billion-person user base who thought they could trust you.) Besides being on the hook for what could be a multi-billion-dollar fine from the FTC, today the Times reports that Facebook is now under criminal investigation by a New York grand jury for business practices that, to put it lightly, “deceived consumers.” So, what do you do when your favorite social media platform is considered a literal criminal in the eyes of the law? Well, if you’re the average Facebook user, turns out you… don’t do squat? Sigh. What is wrong with us??

March 14, 2019 – Two Top Execs Peace Out

Mere weeks after laying out plans to become a more “privacy-focused” social network (says the social network that has privacy violations in its DNA), Facebook loses two more top executives. One of them, Chris Daniels, was in charge of the company’s 1.5 billion-user chat colossus, WhatsApp. No biggie. But the other guy, Chris Cox, was instrumental in the creation of News Feed, Facebook’s signature personalized update engine/fomenter of fake news. Reportedly, both men were unhappy with their employer’s revamped approach to personal data and encryption – a rumor Cox substantiates with an internal goodbye memo containing what could go down as one of history’s most passive-aggressive jabs: “This will be a big project and [Facebook] will need leaders who are excited to see the new direction through.” Question: if the guy described as Zuckerberg’s “right hand man” isn’t excited about where things are going at this point… who will be?

March 15, 2019 – Mass Shooter Films Himself, Goes Viral

Facebook’s bottomless cesspool of harmful content reaches staggering new depths when a New Zealand shooter films himself massacring dozens of mosque worshipers – and his horrific livestream goes viral. This tragedy comes after months of reassurances from Zuckerberg that his company is 1) “doing all we can to prevent tragedies like this from happening,” and  2) “building A.I. tools … to identify and root out most of this harmful content.” Following today’s debacle, allow us to respond to such claims with a highly nuanced, point-by-point breakdown: 1) Clearly, you’re not, and 2) We’ll believe it when we see it.

March 21, 2019 – Instagram Passwords Exposed on Company Server

In a blog post, Facebook fesses up that the supposedly encrypted passwords of “tens of thousands” of its Instagram users were stored on its servers in a “readable format.” But, fear not! The passwords were “never visible to anyone outside of Facebook.” Well, good then – we guess?

April 3, 2019 – User Records Exposed… Sigh… Again

Some enterprising cybersecurity researchers discover that hundreds of millions of Facebook user records had been exposed to the public. Comments, likes, reactions, account names – you name it, it was available for download to literally anyone with a little technological know-how. Once again, this egregious data breach was caused by Facebook’s eagerness to let third-party app developers integrate seamlessly with its platform. Once again, it’s clear that Facebook has “no way of guaranteeing the safe storage of the data of their end users if they are going to allow app developers to harvest it in mass,” said one of the researchers.

April 16, 2019 – Dirty Data Deals Even Dirtier than First Thought

Leaked company documents show that, in between heartfelt pledges to protect user data at all costs, Facebook CEO Mark Zuckerberg was aggressively handing over that very same data to other companies in private. Drawing from more than 4,000 pages of emails, webchats, spreadsheets, and other internal communications, a damning NBC News report shows that Facebook enjoyed rewarding favored partners with data access while denying it to other companies it viewed as competitors.

Facebook’s leaders also discussed selling access to the data for years, despite Zuckerberg’s adamant and repeated claims to the contrary in front of Congress. “One of the most striking threads to emerge from the documents is the way that Facebook user data was horse-traded to squeeze money… from app developers,” writes NBC News. It all prompted one Facebook employee to gift the world with the understatement of the year: “It’s sort of unethical.”

April 17, 2019 – Email Contact Lists Stolen En Masse

Facebook admits to having collected, without permission and apparently without even realizing it, the email contact lists of 1.5 million users over the last two years. But even if the company doing the collecting claims it didn’t realize it was happening, maybe we should have gotten suspicious when Facebook started forcing certain new users to enter their email passwords to verify their identities? (Note to self: Don’t share your email password with anyone – not your mom, not your friends, and definitely not a social media behemoth with a history of flagrant privacy violations!)

April 18, 2019 – Instagram Password Leak Statistic Gets a Shameful Revision

Hey, remember all those Instagram passwords that were made visible back on March 21? Well today, Facebook makes a slight tweak to its original post about that little dilemma. Turns out its initial figure of “tens of thousands” of users affected by the leak actually should have read… “millions.” You know, just a statistical difference of… oh, a few extra zeroes!! And no, we’re not suspicious at all that it took their allegedly top-notch security team weeks to notice the discrepancy (sarcasm alert). And yes, sneaking a line about the snafu into a month-old blog post seems like a super-efficient way to inform the public about it (double-sarcasm alert). After years of Facebook frantically covering its own ass, today’s fresh scandal might be most notable for the company’s shady PR response. It appears they’ve given up on even trying to be straightforward about how awful they are.

April 25, 2019 – New York Attorney General Investigates Facebook

Today, New York’s Attorney General Letitia James writes each of Facebook’s dozens of data breaches onto its own piece of scrap paper, puts all the scraps in a hat, and pulls the company’s April 17 scraping of 1.5 million user email contacts out at random. The point is, it’s unclear why this particular snafu is what finally compelled her to open an investigation into the company’s repeated “lack of respect for consumers’ information,” but we’ll take it. “It is time Facebook is held accountable for how it handles consumers’ personal information,” James said in a statement. We’ll take that, too – even though we could have told you the same thing years ago.

May 9, 2019 – Facebook Co-Founder Calls for Facebook Break-Up

Another day, another former high-level Facebook executive calling for breaking the company up (now if only someone from within the company would make the call). It doesn’t get much higher than this one, though: Mark Zuckerberg’s former partner in moving fast and breaking things, Facebook co-founder Chris Hughes. “Mark is a good, kind person,” Hughes writes in a blistering New York Times op-ed. “But I’m angry that his focus on growth led him to sacrifice security and civility for clicks.” Sounds like he’s really taking this personally – but hey, imagine if your best friend betrayed the thing you made together, and it affected the lives of more than two billion people. You’d be pretty angry, too.

June 25, 2019 – Facebook Embraces Criminality on its Platform

Illegal drug sales. Prostitution. Sex trafficking. Endangered animal trafficking. The trading of… human remains ranging from “Tibetan skull caps to babies in jars”!? Wow. The Dark Web is truly a twisted and distur… —wait, that’s Facebook that Morning Consult is talking about in today’s op-ed? And what’s that? It contains 1.5 million listings for illegal drug sales alone, which is six times more than the vile Dark Web platform, Silk Road? Well, we wish we could say we’re surprised, but by now we’re just, sadly, not. One detail, however, does scare us anew in this scathing piece: “[Zuckerberg’s] announcement at F8 that he plans to shift the platform design to focus on groups, and Facebook’s plan to launch a cryptocurrency, are downright alarming… the changes will make it harder for authorities and civil society groups to track and counter illegal activity on the platform.” Just what we need – more obstacles to holding Facebook accountable.

July 12, 2019 – Facebook Fined $5 Billion

Let it be known that on this, the twelfth day of July in the year 2019, the social media giant known as Facebook became the proud owner of the largest fine ever dished out by the Federal Trade Commission. Is $5 billion a large enough punishment for a platform that, somehow, despite egregious abuses of customer data and the perpetual fomentation of division and hatred, is still worth hundreds of billions of dollars? Not nearly – but it signals, as The New York Times today reports, “a newly aggressive stance by regulators toward the country’s most powerful technology companies.” We’ll take what we can get, and keep our fingers crossed that this is just the tip of the iceberg.

September 5, 2019 – (Bad) Luck of the Irish – Facebook Leaks 400 Million Records in Ireland

Today, we learn about Facebook’s 400 millionth data breach… er, that is, we learn about 400 million Facebook user records leaked in one data breach. Really, the statistic could go either way at this point, as yet another egregious abuse of personal information gets added to Facebook’s miles-long tally. Sadly, the sheer number of these things is starting to work in the company’s favor, as you had probably already forgotten about this one, if you ever knew about it at all. But, hundreds of millions of users is a lot of people. Fun twist: under its General Data Protection Regulation (GDPR), Europe now hands out big fines for stuff like this, so this one could come back to haunt Facebook in a major way.

October 1, 2019 – Leaked Zuckerberg Comments Point to Inner Turmoil

To be on Facebook is to risk having your personal information leaked. This is a fact that the company has proven too many times to count. But today, the tables turn, as The Verge publishes a leaked, unfiltered transcript from supreme ruler Mark Zuckerberg, speaking with his employees at two town hall meetings in July. We’re struck not so much by the content of these private admissions (though Zuck’s huffy comments about presidential hopeful Elizabeth Warren’s plans to break up big tech certainly betray an alarming level of insecurity and hostility) as by what they reveal about Facebook’s soul: they tell us that “the people inside Facebook feel under siege and uncomfortable about the world around them,” one Harvard professor told CNET. Or, in other words – this rotten company is rotting from the inside.

October 8, 2019 – 40 States Join Facebook Antitrust Probe Party

Not to be outdone by Google, Facebook has been the target of a states-sponsored antitrust investigation for some time now. But what started as a little soiree with just eight states in attendance today swells to a full-on rave, as a total of 40 states announce plans to take part in the New York-led probe. Even so, Facebook has a little more catching up to do – Google’s probe party has 50 state attorneys general on board. Pretty pathetic there, Facebook! But don’t worry – you’ll still get the brunt of the antitrust crackdown in the end.

October 12, 2019 – Fake Ad Flogs Facebook’s Fiasco of a Fact-Checking Policy

“Breaking news: Mark Zuckerberg and Facebook just endorsed Donald Trump for re-election.” Don’t believe it? Well, you shouldn’t – it’s not true. But, this fake headline sure makes it seem like it is, and that’s the whole point. Dropped onto the platform today by presidential hopeful Elizabeth Warren, the sponsored post attacks Facebook’s lax policy around political ads with known lies in them – by targeting Facebook itself with a lying ad. It’s hard to think of a cleverer way to expose the company’s transformation into, as Warren herself put it, a “disinformation-for-profit machine.”

October 23, 2019 – Zuck-Bucks Out of Luck on Capitol Hill

“Zuck-bucks” is our nickname for Facebook’s beleaguered cryptocurrency project, Libra. This terrible plan from a company that has lost America’s trust got hammered today on Capitol Hill, as enraged lawmakers lined up to give none other than Zuckerberg himself the what-for at a hearing assessing the project. Rep. Ayanna Pressley (D-Mass.) put it best when she told the CEO, “Libra is Facebook, and Facebook is you… You’ve proven we cannot trust you with our emails, with our phone numbers, so why should we trust you with our hard-earned money?” We couldn’t have put it better ourselves!

October 31, 2019 – Facebook Finally All Things to All People, Including Slave Traders

You know you’re having a bad year when the world discovers that your photo app with more than a billion monthly active users enables a thriving slave market. Today, a hard-hitting BBC News Arabic report finds that “domestic workers are being illegally bought and sold online in a booming black market” on Facebook’s Instagram app. (And it turns out that listings for slaves have also been promoted in apps “approved and provided by Google Play”.) We knew Quality Control was not these tech companies’ strong suit, but COME ON! At this point, it’s just getting surreal.

November 6, 2019 – Facebook Sued by Their Own State

Okay, here’s how you really know you’re having a bad year: You’re one of the most successful companies in an area of the country – Silicon Valley – that generates massive revenues for the state where it resides. For years, you helped make this state a shining beacon of progress and innovation and wealth, admired around the world. And today, none of it matters. Because now things have gotten so bad that Facebook’s home state of California has sued the company for “failing to comply with lawful subpoenas requesting information about its privacy practices.”

November 25, 2019 – Facebook Hit with Data Scandal No. Eleventy-Billion

We had no choice but to invent an entirely new number to account for Facebook’s unfathomably huge pile of data scandals. And it keeps growing. Today’s news involves a third-party “software development kit” called One Audience that improperly harvested Facebook users’ personal data, such as email addresses and usernames, through the outside apps that embedded it. Privacy? At this point, who cares?

December 9, 2019 – Facebook’s Fact-Checking Failures Are Bad for Your Health

Let’s add a new Facebook harm to the list: they may be making us physically sicker. It turns out the company’s abject failure to keep misinformation off their platform is actively “harming public health,” reports The Washington Post. Misleading ads on Facebook about medication meant to prevent the transmission of HIV are scaring patients away from taking the preventative drugs they need. The tech giant’s refusal to remove the content, say lesbian, gay, bisexual, and transgender advocates, has created nothing less than a “public-health crisis.”

December 19, 2019 – 267 Million Phone Numbers Exposed

Remember when the idea of Facebook exposing millions of people’s records seemed inconceivable? Did you ever think it would become routine? Well, shockingly, it has. Facebook exposed 267 million phone numbers? Meh… that’s just today’s scandal. We’re very tired.

January 9, 2020 – Facebook Can’t Stop Won’t Stop… Letting Politicians Lie in Ads

Pop quiz: You’re a global internet giant that has lost the trust of pretty much everyone because you can’t (won’t?) curb misinformation on your platform. But now you have the opportunity to earn back some of that trust by reducing the spread of misinformation on your platform by politicians. So, what do you do? Why, just the opposite, of course! Today, Facebook announces that they will do nothing to limit the targeting of voters by politicians spreading misinformation. Paired with the previous announcement that they will also be exempting political ads from third-party fact-checking, Facebook has now notched two BIG strikes against the pillars of democracy.

January 11, 2020 – Skilled Workers Embarrassed to Apply at Facebook

All this scandal and strife, as horrible and embarrassing as it is, doesn’t seem to have slowed Facebook’s growth. But here’s something that might: The New York Times reports that tech companies have lost their status as “every student’s dream workplace.” Their bad reputations are finally starting to catch up with them.

January 28, 2020 – DOJ Pulls Facebook’s Enemies into Antitrust Probe

It’s been relatively quiet on the antitrust front in recent weeks, but now comes news that one of the probe’s major players, the U.S. Department of Justice, has been setting up interviews with Facebook rivals – looking for insight into “the competitive landscape of the industry, along with their perspectives on and relationship to Facebook.” Just guessing, but we think the feds will hear something like this: “They’re ruthless and rotten to the core – break ‘em up!”

January 31, 2020 – At Last, Facebook Embraces True Purpose – ‘To Piss Off a Lot of People’

Today, from the CEO who recently said “my goal… isn’t to be liked, but to be understood,” comes a brand-new statement of total indifference to the human race. “This is the new approach, and I think it’s going to piss off a lot of people,” Mark Zuckerberg told a crowd at the Silicon Slopes Tech Summit in Utah. He was referring to Facebook’s claim that it will stand up for “free expression” and for encryption, damn the consequences for civilization. But not to worry, Mark – you’ve been pissing people off for years, starting with us.

This article first appeared on CreativeFuture.


An Open Letter Correcting Five Passages From Facebook’s ‘Community Standards’

Dear Facebook Human Resources,

We were recently partaking in some light summer reading by combing through the Community Standards section of your website – and were surprised to discover five key passages that must have been written by someone who doesn’t know your company. We wanted to bring the problem to your attention immediately.

We know that you are currently very busy dealing with data leaks and other scandals, negotiating multi-billion-dollar fines (congratulations on getting that one done!), and fending off criminal investigations, so it’s understandable that you may have missed these important details. Fortunately, we have a top-notch Communications team (albeit teeny-tiny) here at CreativeFuture that has taken the liberty of editing these passages for you. You’ll find our corrected versions of the passages below. We are happy to help you put forward a much more accurate portrayal of the Facebook “ethos” at this critical juncture.

1.) INTRODUCTION

Every day, people come to Facebook to share their stories, see the world through the eyes of others, connect with friends and causes, spread hatred and misinformation, peddle illegal goods, bully and troll other users, and corrode the foundations of democracy and civil discourse. What passes for conversation on Facebook reflects the chaos of an uncontrollable community of more than two billion people communicating across countries and cultures and in dozens of languages, posting everything from text to violent and disturbing and infringing and otherwise troubling photos and videos.

We recognize how important it is for Facebook to present ourselves as a place where people feel “empowered to communicate” (please stop asking us what that means – the PR experts said it sounded good), and we take our role in pretending like there’s a chance in hell of ever keeping abuse off our service seriously. That’s why we have developed a set of Community Standards that outline what is and is not allowed on Facebook. Our Standards apply around the world to all types of content. They’re designed to be comprehensive, because it would be really hard to formulate a separate, specific set of guidelines for every single country on, say, content that might not be considered hate speech but may still be removed for violating our bullying policies.

Sure, the goal of our Community Standards is to encourage expression and create a safe environment, but do you know how hard it would be to account for all the world’s linguistic and cultural nuances so that Facebook truly is safe for everyone? (Seriously, have you tried to figure out what passes for offensive diatribe in, like, Myanmar? Impossible. That’s why we’ve just translated these Standards that were written by Americans to whatever language it is they speak there – and it’s working out great. Don’t even worry about it. Seriously, don’t look it up – DO NOT.) Just know that we base our policies on input from our community and from experts in fields such as technology and public safety. Who are these experts, you ask? Who cares! They are EXPERTS. What else do you need to know?

2.) Privacy Violations and Image Privacy Rights
Pretending to care about privacy and madly harvesting personal information so that we can churn it into billions of dollars in ad revenue are fundamentally important values for Facebook. We work hard to convince you that we are keeping your account secure and safeguarding your personal information in order to protect you from potential physical or financial harm, and not just shopping it around to the highest bidder. Sure, we utterly failed to protect your data that one time. Oh, right, and that other time, too. Okay, yeah, there was also that one other time. And then, sure, if you want to get nitpicky, then yes, this also happened. And also this. And, fine, yes there was this. Plus, there was… hey look! It’s one of your friend’s birthdays! Better go send them one of our world-famous Facebook birthday greetings! Did you know you can send them a birthday story now? How cool is that!?

3.) Misrepresentation
Misrepresentation is the cornerstone of our community, and it starts at the top: our own fearless leader makes grand proclamations about “building a global community” even though his percentage of Facebook voting shares actually makes him a kind of dictator who doesn’t have to listen to anyone. Why do we let him get away with it? Because we believe that multibillion-dollar internet companies should not be held accountable for the statements and actions that occur on their platforms. That’s why we make a big deal out of our requirement that people connect on Facebook using the name they go by in everyday life. Even though we’ve actually removed billions of fake accounts – a problem that is only getting worse – having a “real name” policy makes it look like we’re a trustworthy friend who genuinely cares about authenticity, and who doesn’t spend millions of dollars every year to shape policies that erode your privacy rights and preserve the safe harbor laws that let us off the hook for most of the terrible, toxic things that happen on our platform every minute of every day. The truth is, our misrepresentation policies are intended to create a safe environment for Facebook, not you! People can trust and hold one another accountable on their own time.

4.) False News
Reducing the spread of false news on Facebook is a responsibility that we take seriously. We also recognize that this is a challenging and sensitive issue. We want to help people stay informed without stifling productive public discourse, but our business model depends on viral content that foments outrage and controversy. There is also a fine line between false news – such as articles that spread misinformation about vaccinations for children – and satire or opinion – such as all those hilarious “thought pieces” alleging that the government is forcing you to vaccinate your kids so they can “control them,” whatever that means.

What’s a giant corporation to do when its shareholders don’t like it when traffic dips down? It’s simple: we don’t remove false news from Facebook but instead significantly reduce its distribution by showing it lower in the News Feed. That way, we can still make lots of money from it, but you won’t have it shoved in your face all the time – just sometimes. It’s a solution that benefits everyone: the trolls can keep on publishing hate speech and misinformation; our users can turn a blind eye to the rot and decay at the heart of our platform; and our investors can still swim in blood money. Win-win-win!

5.) Intellectual Property
Facebook takes intellectual property rights seriously and believes they are important to promoting expression, creativity, and innovation in our community. You own all of the content and information you post on Facebook, and you control how it is shared through your privacy and application settings. What’s that you say? There are numerous groups on our platform with tens of thousands of members who are freely sharing illegally downloaded films and music? However, before sharing content on Facebook, please be sure you have the right to do so. We also ask that you respect other people’s copyrights, trademarks, and other legal rights… Facebook’s Terms of Service do not allow people to post content that violates someone else’s intellectual property rights, including copyright and trademark. Okay? Happy now? Look, you’re lucky we even offer that – thanks to the Digital Millennium Copyright Act, we actually don’t have to take any proactive measures to seek out pirated content, if we don’t want to… But yes, upon receipt of a report from a rights holder or an authorized representative, we are, lucky for you, obligated against our will to remove or restrict content – eventually. Like, when we have a free moment (we’re very busy, so don’t hold your breath), or when the illegal link has stopped making us boatloads of unearned money. You’re welcome.

*******
Thank you for reviewing our suggested changes to Facebook’s Community Standards. We are excited to see them implemented so that your company can show the world how much you really do value transparency and community input. (Maybe don’t tell Zuck though. Just update everything when he’s out of the office – probably in front of some legislative body somewhere in the world, apologizing yet again for what Facebook does and promising to “do better.” He won’t notice the changes. When was the last time he even looked at these anyway, right?)

Sincerely,

CreativeFuture


Google Is Monetizing Human Tragedy: Why Aren’t They Held Accountable?

My wife and I had just been visiting our daughter in her new home when we turned on the car radio. It was an interview on CBC with Andy Parker, whose daughter Alison had been murdered, live on TV, by a deranged former employee of her station back in 2015. The killer recorded and later uploaded video of Alison Parker’s death to the internet, in addition to the live broadcast of the tragedy. The radio story was about the trials of a father who was being trolled by hate-mongers and conspiracy theorists, and about his ongoing efforts to get the videos of his daughter’s murder taken down by YouTube. My heart went out to him. I understood what was driving him, what helps him get up each morning: the determination to give some meaning to his daughter’s death by trying to make the world a slightly better place. One way of doing that, in addition to pressing for better gun control, is to try to bring Google, owner of YouTube, to account for its actions, or rather, its inaction.

One wonders why a corporation of this size and influence, one with such reach and the power to influence people’s lives for the better, doesn’t get it. When Parker first learned that videos of Alison’s death were circulating on YouTube, he contacted the company and was informed that, under its “moment of death” policy, the content could be removed. There is an online form available, which states:

 

“If you’ve identified content showing your family member during moment of death or critical injury, and you wish to request removal of this content, please fill in the information below. We carefully review each request, but please note that we take public interest and newsworthiness into account when determining if content will be removed. Once you’ve submitted your report, we’ll investigate and take action as appropriate.”

 

So far, so good. But then Parker found out that he would have to flag each and every posting of the atrocity in order to get YouTube to act. Videos taken down today could be up again tomorrow, posted by people ranging from conspiracy theorists to plain vindictive sociopaths. YouTube refused to institute a blanket ban on the video, even though it had the technical means to do so. Moreover the algorithms that recommend content to viewers continue to serve up content related to the video. In frustration, Parker is now bringing a lawsuit against YouTube.

One has to ask why YouTube could not take the necessary steps to police its own content. Under pressure from copyright owners, it has instituted a system of sorts that will take down all videos of a proven copyrighted work. While the system is unsatisfactory to many, at least there is a functioning process for taking down copyright-infringing works, as YouTube is required to do under the DMCA in order to keep its safe harbour. There is other content that YouTube is required by law to block, such as child pornography and sex trafficking, and by and large it manages to do so. In addition, there are other forms of undesirable content that the platforms, YouTube among them, ought to block as a matter of common sense, but here they do a poor job. Facebook’s slow-off-the-mark response in blocking the dissemination of the filmed violence against the mosque and its worshippers in Christchurch, New Zealand, is but one example, as is the ongoing issue of hate speech and incitement to violence and terrorism as witnessed on the website 8chan.
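For readers wondering what such a “system of sorts” involves under the hood, the sketch below is a purely illustrative toy in Python: a tiny “difference hash” fingerprint plus a distance check, the general family of techniques that fingerprint-matching tools like Content ID rely on to recognize re-uploads of known footage. The function names, the miniature frame, and the match threshold are all invented for illustration; this is not YouTube’s actual implementation.

```python
# Purely illustrative sketch, NOT YouTube's actual Content ID implementation.
# A toy "difference hash" fingerprint plus a distance check, the general kind
# of technique fingerprint-matching systems use (at vastly larger scale) to
# recognize re-uploads of known footage even after re-encoding.

def dhash(pixels: list[list[int]]) -> int:
    """Fingerprint a tiny grayscale frame (8 rows x 9 columns, values 0-255).

    Each bit records whether a pixel is brighter than its right-hand
    neighbour, a comparison that tends to survive mild recompression.
    """
    bits = 0
    for row in pixels:                         # 8 rows
        for left, right in zip(row, row[1:]):  # 8 comparisons per row
            bits = (bits << 1) | (1 if left > right else 0)
    return bits                                # 64-bit fingerprint

def hamming(a: int, b: int) -> int:
    """Count the bits on which two fingerprints differ."""
    return bin(a ^ b).count("1")

# Reference fingerprint computed from a frame of the known, banned clip.
banned_frame = [[(r * 37 + c * c * 11) % 256 for c in range(9)] for r in range(8)]
BANNED_FINGERPRINT = dhash(banned_frame)

# An incoming upload: the same frame, slightly brightened by re-encoding.
upload_frame = [[min(255, v + 4) for v in row] for row in banned_frame]

# A small Hamming distance means the frames are effectively the same: block it.
if hamming(dhash(upload_frame), BANNED_FINGERPRINT) <= 5:
    print("Match with banned footage: block the upload before it goes live.")
```

The sketch’s only point is that once a clip has been identified and fingerprinted, recognizing new copies of it is routine engineering, which is why the burden of flagging each re-upload should not have to fall on a grieving family.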

What really upsets Mr. Parker is that not only does YouTube require him to constantly police its site to identify postings of his daughter’s death (just as copyright owners have to spend time notifying YouTube of infringements, although some of this is automated through Content ID), but the clicks the video attracts also enable YouTube to monetize the hateful content. In effect, YouTube is monetizing misery. Moreover, each time a takedown request is submitted to YouTube, the requester must cite the reason for the requested removal. Should a bereaved father have to do this on a daily basis? Parker, understandably, refuses to contemplate ever watching the video and has enlisted the support of others who have been in a similar position to identify the postings and request their removal. (I have not watched it, nor will I ever do so.)

While YouTube’s own Terms of Service indicate it will remove videos showing a moment of death (subject to the onerous and repetitive procedures described above), Parker has found that one of the more effective tools for getting the footage removed is copyright. The footage of Alison’s murder on YouTube comes from two sources: the broadcast footage of the atrocity aired on the day of the murder, and the videocam footage shot by the killer. In the case of the former, although the station, WDBJ in Roanoke, Va., tried to limit broadcast of the footage, it has spread on the internet. Nonetheless, WDBJ owns the copyright in that footage and has assigned its digital rights to Parker. As the rights-holder, Parker asserts his DMCA right to a takedown, and YouTube will comply, although, as noted, it is a thankless and repetitive task to have to continually flag offending content. With regard to the footage shot by the killer, the copyright strategy doesn’t work, yet YouTube is unwilling to enforce its own policies on highly offensive content that has been brought to its attention multiple times. There is really something wrong here.

In the face of this obduracy, or just plain shirking of normal moral responsibility, as happened in the case of the mosque shooting in Christchurch, governments worldwide have started to re-examine very carefully the safe harbour protection that the platforms hide behind. In the US, the shield of choice, Section 230 of the Communications Decency Act (1996), has come under close scrutiny for the abuses it has allowed by giving platforms carte blanche to host just about any content. Other countries, notably Australia, have taken robust action in the aftermath of Christchurch. In April, Australia passed a law holding senior executives of internet platforms personally criminally responsible (in addition to the corporation itself being liable and subject to massive fines) if the platform fails to act expeditiously to take down footage depicting “abhorrent violent content” once so directed by the authorities. The definition of such content includes “videos depicting terrorist acts, murders, attempted murders, torture, rape or kidnap”.

Google claims that it is using machine technology to catch problematic content but is putting the onus on government to make it clearer to tech companies where the boundaries lie, for example, what constitutes unacceptable online hate speech and violent content. Google Canada’s head of government relations and public affairs is quoted as stating, in testimony to the House of Commons Justice Committee:

 

“I think the first step is to actually have a clear idea of what the boundaries are for terms like hate speech and violent extremist content… Because we’re still as a company interpreting and trying to define our perception of what society finds acceptable and what you as legislators and government find acceptable. The first step for us would rather be what is a clear definition, so we can act upon it.”

 

That sounds to me like passing the buck. While there may be grey areas when it comes to what constitutes illegal hate speech, surely that excuse doesn’t fly when we look at Alison Parker’s case. If Google wants to know what constitutes unacceptable violent content, it need only look at the Australian legislation. No responsible broadcaster would disseminate that kind of material. Why should YouTube, Facebook or other platforms be able to get away with it? The videos that Andy Parker is trying to get permanently blocked on YouTube clearly violate the company’s Terms of Service, apart from anything else, and clearly constitute unacceptable violent content. Yet he is getting nothing but the runaround.

As noted above, Parker is taking legal action against Google, with the assistance of the Georgetown Law Civil Rights Clinic. He is also taking aim at Section 230, because more recently Google has cited this provision as the reason why it is not legally required to remove the content. Maybe this, and the publicity that he is generating by speaking out, will prompt some action on the part of this multi-billion-dollar company. Perhaps, but I am not holding my breath. Above all, it seems that the most important consideration for Google is maximizing profit regardless of the human cost. Google performs an important role for consumers with its various businesses, above all online search and access to content. It has been given a largely free ride by government with respect to regulation and oversight, with the results that we now see. The time for some accountability is long overdue.

© Hugh Stephens, 2019. All Rights Reserved.

This article was first published in Hugh Stephens Blog.
