Now it's America's turn to make sense of Internet content: Supreme Court hears arguments in two critical social media cases
By Kieren McCarthy
The European Union had its say first with the Digital Services Act, which came into force this month. The UK came next with the Online Safety Act, which is now law but won’t come into force for some time. And now it is the United States’ turn to figure out what to do about problematic online content.
On 26 February, the Supreme Court heard arguments in two cases about online platforms and how much control they can or should exert over what appears on their sites. The decision, when it comes (possibly in June), is likely to define how content moderation in the US is approached for the foreseeable future.
How we deal with information that isn’t illegal but is “harmful” (UK) or causes “societal harm” (EU) is one of the biggest challenges of the digital age and each jurisdiction is dealing with it in its own way, reflecting cultural and philosophical norms.
The EU has legislated and threatened fines if people don’t follow the rules; the UK has gone for the very British compromise of codes of conduct to be produced and tweaked by a regulator (with a back-up stick of secondary legislation); and America has turned to its Constitution, with content moderation viewed through the lens of free speech - the famed First Amendment.
Each approach will have its own effect on billions of people but America’s decision will set the parameters of how the world deals with online unpleasantness: misinformation, threats, racism, misogyny, terrorism, conspiracy theories, political lies, bullying, and so on.
As with everything in America at the moment, the two cases before the Supreme Court are the result of partisan politics stoking culture wars. Both Florida and Texas (heavily Republican states) passed laws banning online platforms from moderating content, in large part because of claims that those platforms - most of them based in (heavily Democratic) California - were “censoring” conservative voices.
Florida summary: https://www.scotusblog.com/case-files/cases/moody-v-netchoice-llc/
Texas summary: https://www.scotusblog.com/case-files/cases/netchoice-llc-v-paxton/
Florida hearing: https://www.supremecourt.gov/oral_arguments/audio/2023/22-277
Texas hearing: https://www.supremecourt.gov/oral_arguments/audio/2023/22-555
The platforms argued that they were simply removing false, illegal or damaging information - most famously banning former President Trump over his claims that the 2020 election was stolen, but also people like Andrew Tate, thrown off for hate speech, and a range of less notorious people, including Republican candidates and lawmakers who have posted or amplified conspiracy theories and false or misleading information.
The two states passed similar laws obliging the platforms to let people post whatever they want, and in response - through industry group NetChoice - the platforms sued, arguing those laws were unconstitutional. Both sides claim First Amendment protections: the states say the Constitution defends people’s right to say what they want and anything that prevents that is censorship; the platforms say their rights will be infringed if they are compelled to publish content that breaks the rules they have set for their services.
Fundamental question
Which has led to the inevitable question: what is an online platform, anyway?
Is it a newspaper, where editors can exert editorial control over the content that appears? Or is it a telephone line, where the telephone company has no right to restrict what people say to each other? Is it like TV, with a regulator enforcing behaviour? Or are online platforms more like shopping centres, which are privately owned but open to the public (where people can say what they like)?
The answer of course is that they are not like any of them, which is why we as a society are still struggling to deal with this issue, and why what the Supreme Court ultimately decides is going to define how we view not just this current issue but the next technological leap when it comes, and everything in between.
It is not going to be an easy task: as legal scholars have been gleefully pointing out (on online platforms), even those presenting the arguments to the nine Justices have repeatedly contradicted what they have said in other settings.
As just one example: Florida passed a law that requires social media companies to prevent harmful content from being seen by children. Yet in this case, the state argues that those same companies have no right to restrict any form of content at all, regardless of the viewer’s age.
There is also the issue of how broad the laws are: Florida’s would appear to apply to any online platform, even if they don’t actually publish content. So if you buy something on Amazon, or order a cab from Uber, you could be met with an explanation for why the Earth is flat (or much worse) if the law is upheld.
And then there is the conflict with another heavily argued-over piece of legislation - Section 230 of the Communications Decency Act - which shields online platforms from legal liability over how they choose to moderate their sites.
That legislation, in place since 1996, recognises in law that online platforms are expected to moderate content and that that moderation will on occasion be controversial. At the same time it gives almost blanket legal protection to online platforms for the content they do host (so long as they are responsive to legal requirements to take content down), which makes it hard to impose any kind of accountability or universal rules on how those platforms choose to moderate their content.
Everyone is both happy and unhappy with the law, depending on the specific case in front of them.
Which direction is this heading?
So which way will the Supreme Court swing? Based on what happened in court last month, the online army of armchair Justices have all reached roughly the same conclusion: the court will avoid giving a straight answer.
The specific legal question before the Supreme Court at this stage is whether the injunctions currently blocking the laws, put in place because of the lawsuits, should be lifted. It seems likely that the Justices will stick to that narrow question as a way to avoid answering the bigger one.
But what the court does end up saying will likely decide the course of those lawsuits, so when the issue does, inevitably, come back around to the Supreme Court, it will be facing the results of its own steer, and unable to duck the question a second time.
In the meantime, we will start to see the results and impact of the EU’s and UK’s own legislative efforts to tackle online content. How optimistic are you feeling about it?