This summer, best-selling journalist Steven Brill, co-founder of NewsGuard, a company that tracks online misinformation and has given him a front-row view of the rise of fake news, published a profound and startling analysis of the phenomenon. Its stark title is “The Death of Truth,” and its subtitle clarifies that this is “[h]ow Social Media and the Internet gave snake oil salesmen and demagogues the weapons they needed to destroy trust and polarize the world — and what we can do about it.”
Social media started life 25 years ago with a pristine reputation. I remember how excited former Vice President Al Gore used to get about the internet bringing everyone access to worldwide knowledge. It was a technological miracle that would put humanity’s accumulated understanding of what was real and how things worked into all of our hands, which would inevitably lead to good things.
It is 2024, and we know better now, having experienced terrible things from internet technology: the insurrection of January 6, 2021, which grew out of internet lies that Joe Biden had lost the election; thousands of unnecessary deaths from Covid among people who believed the internet lies of MAGA (Make America Great Again, Trump’s movement) that vaccines could kill you; and hurricane victims dying in floods because internet conspiracy theories told them not to evacuate, claiming the federal government was coming to seize their homes.
In “The Death of Truth,” Steven Brill traces the abuse back to the legislators who set the internet’s initial ground rules without realizing its potential for harm. Section 230 of the Communications Decency Act (part of the 1996 Telecommunications Act) was written to regulate telecommunication technology that, at the time, was limited to a few platforms like America Online (AOL), Prodigy, and CompuServe. The provision shielded them from liability, preventing them from being sued for whatever their users posted on their platforms.
In other words, the fox was left in charge of the henhouse.
How did conspiracy theories and hateful disinformation become so popular? Just as it is the nature of the fox to eat chickens, it is the nature of social media platforms to seek profit. Gore’s worldwide spread of truth and knowledge is a nice idea, but platforms like Facebook and Google, Twitter and Instagram are built to garner the greatest possible advertising revenue.
“Think of the story of the death of truth,” writes Brill, “as the story of two pernicious algorithms. One, unleashed by Section 230, allowed the social media platforms to recommend the content, however divisive or false, most likely to attract attention. The second set of algorithms are operated by what have become multibillion-dollar businesses you probably have never heard of, known as tech ad companies. They’re the ones that reward content like the Paul Pelosi libel [the fake narrative that the attack on the husband of Speaker of the House Nancy Pelosi was perpetrated not by an outsider but by a male lover], regardless of whether it is true…”
Further, he notes, and this is a key point: “The platform’s business model is dependent on the volume and velocity of the inflammatory content being offered. It is not a side issue. It is the driving metric. The more engaging the content, the more eyeballs. The more eyeballs, the more advertising revenue.” (emphasis added)
Brill notes that many of the rioters convicted for insurrection on January 6, 2021, were educated and employed but willing to abandon their trust in traditional and institutional knowledge when conspiracy theories appealed to their fears and anxieties: “Trump followers were not a coalition united by policy positions. They were a coalition of the scared and the pissed off.”
Predominantly white, middle-aged, and male, they were “scared and pissed off” by an America becoming increasingly diverse, women-led, and multiracial, a shift that MAGA misinformation kept telling them was a zero-sum game in which every gain by people of color, immigrants, or women meant a loss of power for them.
British poet and novelist Thomas Hardy wrote, “If way to the Better there be, it exacts a full look at the Worst.” The problem with “The Death of Truth” is that Steven Brill so weightily and persuasively details the worst of social media’s destructive capabilities that it becomes a heavy lift to believe there might be any adequate antidotes.
For one thing, it is clear that we cannot ask social media platforms to police themselves. Twitter (now X), Facebook (now Meta), and the rest maintain large teams of moderators that block some distasteful posts and tweets, but these teams serve mainly to make the companies seem responsible. If a fox stops eating hens, it might starve; if the CEOs of media platforms moderate their messaging too strictly, their income will plunge, their stockholders will fall away, and their boards of directors will swiftly fire them.
That is why Steven Brill’s own fact-finding company, NewsGuard, which offers individuals, businesses, and governments data vetted by standard journalistic practices, failed to make inroads with social media: “We were confident that these principles and practices would be widely appreciated and accepted […] What we did not understand was that misinformation and disinformation was their business.”
There are, nonetheless, internet services that, like NewsGuard, are set up to provide accurate information to their customers. OpenAI, for example, was founded with the stated mission “to ensure that artificial general intelligence benefits all of humanity,” and the ChatGPT services built on its models maintain internal checks intended to prevent both deliberate false narratives and the “hallucinations” that AI sometimes invents out of whole cloth.
My Microsoft Copilot (built on OpenAI’s GPT models) provides footnotes so I can confirm every answer. Moreover, my direct political questions — how many people voted in the 2020 election, and what is the status of the 2024 Senate race in Michigan? — elicited the polite demurral that “I’m afraid talking about elections is out of bounds for me! What else is on your mind?”
So I asked Copilot whether FEMA (the Federal Emergency Management Agency) was responding adequately to hurricane victims in Florida. It came up with statistics confirming the agency’s disaster relief contributions, thus undercutting Trump/MAGA social media lies that FEMA was diverting flood relief funds to illegal aliens.
Indeed, there are AI-based organizations that supplement their algorithms by incorporating the values and even the emotions of real human beings. Quantellia, for example, is a decision-intelligence company that brainstorms with business CEOs and their teams, or with governmental agencies and non-profits, to build beneficent policies, ethics, and even heartfelt emotions into their business plans. If a company genuinely wants to practice sustainability, or has a mission to benefit its community while still making a profit, environmental and other humanistic outcomes can be worked into the model.
Can social media be controlled?
For-profit social media platforms continue, nonetheless, to wreak havoc. Is it impossible to effectively control them? It seems to me that external regulation, on both national and international levels, is our best hope for significant Internet reform.
There are several options.
Legislative solutions:
Unfortunately, efforts to get the American legislature to curb dangerously incendiary material have gone nowhere. Even in the liberal state of California, Senate Bill 1047, which would have required AI companies to police themselves, was opposed not only by powerful Silicon Valley interests but also by Democrats like Nancy Pelosi, who felt it would stifle AI innovation. On September 29, 2024, California’s Democratic Governor Gavin Newsom vetoed it.
Another legislative solution Brill points to is stronger oversight of campaign finance. H.R. 7012 (116th Congress) would have amended the Federal Election Campaign Act of 1971 to keep online platforms from “targeting political advertisements based on online behavioral data or demographic characteristics.”
This bill also died before making it to a vote.
In his concluding chapter, “Resurrecting Truth – What You Can Do,” Brill calls for the U.S. Federal Trade Commission (FTC), which promotes U.S. consumer protection, to create stricter controls on the kind of misinformation that foments insurrection and violence. He proposes that “the FTC use independent auditors to review platforms’ contents on a regular basis.”
There is some hope that American courts might step into the breach. New York Times writer Julia Angwin thinks that the courts may be moving our way: “In August, the U.S. Court of Appeals for the Third Circuit ruled that TikTok was not immune to a legal challenge regarding its algorithm, which disseminated dangerous videos promoting a ‘blackout challenge’ showing people strangling themselves until they passed out.” The issue is likely to go all the way to the Supreme Court.
Online literacy and internet education solutions:
Brill also suggests better internet education, including teaching online literacy to consumers. This gets at the source of the problem, which, when all is said and done, lies in ourselves whenever we let misinformation short-circuit our critical thinking.
We can be taught to think critically about the internet by mastering the nuts and bolts of winnowing unreliable sources from reliable ones. My state of Michigan is making progress in this area: our Attorney General, Dana Nessel, has posted guidance on her office’s website for recognizing faked content in election robocalls, like the one that used an imitation of Joe Biden’s voice in a phone message crafted to deter people from voting for him.
Disinformation in Europe
On the international level, Brill sees America’s politically dangerous media disinformation as our “Shameful Export.”
“Brexit,” the British vote to leave the European Union, was driven by people believing “quite scary stuff about immigration, and especially about Turkey,” all of it circulating on social media. British reporter Carole Cadwalladr concludes that “[t]he entire referendum took place in darkness because it took place on Facebook,” and that what happens on Facebook vanishes as soon as you read it.
In “The Battle Over Who Controls the Internet,” New York Times reporters Jack Nicas and Paul Mozur announce the good news that a European Union law requires tech companies to police their platforms: “Europe’s new law has already set the stage for a clash with Elon Musk, the owner of X, formerly known as Twitter. Mr. Musk withdrew from a voluntary code of conduct this year but must comply with the new law — at least within the European Union’s market of nearly 450 million people.”
“Forced to Change, Tech Companies Bow to Global Onset of Rules,” asserted New York Times reporters Adam Satariano and David McCabe: “For decades, Apple, Amazon, Google, Microsoft and Meta barreled forward with few rules and limits. As their power, riches, and reach grew, a groundswell of regulatory activity, lawmaking, and legal cases sprang up against them in Europe, the United States, China, India, Canada, South Korea and Australia. Now, the global tipping point for reining in the largest tech companies has finally tipped.” In other words, governments are at last exerting intelligent oversight over profit-based tech companies on both national and worldwide bases.
Let us hope that the United Nations will come up with social media regulations soon. The UN is deeply involved in protecting human rights online, including freedom of expression and access to information, and UNESCO is particularly active in this area, having released guidelines for regulating digital platforms with a focus on human rights and working on a global framework for internet governance.
Here comes the election! What will happen now?
It is generally assumed that in autocratically governed countries, people let their leaders think for them, while in democracies we think for ourselves. We have seen, however, that the powerfully emotional suction of social media’s algorithms has swept many Americans into Trump’s authoritarian rabbit hole, to the extent that most polls report that he and Vice President Kamala Harris are in a “statistical tie” within the margin of error.
Really? Are enough Americans so gullible about misinformation that they will actually vote for a dictator?
I take hope from my doubts about who is actually being polled, and whether the samples are too small to capture newer voters.
Our canvassers (Democrats going door to door) have encountered many voters who are leaning toward Harris but “don’t want to tell anyone,” not to mention the hundreds of people we are registering through our phone banks who do not show up in the polls because they have never appeared on a voter roll.
Then there are the thousands of women refusing to “go back” to the time before they gained reproductive rights, not to mention thousands more young people who have realized that Trump intends to abolish contraception.
We Americans have always resisted letting other people tell us what to think, and we are stubbornly pragmatic about what is in our own best interests to do.
Yes, many of us respond enthusiastically to a “strong man” who voices our economic anger and racial distrust, but my bet is that those votes will be outnumbered by those of us who insist on our good old “I can do it myself” individualism and on a basic, down-to-earth American pragmatism.
We’ll find out on November 5!
Editor’s Note: The opinions expressed here by the authors are their own, not those of Impakter.com — Cover Photo Credit: Tavis Beck.