In a Senate Judiciary Committee hearing yesterday, there was a striking change of scenery: rather than grilling the floating heads of Big Tech CEOs, senators questioned policy leads from Twitter, Facebook, and YouTube about the role algorithms play on their respective platforms. The panel also heard from two independent experts in the field, and the results were less theatrical and perhaps more substantive.
Both Democrats and Republicans expressed concern over how algorithms shape discourse on social platforms and how they can drive users toward ever more extreme content. “Algorithms have great potential for good,” said Sen. Ben Sasse (R-Neb.). “They can also be misused, and we the American people need to be reflective and thoughtful about that.”
The Facebook, YouTube, and Twitter execs all emphasized how their companies’ algorithms can help achieve shared goals, such as finding and removing extremist content, though each admitted that de-radicalizing social media remains a work in progress.
The first of those experts was Joan Donovan, director of the Technology and Social Change Project at Harvard University. She argued that the central problem with social media is that it’s built to reward human interaction, something bad actors can and often do use to their advantage. “Misinformation at scale is a feature of social media, not a bug,” she said. “Social media products amplify novel and outrageous statements to millions of people faster than timely, local, relevant, and accurate information can reach them.”
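Donovan’s point is easiest to see in miniature. Below is a deliberately simplified, hypothetical feed ranker in Python, not any platform’s actual code: it scores posts purely on predicted interaction, so an outrageous false claim with heavy engagement outranks an accurate local report every time.

```python
# Hypothetical illustration of engagement-driven ranking -- not any
# platform's actual algorithm. Posts are scored purely on interaction,
# so outrage that drives clicks outranks accuracy.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    is_accurate: bool  # known to us, invisible to the ranker

def engagement_score(post: Post) -> float:
    # Shares and comments are weighted above likes because they push
    # content to new audiences and keep threads active.
    return post.likes + 3 * post.shares + 2 * post.comments

feed = [
    Post("Outrageous false claim", likes=900, shares=400, comments=700, is_accurate=False),
    Post("Accurate local news report", likes=120, shares=15, comments=30, is_accurate=True),
]

# The ranker never consults is_accurate: misinformation that drives
# interaction rises to the top.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>6.0f}  {post.text}")
```

Note that the ranker never consults accuracy at all; that omission, not a malfunction, is what “a feature, not a bug” refers to.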
One of Donovan’s proposed solutions sounded a lot like community cable TV stations. “We should begin by creating public interest obligations for social media timelines and newsfeeds, requiring companies to curate timely, local, relevant, and accurate information.” She also suggested that the platforms beef up their content moderation practices.
The other panel expert was Tristan Harris, president of the Center for Humane Technology and a former designer at Google. For years, Harris has been vocal about the perils of algorithmically driven media, and his opening remarks didn’t stray from that view. “We are now sitting through the results of 10 years of this psychologically deranging process that have warped our national communications and fragmented the Overton window and the shared reality we need as a nation to coordinate to deal with our real problems.”
One of Harris’ proposed solutions is to subject social media companies to the same regulations that university researchers face when they do psychologically manipulative experiments. “If you compare side-by-side the restrictions in an IRB study in a psychology lab at a university when you experiment on 14 people—you’ve got to file an IRB review. Facebook, Twitter, YouTube, TikTok are on a regular, daily basis tinkering with the brain implant of 3 billion people’s daily thoughts with no oversight.
Apples and oranges
“What we need to do is compare what are the regulations and protections we apply in one domain and we’re not applying in a different domain,” Harris said.
The hearing quickly shifted to Section 230 of the Communications Decency Act, the reform of which has been mooted by members of both parties. One bill introduced last year would require platforms with more than 10 million users to obtain permission from people before serving them algorithmically tailored content. If the companies fail to do so, they would not receive protection under Section 230. Another bill would similarly revoke Section 230 immunity if algorithms spread misinformation that leads to violence.
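Mechanically, the first of those bills describes a consent gate: before a covered platform applies personalized ranking, it must check for an explicit opt-in and otherwise fall back to a non-tailored feed. A minimal sketch of that logic, with every name and field invented for illustration, might look like this:

```python
# Hypothetical sketch of the opt-in requirement described in the first
# bill: platforms above the user threshold must obtain permission before
# serving an algorithmically tailored feed. All names here are invented.

USER_THRESHOLD = 10_000_000  # the bill covers platforms above this size

def build_feed(posts, user, platform_user_count):
    covered = platform_user_count > USER_THRESHOLD
    if covered and not user.get("opted_in_to_personalization", False):
        # No consent on a covered platform: serve a plain
        # reverse-chronological feed instead of a tailored one.
        return sorted(posts, key=lambda p: p["timestamp"], reverse=True)
    # Consent given (or platform too small to be covered): the platform
    # may rank by predicted engagement without losing Section 230 cover.
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

# Example: on a covered platform, a user who has not opted in gets
# the chronological feed.
posts = [
    {"id": 1, "timestamp": 100, "predicted_engagement": 0.9},
    {"id": 2, "timestamp": 200, "predicted_engagement": 0.1},
]
print(build_feed(posts, {"opted_in_to_personalization": False}, 50_000_000))
```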
The panel concluded by exploring alternative business models for social media platforms, ones that wouldn’t be as reliant on algorithms to drive engagement and ad views. Harris suggested that electric utilities might offer one such model. Rather than encouraging people to leave their lights on all the time, which would earn utilities more money but work against the societywide goal of energy conservation, regulations discourage flagrant overuse, and a portion of the profits is put into funds that ensure the sustainability of the electric grid.
“Imagine the technology companies, which today profit from an infinite amount of engagement, only made money from a small portion of that,” he said. “And the rest basically was taxed to put into a regenerative public interest fund that funded things like the fourth estate, fact checkers, researchers, public interest technologists.”
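As a back-of-the-envelope illustration of that idea (the revenue figure and the split below are invented, not from the hearing), the model amounts to letting a company keep a fraction of its engagement revenue and levying the rest into a fund:

```python
# Back-of-the-envelope sketch of Harris' utility-style proposal. The
# revenue figure and the 20/80 split are invented for illustration only.

engagement_revenue = 50_000_000_000      # hypothetical: $50B/year from engagement
retained_fraction = 0.20                 # company keeps a "small portion"
levied_fraction = 1 - retained_fraction  # remainder goes to the public fund

company_keeps = engagement_revenue * retained_fraction
public_interest_fund = engagement_revenue * levied_fraction

print(f"Company retains:      ${company_keeps:,.0f}")
print(f"Public interest fund: ${public_interest_fund:,.0f}")
# Per Harris' testimony, the fund would support the fourth estate,
# fact checkers, researchers, and public interest technologists.
```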