Is Social Media a Public Health Issue?
Federal, state and local governments are now trying to determine the answer. If that answer is yes, regulating it as such is no simple task.
U.S. Senators on the powerful Judiciary Committee harangued the CEOs of Meta (Facebook/Instagram/WhatsApp), TikTok, X (née Twitter), Snap(chat) and Discord for almost four hours last Wednesday in a hearing on child safety. Pretty much nothing came of it other than some great clips, but there is a clear feeling that Congress really wants to pass laws regulating social media companies.
They probably will soon, too. Last year a bill was introduced in the Senate that would require age verification, restrict access to users 13 and over, and require parental consent for profile creation for those aged 13-17. It hasn’t moved, mainly because there are significant outstanding questions about how any of it could even be enforced, among other challenges. As such, Congress is probably going to — as usual — let the states figure it out by experimenting. Meanwhile, the states already have their lab coats on.
There are, of course, numerous issues at play: protecting children from sexual predation, the teenage mental health crisis, drug trafficking, misinformation, and the perpetual political football of “free speech.” Guess which one the politicians picked to start with. Was it one of the issues linked to kids literally committing suicide?
Of course it wasn’t. The “free speech absolutism” versus “regulating hate speech” problem was the first major arena for regulating social media. Between 2021 and 2022, over 100 bills were filed across dozens of states attempting to control how social media companies moderate content. The topic is worth a post of its own, but there’s a key takeaway I wanted to emphasize from Jeff Kosseff, a cybersecurity law professor at the U.S. Naval Academy:
“You cannot have a state-by-state internet,” Kosseff said. “When you step back and look at the possibility of having 50 different state laws on content moderation — some of which might differ or might conflict — that becomes a complete disaster.”
The states are pushing on ahead. And when it comes to the more pressing issue of youth mental health, New York and Florida are currently leading the pack.
At least in terms of who’s going first, anyway.
Florida’s House of Representatives passed a bill not terribly dissimilar from the one in the U.S. Senate (restricting access to minors, parental consent, etc. etc. etc.), but it still has to pass the Florida Senate. It probably will, but there are good questions remaining about how it can be enforced. More on that in a bit, though.
On the same day that the Florida bill passed and the tech CEOs were being told that they have “blood on [their] hands” in D.C., New York City officially designated social media a public health hazard, and Mayor Eric Adams compared “companies like TikTok, YouTube [and] Facebook” to “tobacco and guns,” specifically identifying “addictive and dangerous features” as the reason why. Adams called for a “food regulation model,” built on encouraging healthy habits through education and warnings, like nutrition labels, as opposed to top-down regulations.
“While we are not very good at it, many people manage to have somewhat healthy food intake. Social media is the same,” he said. “We just need knowledge and incentives.”
It’s a bit of an odd thing to suggest a strategy that you admittedly know doesn’t work. If the “food regulation strategy” led to healthy citizens, our rates of obesity, hypertension, diabetes and more would be a lot lower. Nutrition labels don’t really count for shit when you have food deserts, government subsidies that tip the scales of affordability to processed foods (leaving healthier, organic foods to be far too expensive for many people), and a global food supply chain that prioritizes profit over safety.
Plus, it’s been well known in public health spaces for a while now that we have too many warning labels. So many, in fact, that people basically ignore them altogether.
So let’s be honest here. Is a “warning label” model really going to help with social media addiction? Or is it going to be yet another stupid, pointless pop-up that we come to reflexively close out of, the habit becoming so immediately and strongly conditioned that we soon come to not even consciously experience it?
We can’t depend on these companies to regulate themselves, either. Multiple Meta whistleblowers have come forward over the years, bringing receipts on how the people making Facebook, Instagram and WhatsApp have been intentionally designing their products to be addictive, regardless of the impact on the human being on the other end. One such whistleblower, who worked at Meta during two different stretches over a decade, testified that “many of the measures he had implemented during his first stint at Meta, such as tools designed to make it easier to report problems, had gone when he returned to the company as a consultant in 2019.” They’re quite literally making the problems worse, not better.
In a recent article in The Atlantic, Adrienne LaFrance summarizes perfectly why we can’t trust a person like Mark Zuckerberg, who once called people who signed up for Facebook “dumb fucks”:
Facebook (now Meta) has become an avatar of all that is wrong with Silicon Valley. Its self-interested role in spreading global disinformation is an ongoing crisis. Recall, too, the company’s secret mood-manipulation experiment in 2012, which deliberately tinkered with what users saw in their News Feed in order to measure how Facebook could influence people’s emotional states without their knowledge. Or its participation in inciting genocide in Myanmar in 2017. Or its use as a clubhouse for planning and executing the January 6, 2021, insurrection. (In Facebook’s early days, Zuckerberg listed “revolutions” among his interests. This was around the time that he had a business card printed with I’M CEO, BITCH.)
It’s easy (and justifiable) to use Facebook and Instagram as a punching bag, but we have to keep in mind that TikTok has been found to aggressively push eating disorder and suicide content to teenagers within minutes of ‘liking’ just one post, and sexual exploitation of minors is rampant on Snapchat. As far as trusting these platforms to regulate themselves goes, they aren’t any better than Meta. TikTok has still been unable to protect user data, and Elon Musk’s wild west approach to X has facilitated a jarring rise in antisemitism and disinformation.
Meanwhile, the company behind Snapchat has broken ranks to become the first tech company to endorse the “Kids Online Safety Act,” a Senate bill that would “direct platforms to prevent the recommendation of harmful content to children, like posts on eating disorders or suicide.” In other words, it’s the first company to admit that it can’t be trusted to hold itself to account, and that government oversight is needed. That Meta whistleblower I mentioned above feels the same, saying that “Regulators are our last hope at peace.”
Actual regulation, however, has problems of its own. Consider age restrictions: how do you enforce them? By allowing Facebook to scan a copy of your government-issued ID? By giving TikTok your Social Security number?
Curiously, a great answer may have come from PornHub, which has been credibly accused of endangering and exploiting both children and adults itself:
The best and most effective solution for protecting children and adults alike is to identify users at the source: by their device, or account on the device, and allow access to age-restricted materials and websites based on that identification. This means users would only get verified once, through their operating system, not on each age-restricted site. This dramatically reduces privacy risks and creates a very simple process for regulators to enforce.
It’s actually a pretty great idea. Apple already ends up with a lot of that data anyway, for example, and one of their foundational principles is customer privacy. They believe in that so forcefully that they even denied the FBI a “backdoor” to the iPhone.
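To make that device-level idea a little more concrete, here’s a minimal sketch of what a “verify once at the operating system level” flow might look like. Everything in it is hypothetical: the DeviceAgeGate interface, the requestAgeBracket call, and the age brackets are illustrative stand-ins, not any real Apple, Google, or regulatory API. The point is just that a site would receive a coarse, signed age bracket rather than collecting ID documents itself.

```typescript
// Hypothetical sketch only: these names are illustrative, not a real OS or web API.

type AgeBracket = "under13" | "13to17" | "18plus";

interface AgeAttestation {
  bracket: AgeBracket; // coarse bracket only; never a birthdate or an ID scan
  issuedAt: number;    // when the OS produced this attestation (epoch ms)
  signature: string;   // signed by the OS vendor so a site can verify its origin
}

// The operating system (or the account on the device) verifies age once, during
// setup, then answers queries from apps and sites with a signed bracket.
interface DeviceAgeGate {
  requestAgeBracket(requestingSite: string): Promise<AgeAttestation>;
}

// A social platform checks the bracket instead of collecting IDs itself.
async function checkSignup(gate: DeviceAgeGate, site: string) {
  const attestation = await gate.requestAgeBracket(site);
  return {
    allowed: attestation.bracket !== "under13",              // e.g. an under-13 ban
    needsParentalConsent: attestation.bracket === "13to17",  // e.g. a 13-17 consent rule
  };
}
```

The appeal, at least on paper, is that the sensitive identity check happens once, with the OS vendor, and every downstream app or site only ever sees the bracket.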
But this solution only goes so far. Beyond age verification, you still have the problem of the content itself. Meta already has parental controls, for example, but a 2020 study found that most parents don’t even use them, even when they know about them.
“Parents also struggled to effectively enforce limits when many were considered ‘addicted’ to social media/phones themselves by their children,” the researchers wrote.
So there you have it. We have companies that value addiction to their product at the expense of public safety, regulators who barely understand the technology they’re trying to regulate, and a product that addicts and confuses and stupefies the people who use it, which includes just about everyone with an internet connection.
Something’s gotta be done, and I do think the public health model is the way to go, but this problem is fundamentally different from cigarettes or lead paint or asbestos. It involves a product that distorts the way people even perceive reality.
So one thing’s clear: we have no idea what’s going on, no idea how to fix it, and no idea what we’re even doing in the first place.