Children’s safety online remains a top priority for government agencies in the United States, and school officials across the country are increasingly instituting social media policies for students. As the stakes rise, with wrongful death lawsuits now pending against social media companies, federal and state legislators are working on additional measures to protect children online.
The issue of children’s safety online continues to garner bipartisan support in Congress. At the same time, however, laws related to protecting children in the digital sphere raise significant questions about civil liberties and overregulation.
COPPA: An Old Law Losing Steam
Earlier this year, we covered recent legislation circulating in Congress aimed at addressing the rising rates of depression, suicide, and other mental health problems among young people that many experts have attributed to social media. Those proposed laws have yet to pass or even gain much traction, but one major federal law governing children’s use of the internet has been on the books for decades. Lately, though, that law seems to be losing its teeth against the growing incentives for kids to use social media.
In 1998, the Children’s Online Privacy Protection Act (COPPA) was signed into law. The act applies to any website that collects data on persons under the age of 13, which in practice covers most sites kids actually use, since nearly every site collects data of some sort. The law requires that sites obtain parental consent before collecting data on users in this age group. As a consequence, most sites now set 13 as the minimum age for users, though parents can also set up profiles on behalf of younger children.
According to a recent study, however, children still tend to sign up before reaching that age, regardless of whether their parents consent. After all, faking parental consent is often easier than forging a permission slip for a school field trip. So, as the problem persists, lawmakers have remained concerned.
New Laws Attempt to Rein Kids In
In response to the continued concerns about children’s safety online, members of Congress have been working on more legislation. Of late, the Protecting Kids on Social Media Act, the Kids Online Safety Act, and the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) have drawn considerable media attention and have been described by lawmakers as responses to a post-pandemic health crisis affecting the mental well-being of young people.
For more information about this health crisis, you can read our blog post “Are Social Media Platforms Responsible for Kids’ Deaths?” It highlights a study carried out internally at Facebook’s parent company, Meta, which showed a link between increased suicidality and social media use among teens.
The Protecting Kids on Social Media Act
Late last month, Senators Tom Cotton (R-Ark.), Chris Murphy (D-Conn.), Katie Britt (R-Ala.), and Brian Schatz (D-Hawai’i) introduced a bill, S. 1291, titled the Protecting Kids on Social Media Act.
The Senators laid out the proposed law’s mission. “Our bill will help us stop the growing social media health crisis among kids,” said Senator Schatz. “[O]ur bill will put parents back in control of what their kids experience online,” said Senator Cotton. “[T]his bipartisan legislation would take important steps to protect kids and hold social media companies accountable,” said Senator Murphy.
The law would prohibit all children under the age of 13 from using social media platforms, requiring social media companies to use up-to-date technology to verify that users are at least 13. It would also require parental consent in a broader range of circumstances than COPPA and past legislation did. And it would bar social media companies from using algorithms to recommend content to users under the age of 18.
In response to claims that legislation of this kind violates freedom of speech, Senator Schatz asserted, “The idea that an algorithm has some sort of First Amendment right to get into your kid’s brain is preposterous. And the idea that a 13-year-old has some First Amendment right to have an algorithm shove upsetting content down their throat is also preposterous.”
The Kids Online Safety Act
In February of this year, Senator Richard Blumenthal (D-Conn.) introduced the Kids Online Safety Act alongside Senators Marsha Blackburn (R-Tenn.) and Edward Markey (D-Mass.). The bill, S. 3663, would require social media companies to give minors more options to protect their information, disable features deemed addictive, and turn off algorithms that recommend content for users to consume.
Senator Blumenthal has said, “Our bill provides specific tools to stop Big Tech companies from driving toxic content at kids and to hold them accountable for putting profits over safety. Record levels of hopelessness and despair—a national teen mental health crisis—have been fueled by black box algorithms featuring eating disorders, bullying, suicidal thoughts, and more.”
In short, Blumenthal has touted the bill as imposing a “duty of care” that social media platforms owe their users.
Children and Teens’ Online Privacy Protection Act (COPPA 2.0)
An amendment to the Children’s Online Privacy Protection Act of 1998, COPPA 2.0 broadens the privacy protections of the original law to cover minors between the ages of 12 and 16; the original law covered only children under 13. COPPA 2.0 also broadens the requirements for obtaining parental consent, further restricts the use of algorithms to recommend content to users, and mandates procedures for protecting the information collected from users.
A description of the bill, S. 1628, reads: “Tracking the online movements of children and teens and collecting their personal data is a widespread and harmful practice among websites, applications, and online actors of all kinds today.”
Barriers to Creating Liability for Social Media Companies
Historically, online platforms have enjoyed a significant degree of protection from forms of liability that could otherwise stem from digital mishaps. FindLaw staff have noted that one of the greatest barriers to creating liability for social media companies is Section 230 of the Communications Decency Act of 1996.
This section of the Act provides online platforms with immunity from civil liability stemming from third-party content, which is the bread and butter of all social media platforms. Third-party content is anything created by someone other than the platform itself (say, a site user) and credited to that person when published on the site. It can take the form of a social media post, blog, article, image, or video. Thus, companies like Instagram, TikTok, and Snapchat have relied heavily on the protections Section 230 provides.
Laws like those introduced recently aim to pare back the protections social media platforms have enjoyed, but concerns persist over whether such laws would violate First Amendment rights. To the extent these laws restrict speech, they are vulnerable to constitutional challenge.
It remains to be seen whether Democrats and Republicans can come together to impose greater restrictions on Big Tech. Historically, the industry has been given a lot of latitude to collect and sell consumers’ information, with both parties turning a blind eye.
Related Resources
- After Executive Order Condemning Online Censorship, Should Social Media Companies Fear Liability? (FindLaw’s Practice of Law Blog)
- Social Media Shopping Scams Plague Young Adults (FindLaw’s Law and Daily Life Blog)
- Schools Sue Social Media Platforms Over Mental Health Harm (FindLaw’s Courtside Blog)