New York state took novel legislative steps on Friday to limit children’s exposure to algorithmic social media feeds, passing two laws: one restricting the feeds served to minors and another protecting their privacy.
The Stop Addictive Feeds Exploitation (Safe) for Kids Act requires social media companies to restrict feeds on their platforms for users under 18 unless parental consent is granted, and prohibits companies from sending notifications about feeds to minors between midnight and 6am.
The second law, the New York Child Data Protection Act, prohibits online sites from collecting, using, sharing or selling the personal data of anyone under the age of 18, unless the site obtains informed consent or unless doing so is necessary for the purpose of the website.
The laws authorize the state attorney general to impose civil penalties of up to $5,000 per violation.
“New York is leading the nation to protect our kids from addictive social media feeds and shield their personal data from predatory companies,” Governor Kathy Hochul said, calling the legislation a “historic step forward in our efforts to address the youth mental health crisis and create a safer digital environment for young people”.
Letitia James, the New York state attorney general, said that “children are enduring a mental health crisis, and social media is fueling the fire and profiting from the epidemic”. She added that she hoped other states would follow with legislation “to protect children and put their mental health above big tech companies’ profits”.
Educators welcomed the move. Melinda Person, president of the New York state teachers’ union, said in a statement that “educators see the harmful effects of social media on our kids every day, and this legislation is a tremendous first step toward ensuring these influences remain in their proper places”.
The Safe for Kids Act defines addictive feeds as those that are based on a user’s “liked” or “followed” content or other actions that the user may not be aware of, such as how long they look at a particular piece of media.
These feeds “make predictions about interests, mood, and other factors related to what is most likely to keep users engaged for as long as possible, creating a feed tailor-made to keep each user on the platform for longer periods”, the law states. “Since their adoption, addictive feeds have had a dramatic negative effect on children and teenagers, causing young users to spend more time on social media than they otherwise would, which has been tied to significantly higher rates of youth depression, anxiety, suicidal ideation, and self-harm.”
Comparable legislation protecting children from social media has been drawn up across the US, grounded in a shared belief that social media is contributing to rising rates of mental illness among children.
Connecticut, Vermont, Illinois, New Mexico, Maryland and Minnesota are all in the process of creating or updating parental-consent laws, with some requiring online platforms to conduct children’s safety assessments, make design changes to help kids avoid harmful material, and limit who can contact minors.
Last year, 33 states sued Meta, parent company of Facebook and Instagram, alleging it violated children’s privacy. A California law that would force social media companies to change website designs is being contested in federal court.
In Washington, meanwhile, lawmakers are moving at a glacial pace to draw up national standards.
The Kids Online Safety Act (Kosa), introduced more than two years ago, reached 60 backers in the Senate earlier this year. But a number of human-rights groups oppose the legislation, underscoring ongoing divisions among experts, lawmakers and advocates over how to keep young people safe online.
“A one-size-fits-all approach to kids’ safety won’t keep kids safe,” Aliya Bhatia, a policy analyst at the Center for Democracy and Technology, told the Guardian in March.
“This bill still rests on the premise that there is consensus around the types of content and design features that cause harm. There isn’t, and this belief will limit young people from exercising their agency and accessing the communities they need to online,” Bhatia added.
Prem M Trivedi, policy director at the Open Technology Institute, said that “rather than protecting children, this could impact access to protected speech, causing a chilling effect for all users and incentivizing companies to filter content on topics that disproportionately impact marginalized communities”.
“This is an emotionally fraught topic – there are urgent online safety issues and awful things that happen to our children at the intersection of the online world and the offline world,” Trivedi added.