A new term has gained prominence over the past year: "brain rot," Oxford's Word of the Year for 2024. It is defined as “the supposed deterioration of a person’s mental or intellectual state, especially viewed as the result of overconsumption of material (now particularly online content) considered to be trivial or unchallenging. Also: something characterized as likely to lead to such deterioration.” The choice marks a pivotal moment, revealing how our societies are becoming increasingly overwhelmed by the latest wave of social media in an ever more screen-dominated world.
Brain rot and, more broadly, the effects of the digital age have emerged as urgent mental health concerns. Experts are increasingly calling for immediate attention to the growing body of evidence linking social media use to declining mental health among young people. A 2025 Pew Research Center study captured a sharp rise in this concern: 45% of teenagers now say they spend too much time on social media, up from 36% in 2022.
Social media addiction is particularly acute among adolescents and young adults, many of whom spend more than 6.5 hours a day online, much of it passively consuming low-value content. Behaviors linked to this addiction, such as doomscrolling, zombie scrolling and excessive screen time, are increasingly associated with symptoms like brain fog and reduced concentration. Experts warn that the dopamine-driven design of social media platforms contributes to cognitive overload and emotional burnout, intensifying the mental health risks for younger users.
As societies become increasingly immersed in digital technology, efforts to raise awareness about balanced media consumption and to mitigate the harmful effects of overexposure are gaining momentum. At the same time, digital platforms that profit from addictive design features encouraging compulsive use face growing scrutiny over their lack of accountability and inadequate safety standards.
In the United States, the Kids Online Safety Act (KOSA) was recently introduced in response to mounting concerns over the impact of social media and online platforms on youth mental health. The proposed legislation seeks to establish stronger safeguards by holding tech companies accountable for content and design features that may harm minors. In December 2024, Australia adopted a significantly more stringent approach, approving a nationwide social media ban for individuals under the age of 16, a notable departure from previous measures. The law sets a minimum age for social media access and removes exceptions based on parental consent or existing accounts. These regulations are being watched closely elsewhere: Norway has announced plans for a similar ban, and the U.K. has indicated it is following developments.
These developments mark a significant shift in the demand for accountability from tech companies, which are now expected to implement concrete safety measures, such as biometric verification or ID validation, to ensure users are over 16 and to mitigate risks to mental health and online safety. Crucially, this approach assigns direct legal and financial responsibility to social media companies rather than placing the burden solely on users or parents. While comprehensive policy frameworks for digital safety are still lacking, addressing the digital determinants of young people’s mental health and well-being appears to be a growing priority on the agendas of governments and relevant agencies.
Indeed, we need more coordinated, evidence-informed action from researchers, policymakers, regulators and platforms, particularly as the debate on social media’s impact on youth mental health continues to evolve more rapidly than current policies or industry practices. In the meantime, the burden of proof must shift from demonstrating that social media and other digital technologies are harmful to proving that they are safe.
As the discussion and research continue, we must apply the precautionary principle, as advised by the World Health Organization (WHO) in its recent report on the digital determinants of youth mental health. Until digital platforms can be proven safe, authorities appear increasingly inclined to implement protective measures that guard against serious and potentially irreversible harm to individuals and younger generations.
Creating a safer and more supportive digital sphere is complex but imperative. Effectively tackling the digital determinants of mental health requires holding the technology sector accountable for both its actions and inactions. This entails mandating transparency around commercial interests and the industry's involvement in the development and dissemination of digital platforms and emerging technologies. The next steps in policy development should include engaging the technology industry as a partner alongside civil society, adopting robust conflict-of-interest policies and fostering safe spaces for open dialogue with industry stakeholders. As digital technologies advance, strategies must evolve with them to ensure that these technologies support, rather than jeopardize, the mental health and well-being of young people.