By Ana Cecilia Pérez
Technological advancement has always been accompanied by regulatory debate. Today, Artificial Intelligence (AI) and social media are at the center of the conversation. How do we protect people, especially children and teenagers, without resorting to excessive measures that restrict innovation and free access to information?
In Querétaro, lawmakers have proposed blocking social media access for minors under 14 and prohibiting cell phone use in schools, following the example of countries such as France, where phones have been restricted in classrooms since 2018, and China, which has limited how much time minors can spend on TikTok. In the United States, several states have implemented similar regulations, such as requiring parental consent before minors can access social platforms.
The argument behind these initiatives is clear: mental health risks, cyberbullying and exposure to inappropriate content are real problems. However, these restrictions also reflect an uncomfortable reality: the absence of digital education from an early age, and many parents' lack of knowledge about how to guide their children in the safe use of digital platforms.
The World Health Organization (WHO) has warned of rising anxiety and depression among minors, largely linked to unsupervised social media use and the absence of clear limits. Added to this is a phenomenon I constantly observe in my workshops and conferences for children and parents: many parents not only allow but actively help their children lie about their age to access platforms that clearly state they are not suitable for them.
When adults themselves reinforce the idea that "lying is okay" if it grants access to a social network, they open the door to normalizing behaviors that can escalate into greater risks. The problem is not just access, but the lack of education on how to use technology safely and responsibly.
Failing to educate children and teenagers in the proper use of digital platforms prevents them from developing judgment, leaving them vulnerable to manipulation, misinformation and cyber risks. It is not only a matter of shielding them from exposure, but also of teaching them not to misuse technology in ways that compromise the privacy and security of others.
Meanwhile, AI regulation is advancing in various parts of the world. Mexico still has no specific AI legislation, but elsewhere in Latin America, countries such as Brazil and Peru have already developed initial regulatory frameworks.
In Europe, the European Union's AI Act imposes strict controls on high-risk systems and prohibits those that may violate fundamental rights. In the United States, the Biden administration issued an executive order to regulate AI, focusing on safety and transparency.
China has taken a different path, establishing strict rules for the use of AI in social media and advertising and requiring that AI models comply with specific regulations before they are launched.
The regulatory dilemma surrounding AI and social networks has no single answer.