Instagram is stepping up its efforts to ensure that its younger users accurately represent their ages on the platform, using artificial intelligence (AI) to detect users who may have misrepresented their birthdates. This move, announced by parent company Meta Platforms on Monday, comes amid growing concerns about social media’s impact on teen users’ mental health and well-being.
AI to Detect Misrepresentation of Age
Meta has been using AI for some time to estimate user ages, but Instagram is now testing a more proactive approach. The company plans to use AI to flag accounts that may belong to teenagers, even if the users initially entered an inaccurate birthdate when signing up. If an account is flagged for age misrepresentation, it will automatically be categorized as a “teen account,” which comes with additional restrictions designed to protect younger users.
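Conceptually, the flagging step amounts to comparing the age a user declared at sign-up with the age the model estimates from behavior. The minimal sketch below illustrates that comparison only; the class, field names, threshold, and the idea of a single numeric estimate are assumptions for illustration, not Meta’s actual implementation.

```python
from dataclasses import dataclass

CURRENT_YEAR = 2025
ADULT_AGE = 18

@dataclass
class Account:
    user_id: str
    declared_birth_year: int   # birthdate entered at sign-up
    estimated_age: int         # age predicted by an ML model (hypothetical)

def should_reclassify_as_teen(account: Account) -> bool:
    """Flag accounts whose declared age says 'adult' but whose
    model-estimated age suggests the user is actually a teenager."""
    declared_age = CURRENT_YEAR - account.declared_birth_year
    return declared_age >= ADULT_AGE and account.estimated_age < ADULT_AGE

# Accounts returning True here would be moved into the restricted
# "teen account" category described below.
```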
Teen Accounts Get Increased Privacy and Restrictions
Once an account is confirmed as a teen account, it will be subject to stricter privacy and content controls. For instance, teen accounts will automatically be set to private, limiting who can see their posts and stories. Direct messages will only be allowed from people the teen already follows or is connected to. Sensitive content, such as videos of violent acts or those promoting cosmetic procedures, will also be restricted. Teens will receive notifications if they spend more than an hour on Instagram, and a new “sleep mode” feature will turn off notifications and send automatic replies to direct messages between 10 p.m. and 7 a.m.
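As a rough illustration, the restrictions described above could be thought of as a default configuration applied to any account classified as a teen account. Every field name and value in this sketch is an assumption drawn from the description above, not Instagram’s real settings schema.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class TeenAccountSettings:
    """Illustrative defaults mirroring the restrictions described above."""
    private_by_default: bool = True           # posts/stories visible only to approved followers
    dms_from_connections_only: bool = True    # messages limited to people the teen already follows
    sensitive_content_limited: bool = True    # e.g. violent or cosmetic-procedure content
    daily_usage_reminder_minutes: int = 60    # nudge after an hour on the app
    sleep_mode_start: time = time(22, 0)      # 10 p.m.: notifications off,
    sleep_mode_end: time = time(7, 0)         # auto-replies to DMs until 7 a.m.

def apply_teen_restrictions() -> TeenAccountSettings:
    # In practice the platform would persist settings per account;
    # here we simply return the default restricted configuration.
    return TeenAccountSettings()
```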
AI Trained to Detect Age Signals
Meta’s AI will use a variety of signals to determine the likely age of an Instagram user. These signals include the type of content the account interacts with, the information listed on the profile, and the account’s creation date. The goal is to ensure that users who are underage are appropriately classified and given the necessary restrictions to protect them on the platform.
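In spirit, the system combines several weak signals into an overall judgment about a user’s likely age. The toy scorer below only illustrates that idea: the signal names, weights, and threshold are invented for illustration and bear no relation to Meta’s actual model, which is not public.

```python
from dataclasses import dataclass

@dataclass
class AgeSignals:
    """Hypothetical feature bundle standing in for the model's real inputs."""
    teen_content_interaction_ratio: float  # share of engagement with teen-oriented content
    profile_suggests_minor: bool           # e.g. a grade level or school listed on the profile
    account_age_days: int                  # time since the account was created

def likely_underage(signals: AgeSignals, threshold: float = 0.5) -> bool:
    """Toy weighted score standing in for a trained classifier."""
    score = 0.0
    score += 0.5 * signals.teen_content_interaction_ratio
    score += 0.3 if signals.profile_suggests_minor else 0.0
    score += 0.2 if signals.account_age_days < 365 else 0.0
    return score >= threshold
```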
Social Media Scrutiny and Legal Challenges
This move comes as social media platforms, including Instagram, face increasing scrutiny from governments and advocacy groups over how they affect the mental health of young users. Many states are pushing for age verification laws to prevent children under 13 from using platforms like Instagram, though these laws have faced court challenges. While Meta supports the idea of app stores verifying users’ ages, it also believes that social media platforms should be responsible for ensuring their products are safe for younger audiences.
Parental Involvement in Online Safety
To further address concerns, Instagram will also provide parents with information on how to talk with their teens about the importance of giving accurate information online. This initiative aims to empower parents to guide their children in using the platform responsibly, while ensuring that age-related restrictions are enforced to protect young users from inappropriate content and excessive screen time.