Instagram, owned by Meta, has announced a new feature that alerts parents if their teenage children repeatedly search for terms related to suicide or self-harm within the app, a move that comes amid mounting legal pressure over the impact of digital platforms on children's mental health.

The feature, officially unveiled this week, works exclusively on accounts with parental supervision tools enabled: both the teenager and their parent must agree to participate in the in-app monitoring system. When repeated searches for sensitive words or phrases are detected within a short period of time, an alert is sent to the parent's account. Depending on the contact information available, the alert may arrive by email, text message, in-app notification, or WhatsApp. An Instagram spokesperson said the mechanism is intended to help families intervene early and provide the necessary support, and noted that it complements existing policies that block this type of content for teenagers and direct them to mental health services and helplines.
The move comes as Meta faces lawsuits in the United States over the impact of its platforms on young people, including a case in Los Angeles accusing the company of designing its apps in ways that promote addiction among minors, and another in New Mexico alleging that it failed to protect young users from online exploitation.
While some experts welcome any measure that strengthens parental oversight and supports adolescent mental health, critics argue that placing the responsibility for monitoring on parents is not enough. They call for a review of the recommendation algorithms and content display mechanisms that may expose young people to harmful material without adequate safeguards. Amid the growing controversy, calls are mounting for stricter legislation to regulate children's access to digital platforms and to set a minimum age for their use.