Instagram to alert parents if their teens search for suicide or self-harm terms

by Binghamton Herald Report
February 26, 2026
in Business

Instagram, a social media platform popular among young people, said Thursday it will alert parents if their teens repeatedly search for suicide- or self-harm-related terms.

“Our goal is to empower parents to step in if their teen’s searches suggest they may need support,” the company said in a blog post.

Parents will receive a notification through text, email or WhatsApp. They will also have the option to view resources to help them have sensitive conversations with their teen.

Suicide prevention and crisis counseling resources

If you or someone you know is struggling with suicidal thoughts, seek help from a professional and call 988. The United States’ first nationwide three-digit mental health crisis hotline connects callers with trained mental health counselors. Text “HOME” to 741741 in the U.S. and Canada to reach the Crisis Text Line.

The move is the latest example of how tech companies are responding to concerns from parents, politicians and advocacy groups that they’re not doing enough to protect young people from harmful content.

In Los Angeles, a landmark trial is underway over whether tech companies such as Instagram and YouTube can be held liable for allegedly promoting a harmful product and addicting users to their platforms.

The trial included testimony from Instagram head Adam Mosseri, who told the court that the company is trying to be as “safe as possible but also censor as little as possible.”

Safety concerns have intensified as teens, including some who later died by suicide, have turned to AI chatbots to share some of their darkest thoughts.

Instagram has an AI assistant within its search bar. Meta, which owns Instagram, is building similar alerts for cases in which teens try to have certain conversations about suicide and self-harm with that assistant.

Meta has rules against posting content that encourages suicide or self-harm but allows people to discuss the topics. The parent company has also taken action against millions of pieces of suicide, self-harm and eating disorder content, its transparency reports show.

Some parents and teens, though, have alleged in lawsuits that young people have seen self-harm content on Instagram.

Roughly 63% of U.S. teens ages 13 to 17 use Instagram, according to a Pew Research Center survey released in December. More than half of U.S. teens also use chatbots to search for information, according to a separate survey released this week.

Instagram, which has more than 3 billion monthly active users, said that most teens don’t search for suicide or self-harm content on the platform. It blocks such searches and directs people to suicide prevention resources. Instagram said the alerts are part of its teen accounts, which include limits on who young people can message, time-limit reminders and other features.

Parents who use these tools to keep an eye on their teens will start receiving alerts in the U.S., U.K., Australia and Canada next week. The alerts will roll out to other regions later this year.

Social media platforms have been taking other steps to improve safety. This month, Meta, TikTok and Snap agreed to be rated on their teen safety efforts as part of a new program from the Mental Health Coalition.
