OpenAI CEO Sam Altman apologizes for not alerting police about Canada shooting suspect

by Binghamton Herald Report
April 27, 2026
in Business

OpenAI Chief Executive Sam Altman apologized to a Canadian community for failing to alert police about a mass shooter’s conversations with its chatbot.

Authorities said Jesse Van Rootselaar, 18, killed eight people, including schoolchildren, in Tumbler Ridge, British Columbia, before taking her own life in February.

“I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” Altman said in a letter Thursday. “While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.”

On Friday, British Columbia Premier David Eby posted the letter on social media.

The letter came after the Wall Street Journal, citing people familiar with the matter, reported this year that Van Rootselaar conversed with ChatGPT about gun violence, prompting OpenAI employees to debate whether to alert Canadian law enforcement.

OpenAI banned the user’s account but, after weighing whether the activity would be considered an imminent and serious risk of physical harm to others, decided not to notify police.

Technology companies have faced heightened scrutiny in the wake of mass shootings over how criminals use their tools to plan attacks or broadcast killings. But the rise of artificial intelligence chatbots that quickly answer questions and generate content also means that people are spilling their darkest thoughts online. AI companies are now weighing public safety against user privacy while contending with new lawsuits and investigations.

In March, the family of a hospitalized Tumbler Ridge shooting victim sued OpenAI, alleging that the company knew the shooter was planning a mass attack but failed to alert law enforcement.

In a post Friday on X, Eby called the apology “necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge.”

In the letter, Altman said he had spoken with Tumbler Ridge Mayor Darryl Krakowka and Eby about the shooting, and that they had agreed on a public apology. Altman said he was committed to finding ways to prevent such tragedies.

“Going forward, our focus will continue to be on working with all levels of government to help ensure something like this never happens again,” Altman said in the letter.

OpenAI is also facing backlash in the United States over whether it is doing enough to protect public safety.

Last week, Florida’s attorney general launched a criminal investigation into ChatGPT and OpenAI to determine whether the San Francisco AI company “bears criminal responsibility” for the chatbot’s actions in a Florida State University shooting last year that left two people dead. Prosecutors had been reviewing conversations between the suspect, Phoenix Ikner, and ChatGPT.
