Victim's Attorney: FSU Shooter Was in 'Constant Communication' with ChatGPT, Used AI to Plan Attack
Context:
A Florida State University shooting in April 2025 killed two people and wounded six. Victims' lawyers are planning to sue OpenAI, contending the attacker used ChatGPT to plan the attack and was in constant contact with the AI. OpenAI acknowledged identifying an account linked to the suspect and said it is cooperating with law enforcement. The planned suit also raises potential liability for the Leon County Sheriff's Office, citing concerns about the shooter's exposure to firearms through a youth advisory program. The motive remains unclear, and authorities have not linked the assailant to the victims. The case underscores ongoing debates about AI safety, accountability, and the role of institutions in preventing violence.
Dive Deeper:
Robert Morales, a 57-year-old Aramark worker and father from Tallahassee, was killed in the attack, which also claimed Tiru Chabba, a 45-year-old Aramark vendor, and left six students wounded. The legal team representing Morales's family plans to sue OpenAI, alleging the shooter relied on ChatGPT to facilitate the crime.
Court records show more than 270 images of ChatGPT conversations listed as exhibits, though the content has not been publicly disclosed. OpenAI stated it identified a ChatGPT account believed to be associated with the suspect and shared this information with law enforcement while cooperating with authorities.
Investigators said the shooting occurred outside the student union. The attacker used a service pistol owned by his stepmother, a sheriff's deputy, and also carried a shotgun that was not fired. He was taken into custody after being shot by police and faces multiple charges, including first-degree murder.
Morales's lawyers argue that the Leon County Sheriff's Office's handling of the shooter's exposure to firearms, and his participation in a Youth Advisory Council where concerning behavior was observed, contributed to the tragedy. They suggest the sheriff's office may bear liability as well.
Separately, the report connects OpenAI to an incident in Canada involving a 17-year-old shooter, alleging the platform failed to alert authorities even though monitoring staff recognized a risk, underscoring the broader legal challenges around AI safety and responsibility.