Wrongful death litigation involving generative AI is no longer hypothetical. A new lawsuit arising from the death of a Connecticut woman tests whether existing tort doctrines can be extended to alleged harm caused not by a human advisor or professional, but by an AI system built to simulate conversation, reinforce user engagement, and respond convincingly even in sensitive contexts. This case highlights the central legal question courts are increasingly being asked to answer: Can an AI developer be held responsible under negligence or product liability frameworks when its product is alleged to foreseeably amplify delusion, dependency, or paranoia?
Wrongful Death Claims and Generative AI
In late 2025, the estate of an elderly Connecticut woman filed a wrongful death action alleging that interactions between her son and a generative AI chatbot materially contributed to a homicide-suicide. The case, First County Bank on behalf of the Estate of Suzanne Adams v. OpenAI Foundation, OpenAI OpCo, LLC, Microsoft Corp., Sam Altman, and Does 1-20, asserts that the chatbot reinforced delusional beliefs, failed to redirect the user toward mental health resources, and fostered emotional dependency, culminating in fatal harm. The suit names both the AI developer and its commercial partner, framing the technology as a defective and unreasonably dangerous product. Although still at an early procedural stage, the case represents a growing wave of litigation attempting to impose civil liability on AI developers for downstream human conduct.
Negligence and Duty of Care
At its core, the complaint advances a negligence theory requiring the plaintiff to prove duty, breach, causation, and damages, the age-old standard articulated in the landmark case of Palsgraf v. Long Island R.R. Co., 248 N.Y. 339 (1928). Plaintiffs argue that AI developers owe a duty of reasonable care to foreseeable users, particularly when the systems they deploy are designed to engage with potentially vulnerable populations.
Courts evaluating analogous claims have historically been reluctant to impose this type of duty on software providers absent a special relationship. See Doe v. Internet Brands, Inc., 824 F.3d 846 (9th Cir. 2016). In that case, the court recognized a limited duty to warn where a platform had specific knowledge of a known danger. Whether a chatbot's adaptive and personalized outputs create a comparable duty remains unresolved.
Product Liability and Design Defect Theories
Plaintiffs have also attempted to characterize the AI system as a defective product, invoking strict liability principles under Restatement (Second) of Torts § 402A. Traditionally, courts have declined to treat software as a "product" for strict liability purposes. See Winter v. G.P. Putnam's Sons, 938 F.2d 1033 (9th Cir. 1991). Although Winter involved a book, it remains foundational: the court found that books, as information, are not products, placing them in a category distinct from tangible goods. Courts have applied this reasoning to software, focusing on the intangible nature of the data. However, as AI systems increasingly perform autonomous functions, litigants are testing whether courts will adopt a more functional approach. See Hardin v. PDX, Inc., 227 Cal.App.4th 159 (2014). The Hardin court found that modifying software to delete safety warnings placed the defendant in the role of a content creator or editor rather than a "passive transmitter," potentially exposing the business to liability under traditional tort theories.
Liability Theories at Play
This growing trend in litigation raises several tort theories that have not yet been extensively tested in the AI and biometric space[i]:
- Product Liability and Design Defect: Plaintiffs argue that the AI system is a defective product whose design failed to anticipate and mitigate foreseeable psychological harms. If accepted, this theory could transform generative AI platforms into products subject to standard consumer safety expectations.
- Negligence and Failure to Warn: The claim suggests that OpenAI and its partners had a duty to implement reasonable safeguards and to warn users, especially vulnerable individuals, of potential risks.
- Causation and Foreseeability: A central challenge for plaintiffs will be proving causation by demonstrating that the chatbot's output was a proximate cause of the harm rather than a coincidental factor tied to the user's mental condition. Courts may struggle to define a meaningful legal standard for foreseeability when the technology intermediates human agency.
- First Amendment and Algorithmic Speech: AI companies have already argued that their outputs are speech protected by the First Amendment, a defense that has met with mixed results. Whether courts will treat algorithm-generated content as protected speech or as a product subject to liability remains a critical unresolved issue.
Broader Litigation Landscape
The Connecticut case is not isolated. Multiple lawsuits across several states allege that AI chatbots contributed to suicides and other harms by validating harmful information. Plaintiffs in some cases have already overcome motions to dismiss, signaling judicial willingness to entertain liability claims against AI developers.[ii]
If these cases proceed, they could force courts to develop new doctrines governing:
- Duty of care standards for autonomous and semi-autonomous technologies.
- Standards for data safety and the deployment of AI in sensitive contexts.
- Limits on algorithmic personalization that prioritizes engagement over user welfare.
Conclusion
The Adams litigation illustrates a fast-emerging reality. AI tools are moving from productivity products to emotionally influential systems, and courts may soon be forced to decide whether traditional negligence and product liability principles can be adapted to algorithmic interactions that allegedly contribute to real-world tragedy. While questions of duty, causation, and constitutional defenses remain unsettled, the case signals increasing pressure on developers to treat safety measures not as optional brand features but as core risk controls. For insurers and businesses integrating generative AI into consumer-facing services, the most immediate takeaway is that "AI harm" allegations can now be pled as mainstream tort claims framed around design choices, foreseeable risks, and preventable outcomes rather than futuristic speculation.
________________
[i] Qanbar, Yassin. Emerging Theories in Liability in the Age of AI: A New Frontier for Litigation. Rain Intelligence (April 9, 2025).
[ii] Ummer-Hashim, Shabna. AI Chatbot Lawsuits and Teen Mental Health. ABA Health Law Section (October 27, 2025).

