On September 18, 2024, Texas Attorney General Ken Paxton announced a settlement with Pieces Technology under the Texas Deceptive Trade Practices-Consumer Protection Act (DTPA). The enforcement action centered on alleged false advertising of the company's AI capabilities, marking the first settlement involving generative AI under a state consumer protection law.
Pieces Technology uses AI to assist hospitals by summarizing patient data and drafting clinical notes. The company advertised its AI as having a "critical hallucination rate" below 0.001%, a metric describing how often the system produces false or misleading output.
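For context, a rate below 0.001% works out to fewer than one critical error per 100,000 outputs. Here is a minimal sketch of that arithmetic, using a hypothetical volume rather than any figure from the case:

```python
# Illustrative arithmetic only; the volume below is hypothetical, not from the settlement.
total_summaries = 100_000        # hypothetical number of AI-generated clinical notes
advertised_rate = 0.001 / 100    # 0.001% expressed as a fraction (0.00001)

expected_critical_errors = total_summaries * advertised_rate
print(expected_critical_errors)  # 1.0 -> about one critical hallucination per 100,000 notes
```

The dispute was not over the arithmetic itself but over whether the advertised figure was substantiated.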
The Texas AG argued that these claims were “false, misleading, or deceptive” under the DTPA.
Although Pieces denied wrongdoing, the settlement requires the company to clearly and conspicuously disclose the meaning of the accuracy metrics it advertises, the methods used to calculate them, and any known or reasonably knowable harmful or potentially harmful uses of its products.
No monetary penalties were imposed, but Pieces must comply with future state demands to demonstrate adherence to the settlement terms.
The case is part of a broader trend of state attorneys general reaching AI through existing consumer protection laws. Earlier this year, Massachusetts issued guidance on how its consumer protection law applies to AI, and Colorado passed a law regulating algorithmic discrimination that takes effect in 2026.
The Texas settlement with Pieces Technology highlights the growing regulatory focus on AI transparency and accountability, and it fits squarely within Paxton's stated enforcement priorities.
In a June 2024 press release, Attorney General Paxton stated, “Any entity abusing or exploiting Texans’ sensitive data will be met with the full force of the law. Companies that collect and sell data in an unauthorized manner, harm consumers financially, or use artificial intelligence irresponsibly present risks to our citizens that we take very seriously.”
He added, “As many companies seek more and more ways to exploit data they collect about consumers, I am doubling down to protect privacy rights.”
As AI becomes more integrated into the healthcare sector, organizations must prioritize accuracy, transparency, and consumer safety when deploying AI systems.
The Texas settlement sets a precedent for how state attorneys general may address AI-related marketing claims. Businesses should anticipate heightened scrutiny of AI performance claims and substantiate them carefully to mitigate future regulatory and legal risks.
Related: The future of AI regulation
Can AI-powered features be integrated with HIPAA compliant email?
Yes, AI-powered features can be integrated with HIPAA compliant emailing platforms, like Paubox, to automate processes like patient consent management and sending personalized emails while maintaining HIPAA compliance.
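As a rough illustration (this is not Paubox's actual API; the endpoint, credential, field names, and helper function below are hypothetical placeholders), such an integration might generate a personalized message and hand it to a HIPAA compliant email service over HTTPS:

```python
# Hypothetical sketch: the endpoint, credential, and JSON fields are placeholders,
# not the real API of Paubox or any specific vendor.
import requests

API_URL = "https://api.example-hipaa-email.com/v1/messages"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                                     # placeholder credential

def send_consent_reminder(patient_email: str, patient_name: str) -> None:
    """Send a personalized consent reminder through a HIPAA compliant email relay."""
    body = (
        f"Hi {patient_name}, please review and sign your updated consent form "
        "in the patient portal."
    )
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"to": patient_email, "subject": "Consent form reminder", "text": body},
        timeout=10,
    )
    resp.raise_for_status()  # surface delivery errors instead of failing silently

send_consent_reminder("patient@example.com", "Alex")
```

Whatever service is used, the key compliance point is that the sending platform operates under a business associate agreement and encrypts messages in transit.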
Are there compliance considerations when using AI in healthcare email?
Yes, healthcare providers must ensure that AI-powered features comply with HIPAA regulations and industry best practices for data security and privacy. Additionally, providers should evaluate the reliability of AI algorithms to avoid potential risks or compliance issues.
What do providers need to send HIPAA compliant emails through Google?
Providers must use a Business or Enterprise plan, sign a business associate agreement (BAA) with Google, and use a HIPAA compliant platform to protect patient information.
Learn more: How to set up HIPAA compliant emails on Google