Texas Settles with AI Company in Healthcare Space
AI or not, false advertising/marketing is all the same.
Key Parties:
Petitioner: The State of Texas, represented by the Attorney General of Texas, Ken Paxton.
Respondent: Pieces Technologies, Inc., an AI-focused technology company.
Case Background:
At the heart of this settlement is Pieces Technologies, an AI company operating in Texas. The company develops generative AI products for inpatient healthcare facilities like hospitals, with tools that help doctors and nurses by summarizing, charting, and even drafting clinical notes.
To showcase the accuracy of its AI, Pieces Technologies advertised a series of metrics, many built around the term "hallucination." In the world of AI, a "hallucination" is an instance where the model generates incorrect or misleading information. Pieces Technologies boldly claimed that its products had incredibly low "critical hallucination" and "severe hallucination" rates: less than 0.001% and less than 1 per 100,000, respectively.
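Notice that those two figures are the same quantity expressed two ways: 0.001% of outputs is exactly 1 in 100,000. As a purely illustrative aside (the settlement faults Pieces Technologies precisely for not disclosing its methodology, so nothing below reflects the company's actual calculations), here is a minimal Python sketch of how such a rate might be estimated from an audited sample of AI outputs:

```python
# Hypothetical illustration only: estimating a "hallucination rate" from an
# audit sample. This does not reflect Pieces Technologies' actual methodology.

def hallucination_rate(flagged: int, audited: int) -> float:
    """Fraction of audited AI-generated outputs flagged as hallucinations."""
    if audited == 0:
        raise ValueError("cannot compute a rate over zero audited outputs")
    return flagged / audited

# A claimed rate of "<0.001%" is the same quantity as "<1 per 100,000":
rate = hallucination_rate(flagged=1, audited=100_000)
print(f"{rate:.6%}")                        # 0.001000%
print(f"{rate * 100_000:.0f} per 100,000")  # 1 per 100,000
```

The legal point, of course, is that the denominator and the flagging criteria matter: a rate like this is only as meaningful as the definition and sample behind it, which is exactly what the State said was missing.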
However, the State of Texas alleged that these representations were a bit too optimistic, and potentially false, misleading, or deceptive in violation of the Texas Deceptive Trade Practices Act (DTPA).
The Texas Attorney General's Consumer Protection Division investigated Pieces Technologies, focusing on the company's claims about the accuracy of its generative AI products in healthcare settings, and on the advertised low "hallucination rates" in particular.
Settlement Terms:
Without admitting any wrongdoing or liability, Pieces Technologies agreed to the following terms for a period of five years:
Clear and Conspicuous Disclosures: When marketing its AI products, the company must clearly define any metrics used (like "hallucination rate") and explain how those metrics are calculated. Alternatively, it can retain an independent auditor to verify its claims. (A sketch of what such a metric definition might look like follows this list.)
Prohibitions Against Misrepresentations: Pieces Technologies is prohibited from making false or misleading claims about its AI products, including claims about their accuracy, how they were tested, and the data used to train them.
Disclosures to Customers: The company must provide detailed documentation to its customers disclosing any known or potential harmful uses or misuses of its AI products. This includes information about the AI's limitations, risks to patients and healthcare providers, and how to monitor for inaccuracies.
Compliance Monitoring: Pieces Technologies must cooperate with the State's requests for information related to its compliance with the settlement terms.
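To make the first term above concrete: a "clear and conspicuous" metric definition is, at bottom, structured information about what was counted and how. Here is a minimal Python sketch of what such a disclosure record might contain; the field names and contents are entirely hypothetical assumptions, since the settlement prescribes no particular format:

```python
# Hypothetical sketch of a "clear and conspicuous" metric disclosure record.
# Field names and contents are illustrative assumptions, not the settlement's
# required format (the settlement prescribes no specific schema).
from dataclasses import dataclass

@dataclass
class MetricDisclosure:
    name: str         # the advertised metric, e.g. "severe hallucination rate"
    definition: str   # what counts as an instance of the metric
    numerator: str    # what is counted
    denominator: str  # what it is counted against
    method: str       # how the measurement was performed

disclosure = MetricDisclosure(
    name="severe hallucination rate",
    definition="output containing a clinically significant fabrication",
    numerator="outputs flagged as severe hallucinations by clinician review",
    denominator="all AI-generated clinical summaries in the audit sample",
    method="manual audit of a random sample of production outputs",
)
```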
Legal Precedents and Implications:
This settlement highlights the increasing scrutiny of AI claims, especially in sensitive areas like healthcare. It sets a precedent for how the Texas AG and potentially other regulators will approach misleading or unsubstantiated claims about AI performance. It also emphasizes the importance of transparency and clear disclosures in AI marketing and product documentation.
Key Takeaways for You:
AI Regulation is Increasing: This case signals a trend toward greater regulation of AI, particularly in high-risk industries like healthcare.
Marketing Claims Must Be Substantiated: Companies making claims about AI capabilities must be able to support those claims with clear evidence and transparent methodologies.
Disclosure is Crucial: Providing detailed information about AI limitations, potential biases, and risks is essential to avoid legal challenges.
Ethical Considerations are Paramount: Lawyers advising clients on AI development and deployment should prioritize ethical considerations and responsible AI practices.
This settlement serves as a valuable reminder for lawyers and AI companies to stay informed about the evolving legal landscape surrounding AI and to prioritize transparency and accountability in their AI-related practices.