In a recent settlement, the Texas attorney general resolved allegations that Pieces Technologies, Inc. (Pieces), a healthcare generative AI company, misrepresented the hallucination rate of its generative AI product to healthcare providers and, in doing so, overstated the accuracy and safety of the product’s underlying software. In the press release announcing the settlement, the Texas attorney general emphasized the state’s significant interest in scrutinizing the use of AI in “high-risk” settings, such as healthcare, to protect public safety.
Pieces’ software summarizes, charts, and drafts clinical notes for doctors and nurses. Pieces allegedly marketed its software as having a “critical hallucination rate” and a “severe hallucination rate” of less than 0.001%. The Texas attorney general alleged that these claims were false, misleading, and deceptive and thus violated the Texas Deceptive Trade Practices-Consumer Protection Act of 1973, which, among other things, prohibits disseminating a statement that a person knows materially misrepresents the character of a service for the purpose of selling the service or inducing a person to enter into a contract concerning it.
Pieces denied the attorney general’s allegations. However, it agreed to implement the following measures:
- Disclose the definition of any metric, benchmark, or measurement of its generative AI products used in its marketing content and the methodology used to calculate it, or engage an independent third-party auditor to assess its services and substantiate any marketing claims concerning them
- Refrain from making false, misleading, or unsubstantiated claims about its generative AI product features, accuracy, reliability, efficacy, testing methods, monitoring methodologies, metric definitions, or training data; from misleading customers or users about the accuracy, functionality, purpose, or features of its products; and from failing to disclose any financial or similar arrangements with individuals involved in marketing, advertising, endorsements, or promotions
- Provide all current and future customers with documentation that discloses any known or reasonably knowable potentially harmful uses of its generative AI products or services, including:
  - The training data and/or models used
  - The intended purpose and user guidance
  - Known limitations or misuses
  - Any other documentation necessary to understand the output, monitor inaccuracies, and prevent misuse
While some states and other governmental bodies are enacting new laws to regulate generative AI products, such as the Colorado Artificial Intelligence Act, this settlement highlights that state regulators may instead opt to address AI-related risks under existing consumer protection laws, without passing new legislation. At the federal level, the Federal Trade Commission has used similar consumer protection laws to investigate AI companies.
What implications does this settlement have for companies developing generative AI tools intended for healthcare applications, as well as for the healthcare organizations that will eventually adopt them?
To effectively evaluate potential AI software vendors, healthcare organizations considering the implementation of generative AI tools must carefully scrutinize each product’s marketing claims and intended uses. Similarly, companies developing generative AI tools need to ensure that their disclosures accurately educate customers about the risks, limitations, and appropriate use of their AI offerings and that any claims about their technology are substantiated.
Furthermore, it is advisable for both generative AI companies and healthcare entities using these technologies to engage in ongoing risk assessments and implement risk management strategies specifically tailored to generative AI use. Such measures are crucial not only for risk mitigation and liability prevention but also for ensuring compliance with the evolving regulatory and enforcement landscape related to AI.
Follow our blog to receive additional updates and alerts on generative AI regulatory developments in the healthcare and life sciences industries. We will continue to monitor state and federal government activity on this topic.
This informational piece, which may be considered advertising under the ethical rules of certain jurisdictions, is provided on the understanding that it does not constitute the rendering of legal advice or other professional advice by Goodwin or its lawyers. Prior results do not guarantee similar outcomes.
Contacts
- Roger A. Cohen, Partner
- Simone Otenaike, Associate