Fintech Flash
October 1, 2024

Double Clicking on Innovation in Consumer Finance: Responsible Use of AI

Artificial intelligence (AI) is becoming ubiquitous across sectors, and the financial services industry is no exception. With the rise of AI has come increased regulatory scrutiny of its use. This Fintech Flash addresses, and provides recommendations regarding, the responsible use of AI in credit underwriting, marketing, and “chatbots.”1 For more insights into pertinent GenAI topics, please see Goodwin’s GenAI hub.

Responsible Use of AI in Credit Underwriting

In a separate Fintech Flash, we discussed the benefits and risks of using alternative data in credit underwriting. Like alternative data, AI is increasingly being used to assist lenders in underwriting credit applications, and it can provide many benefits for lenders and applicants alike. For example, AI in credit underwriting can help improve credit risk assessments and provide new means of extending credit to applicants who would not normally be eligible under traditional credit-score-based underwriting. Critics and regulators, however, contend that AI in credit underwriting can lead to unlawful bias.2 

Many regulators3 are paying close attention to the risk that AI could increase the use of illegal practices similar to redlining. Specifically, bias (“a systematic distortion of a statistical result”) is a central concern with the use of AI in underwriting because, critics say, algorithms may simply automate the incorporation of discrimination into credit decisions.4 Regulators worry that the algorithms underlying AI tools might intentionally or unintentionally exclude borrowers based on race, ethnicity, or other prohibited bases for discrimination.5 Additionally, many AI systems operate as “black boxes” whose internal mechanisms are not transparent to most people, including the developers themselves. This opacity can complicate efforts to determine the fairness of AI in the credit underwriting process.6

Regarding specific fair lending requirements, the Equal Credit Opportunity Act (ECOA) and Regulation B mandate that creditors provide notices of adverse action that accurately inform consumers about the specific reasons behind those actions.7 AI black boxes can make it difficult to comply with this requirement. Acknowledging this challenge, the Consumer Financial Protection Bureau (CFPB) has issued guidance on the necessity of providing accurate adverse action notices regardless of the type of technology used to reach a conclusion on a credit applicant.8 In particular, the CFPB’s guidance clarifies that creditors must provide precise and specific reasons for adverse decisions made by AI tools. Creditors cannot use vague categories to explain these actions and are required to disclose specific reasons even if it might shock, upset, or anger consumers to find out their credit applications were judged on the basis of data that may not seem to be directly related to their finances.9  
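
To make the compliance challenge concrete, below is a minimal sketch of how a lender’s engineering team might map a model’s feature-level contributions to the specific, plain-language reasons an adverse action notice must state. The feature names, weights, and reason descriptions are hypothetical assumptions, and production credit models typically rely on more sophisticated attribution methods (for example, Shapley-value techniques); the sketch illustrates only the mapping step, not a compliance standard.

```python
# Hypothetical sketch: deriving specific adverse action reasons from a
# credit model's feature contributions. The weights, means, and reason
# descriptions below are illustrative assumptions, not a real model.

# A simple linear scoring model: contribution = weight * (value - mean).
# For more complex models, attribution methods (e.g., Shapley values)
# play the same role; the mapping step below is unchanged.
WEIGHTS = {"utilization": -2.0, "delinquencies": -3.5, "inquiries": -1.0, "tenure": 1.5}
MEANS = {"utilization": 0.30, "delinquencies": 0.2, "inquiries": 1.0, "tenure": 6.0}

# Each model feature maps in advance to a specific, actionable reason a
# consumer can understand -- not a vague category such as "credit profile."
REASONS = {
    "utilization": "Proportion of balances to credit limits is too high",
    "delinquencies": "Number of recent delinquent payment obligations",
    "inquiries": "Too many recent inquiries on the credit report",
    "tenure": "Length of credit history is insufficient",
}

def adverse_action_reasons(applicant: dict, top_n: int = 4) -> list[str]:
    """Return specific reasons, ordered from most to least adverse."""
    contributions = {
        feature: WEIGHTS[feature] * (applicant[feature] - MEANS[feature])
        for feature in WEIGHTS
    }
    adverse = sorted(
        (f for f, c in contributions.items() if c < 0),
        key=lambda f: contributions[f],  # most negative contribution first
    )
    return [REASONS[f] for f in adverse[:top_n]]

declined = {"utilization": 0.85, "delinquencies": 2, "inquiries": 4, "tenure": 8.0}
for reason in adverse_action_reasons(declined):
    print(reason)
```

The design point is that every model input is mapped in advance to a specific, actionable reason, so the notice never falls back on a vague category.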

Similar to the CFPB, the Federal Trade Commission (FTC), which enforces federal consumer protection laws, has opined on the use of AI in credit underwriting and recognizes that big data analytics may lead to bias or other adverse effects on consumers.10 The FTC advises that AI tools should be “empirically derived, demonstrably and statistically sound.”11 For the responsible use of AI in credit underwriting, institutions should (1) understand what data is used in their models and how it influences the outcomes; (2) be able to explain this process to consumers; and (3) validate and revalidate AI models to confirm that they work as intended and do not illegally discriminate.12 

Based on the regulatory guidance discussed above, institutions and their service providers should focus on the following: 

  • Transparency. Ensure that any AI tool used for underwriting facilitates the transparency needed to meet adverse action notice requirements.
  • Testing. Compare sample groups of creditworthy and noncreditworthy applicants, follow recognized statistical methods, and regularly revalidate and adjust the models as necessary to ensure accuracy.13 (A minimal screening sketch follows this list.)
  • Validation. Be able to answer the following fundamental questions before implementing AI in credit underwriting models:
    • How representative is your dataset?
    • Does the data model account for biases?
    • How accurate are the predictions based on big data?
    • Does your reliance on big data raise ethical or fairness concerns?14 
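
To illustrate the testing and validation points above, the following sketch computes one widely used screening metric, the adverse impact ratio: each group’s approval rate divided by the approval rate of a reference group. The sample records and the 0.8 review threshold (borrowed from the four-fifths rule of thumb used in the employment context) are illustrative assumptions; rigorous fair lending testing applies recognized statistical methods well beyond this screen.

```python
# Hypothetical sketch: screening model decisions for disparate approval
# rates across groups. The records and the 0.8 threshold are illustrative
# assumptions; real fair lending analysis is far more rigorous.
from collections import defaultdict

def approval_rates(records: list[tuple[str, bool]]) -> dict[str, float]:
    """records: (group, approved) pairs from a validation sample."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates: dict[str, float], reference: str) -> dict[str, float]:
    """Each group's approval rate relative to the reference group's rate."""
    return {g: r / rates[reference] for g, r in rates.items()}

sample = [("A", True)] * 60 + [("A", False)] * 40 + [("B", True)] * 42 + [("B", False)] * 58
rates = approval_rates(sample)
for group, ratio in adverse_impact_ratios(rates, reference="A").items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"group {group}: approval {rates[group]:.0%}, ratio {ratio:.2f} ({flag})")
```

A flagged ratio is a prompt for further analysis rather than proof of illegal discrimination, and revalidation should rerun this kind of screen whenever the model or its input data changes.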

AI in Marketing

AI offers numerous applications in marketing, such as identifying the optimal times and placements for reaching target audiences and assisting with ad creation. Recent technological advances have revolutionized marketing, and AI is playing a major role in that shift. 

The CFPB, however, is scrutinizing the use of AI in marketing in the consumer financial sector.15 In August 2022, the CFPB issued an interpretive rule stating that digital marketers who help identify or select potential customers, or who influence consumer behavior through content placement, are usually considered service providers under the Consumer Financial Protection Act (CFPA).16 If those activities, such as AI-based targeting, violate federal consumer financial protection law, digital marketers can be held liable, including for unfair, deceptive, or abusive acts or practices (UDAAP) and other consumer financial protection violations.17 

Based on the CFPB’s current focus, entities that use AI in marketing should consider the following practices:

  • Inclusive Design. Reduce bias in design by including diverse perspectives and involving a broad range of individuals in AI development.
  • Governance Principles. Develop well-defined governance frameworks for AI marketing that incorporate best practices in ethics and consider the latest innovations in technology.
  • Continuous Audit. Regularly audit AI systems to prevent bias and to ensure alignment with ethical and legal obligations (see the sketch after this list). 
  • Personal Data. Allow users to manage their personal data and inform them about its use.
  • Feedback Loops. Create feedback systems and reporting channels to quickly address any issues related to AI marketing.18
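
As one concrete form a continuous audit might take, the sketch below compares each group’s share of delivered ad impressions against its share of the eligible audience. The audience figures, impression counts, and 20% tolerance are hypothetical assumptions; an actual audit program would add statistical testing and documentation on top of a screen like this.

```python
# Hypothetical sketch: auditing an AI ad-targeting system for skewed
# delivery. Compares each group's share of ad impressions with its share
# of the eligible audience. Data and the 20% tolerance are illustrative.

ELIGIBLE_AUDIENCE = {"A": 5000, "B": 5000}   # who could have seen the ad
IMPRESSIONS = {"A": 1800, "B": 700}          # who actually saw it

def delivery_skew(audience: dict[str, int], impressions: dict[str, int]) -> dict[str, float]:
    """Ratio of impression share to audience share (1.0 = proportional)."""
    total_aud = sum(audience.values())
    total_imp = sum(impressions.values())
    return {
        g: (impressions[g] / total_imp) / (audience[g] / total_aud)
        for g in audience
    }

for group, ratio in delivery_skew(ELIGIBLE_AUDIENCE, IMPRESSIONS).items():
    flag = "investigate" if abs(ratio - 1.0) > 0.20 else "ok"
    print(f"group {group}: delivery ratio {ratio:.2f} ({flag})")
```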

Responsible Use of AI Chatbots

In today’s technology-driven world, chatbots have become valuable assets by improving customer service, optimizing operations, and offering immediate support. However, creating and using chatbots involves numerous legal and ethical considerations.19 

The CFPB, while acknowledging that chatbots can help consumers receive immediate assistance while reducing customer service costs, has raised the alarm over the expansive adoption and use of chatbots by institutions.20 In the agency’s view, poorly deployed chatbots can impede customers’ ability to resolve problems, which can violate the law, including obligations that require financial institutions offering consumer financial products or services to resolve specific kinds of disputes and inquiries from their customers.21 To meet these obligations, institutions must competently interact with customers regarding the products or services offered. The regulatory concern is that the use of AI chatbots may fall short of these obligations.22 

For example, the CFPB has consistently determined that providing customers with incorrect information — including information, or the lack thereof, given by an AI chatbot — can constitute a UDAAP.23 The CFPB has also indicated that beyond complying with other federal data security regulations for financial institutions, entities may commit a UDAAP if their data protection or information security measures related to AI chatbots are inadequate.24 Courts have also ruled that using automated decision-making tools in the form of AI chatbots may introduce bias prohibited by civil rights laws.25 As more institutions adopt the use of chatbots, the CFPB plans to monitor compliance with laws such as the ECOA and the CFPA.

The CFPB advises institutions not to rely on chatbots as their primary customer service tool when it is clear that they cannot meet customer needs or comply with the law. The agency also warns that the use of chatbots will be scrutinized in future examinations, stating that it “is actively monitoring the market and expects institutions using chatbots to do so in a manner consistent with customer and legal obligations.”26 

Adhering to the following principles can help institutions use chatbots responsibly and address regulators’ concerns:

  • Accuracy. The chatbot should be designed and tested to ensure that it provides accurate information.
  • Human Accessibility. Users should be able to escalate from the chatbot to a human representative when the situation requires it (see the sketch after this list).
  • Service Accessibility. The chatbot tool should not hinder customers’ access to their funds or ability to make payments.
  • Bias Minimization. Companies should work with chatbot developers that train the AI on diverse, representative datasets in order to minimize bias. Companies should also regularly audit interactions, gather user feedback, and adjust the chatbot accordingly.
  • Data Privacy and Security Management. Companies should incorporate data encryption, secure storage methods, and stringent data access policies to prevent data breaches and unauthorized access and to comply with applicable privacy laws.
  • Transparent Use. The chatbot should disclose, or make obvious, at the start of an interaction that the user is communicating with AI.27 
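
The following sketch illustrates the accuracy, human accessibility, and transparent use principles in combination: a wrapper that discloses AI use at the start of a session and escalates to a human representative when the user asks for one or when the underlying model’s confidence is low. The get_answer stub, the confidence threshold, and the trigger phrases are hypothetical assumptions, not features of any particular product.

```python
# Hypothetical sketch: a chatbot wrapper that discloses AI use up front
# and escalates to a human agent on request or low confidence. The
# get_answer stub, threshold, and trigger phrases are illustrative.

CONFIDENCE_THRESHOLD = 0.75
HUMAN_TRIGGERS = ("human", "agent", "representative", "person")

def get_answer(question: str) -> tuple[str, float]:
    """Stand-in for the underlying model: returns (answer, confidence)."""
    return "You can view your payment history under Account > Activity.", 0.9

def handle_session(questions: list[str]) -> None:
    # Transparent use: disclose AI at the start of the interaction.
    print("You are chatting with an automated AI assistant. "
          "You can ask for a human representative at any time.")
    for q in questions:
        if any(t in q.lower() for t in HUMAN_TRIGGERS):
            print("Connecting you to a human representative...")
            return
        answer, confidence = get_answer(q)
        if confidence < CONFIDENCE_THRESHOLD:
            # Don't guess: inaccurate answers carry UDAAP risk. Escalate.
            print("I'm not confident I can answer that. Connecting you to a human...")
            return
        print(answer)

handle_session(["Where can I see my payment history?", "I want to talk to a person"])
```

Logging each escalation and low-confidence response also creates the record that the bias audits and feedback loops described above depend on.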

Conclusion

The responsible use of AI is paramount to balancing the technology’s benefits against the risk of unintentional consumer harm. As AI continues to evolve and integrate into the financial services sector, regulators expect providers and users of AI tools to prioritize ethical principles such as transparency, accountability, and equity. Companies should adopt robust governance frameworks, conduct regular audits, and deploy AI in line with legal and regulatory standards to safeguard individual rights, prevent bias, and promote trust. 

If your company needs assistance in navigating the intricate legal landscape surrounding the use of AI, please contact the Goodwin Fintech Team.

Goodwin’s Fintech Team
We practice in every fintech vertical, including lending, alternative finance (e.g., merchant cash advances, earned wage access, and factoring), payments, deposits, insurance, broker-dealers, and investment advisers. In addition to regulatory work on product and service development, we assist fintech clients that choose to deliver their solutions through banks in negotiating and entering into bank partnership and platform agreements.

Upcoming Event: Join Goodwin and J.P. Morgan for Lunch During Money 20/20
On October 29, Goodwin's Fintech group and co-host J.P. Morgan will host a luncheon reception at TAO Asian Bistro during the Money 20/20 conference. For more information and to register, please click here.

[1] The Consumer Financial Protection Bureau (CFPB) refers to “chatbots” as systems that “simulate human-like responses using computer programming.” See CFPB, “Chatbots in Consumer Finance” (June 6, 2023).
[2] See, e.g., CFPB, “CFPB Comment on Request for Information on Uses, Opportunities, and Risks of Artificial Intelligence in the Financial Services Sector” (Aug. 12, 2024) (responding to a Treasury Department request for information).
[3] See, e.g., US Department of Justice, “Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems” (Apr. 4, 2024).
[4] Forbes, “How to Control for AI Bias in Lending” (Oct. 18, 2023).
[5] Id.
[6] CFPB, DOJ, EEOC, and FTC, “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems” (Apr. 25, 2023).
[7] 12 CFR § 1002.9.
[8] CFPB, “CFPB Issues Guidance on Credit Denials by Lenders Using Artificial Intelligence” (Sept. 19, 2023).
[9] Id.
[10] Federal Trade Commission, “Using Artificial Intelligence and Algorithms” (Apr. 8, 2020).
[11] Id.
[12] Id.
[13] Id.
[14] Id.
[15] CFPB, “CFPB Warns that Digital Marketing Providers Must Comply with Federal Consumer Finance Protections” (Aug. 10, 2022).
[16] CFPB, “Limited Applicability of Consumer Financial Protection Act’s ‘Time or Space’ Exception with Respect to Digital Marketing Providers,” interpretive rule (Aug. 10, 2022).
[17] CFPB, “CFPB Warns that Digital Marketing Providers Must Comply with Federal Consumer Finance Protections” (Aug. 10, 2022).
[18] Silverback Strategies, “Ethical Considerations of AI In Marketing: Balancing Innovation with Responsibility” (July 2, 2024).
[19] AgentX, “Ethical Considerations in AI Chatbot Design” (Apr. 10, 2024).
[20] CFPB, “Chatbots in Consumer Finance” (June 6, 2023).
[21] Id.
[22] ABA Banking Journal, “CFPB to ‘Crack Down’ on Bank Chatbots” (Aug. 14, 2024). See also The White House, “FACT SHEET: Biden-Harris Administration Launches New Effort to Crack Down on Everyday Headaches and Hassles That Waste Americans’ Time and Money” (Aug. 12, 2024) (“[C]hatbots frequently provide inaccurate information and give the run-around to customers seeking a real person. The CFPB is planning to issue rules or guidance to crack down on ineffective and time-wasting chatbots used by banks and other financial institutions in lieu of customer service. The CFPB will identify when the use of automated chatbots or automated artificial intelligence voice recordings is unlawful, including in situations in which customers believe they are speaking with a human being.”).
[23] CFPB, “CFPB Takes Action Against Carrington Mortgage for Cheating Homeowners out of CARES Act Rights” (Nov. 17, 2022). See also “In the Matter of: Carrington Mortgage Services, LLC, Consent Order, File No. 2022-CFPB-0010” (Nov. 17, 2022); “In the Matter of: EDFINANCIAL SERVICES, LLC, File No. 2022-CFPB-0001” (Mar. 30, 2022). See also CFPB, “The CFPB has entered the chat” (June 7, 2023).
[24] Id.
[25] See Huskey v. State Farm Fire & Cas. Co., 2023 WL 5848164, at *8 (N.D. Ill. Sept. 11, 2023).
[26] CFPB, “Chatbots in Consumer Finance” (June 6, 2023).
[27] AgentX, “Ethical Considerations in AI Chatbot Design” (Apr. 10, 2024). See also Perkins Coie, “Do You Have to Disclose When Your Users Are Interacting With a Bot?” (June 26, 2024).

This informational piece, which may be considered advertising under the ethical rules of certain jurisdictions, is provided on the understanding that it does not constitute the rendering of legal advice or other professional advice by Goodwin or its lawyers. Prior results do not guarantee a similar outcome.