President’s Administration Warns Financial Firms of AI Risks

Decisions made with biased data sets can lead to more discrimination, not less

The heads of the Consumer Financial Protection Bureau, the Justice Department's Civil Rights Division, and the Federal Trade Commission recently warned financial firms and other companies of the risks associated with using artificial intelligence (AI) tools in the workplace.

AI tools such as OpenAI's ChatGPT use vast, pre-existing data sets to make decisions. Financial firms are legally required to explain the reasons behind their financial decisions, and federal leaders cautioned that citing AI as the reason for those decisions does not comply with current U.S. law. In addition, they said that pre-existing data sets, especially those related to credit scores and mortgage approvals, often contain biases, and AI decisions made from those data sets can increase financial discrimination instead of lessening it.

“What we’re talking about here is often the use of expansive amounts of data and developing correlations and other analyses to generate content and make decisions,” said Consumer Financial Protection Bureau Director Rohit Chopra. “What we’re saying here is there is a responsibility you have for those decisions.”

As the Lord Leads, Pray with Us…

  • For Director Chopra as he seeks to prevent bias in lending and financial institutions.
  • For Assistant Attorney General Kristen Clarke as she heads the DOJ's Civil Rights Division.
  • For wisdom for Lina Khan as she chairs the Federal Trade Commission.

Sources: Reuters, Washington Post
