Armenians and the Potential for the Misuse of ‘AI’

And I put AI in scare quotes because this really has to do with algorithms, not AI per se. First, a blast from a past that should be past, but isn't: discrimination against Armenians (boldface mine):

People are rightly concerned about the potential for new technology to foster discrimination. You can load parameters into an algorithm, or allow artificial intelligence to make lending decisions based on a statistical analysis of the creditworthiness of people with certain characteristics, and you’ve created a kind of digital redlining. No human hands would dictate the racial or gender prejudice, it would just happen through the bot.

In an enforcement action yesterday, the CFPB charged Citibank—not some fly-by-night operation but the nation’s third-largest bank—with violating the Equal Credit Opportunity Act from at least 2015 to 2021, by deliberately denying credit cards to … Armenians. The organization had apparently decided Armenians were all criminals prone to “bust outs,” who would rack up charges and then leave the country. The CFPB had records of employees referring to “Armenian bad guys” or the “Southern California Armenian Mafia.”

So how did Citi pull off this novel racism? Was it AI? Was it machine learning? No, they pulled any application for a credit card or an increased line of credit whose name had an -ian or -yan suffix, or applications around Glendale, California, home to about 15 percent of all Armenian Americans. And then they would just deny credit to those people, or place holds on their account, or send the application to the fraud prevention unit, or ask for income and asset verification that they wouldn’t need for anyone else.
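For a sense of how little technology a screen like that actually takes, here is a minimal sketch in Python. It is purely illustrative, not Citi's actual code or process; the function, the field names, and the 912xx ZIP-prefix check standing in for "around Glendale" are my assumptions.

```python
# Illustrative sketch only: encoding the screen described above takes a
# handful of lines. A rule like this is exactly the kind of "digital
# redlining" the Equal Credit Opportunity Act prohibits.

ARMENIAN_NAME_SUFFIXES = ("ian", "yan")
GLENDALE_ZIP_PREFIXES = ("912",)  # assumed stand-in for the Glendale, CA area

def flag_application(last_name: str, zip_code: str) -> bool:
    """Return True if the application would be routed for 'extra review'."""
    name_hit = last_name.lower().endswith(ARMENIAN_NAME_SUFFIXES)
    geo_hit = zip_code.startswith(GLENDALE_ZIP_PREFIXES)
    return name_hit or geo_hit

# Both of these hypothetical applications get flagged:
# one by surname suffix, one by geography.
print(flag_application("Petrosyan", "90210"))  # True
print(flag_application("Smith", "91205"))      # True
```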

Supervisors and trainers instructed the line-level workers to hide this blatantly illegal conduct, “including by telling Respondent employees not to discuss it in writing or on recorded phone lines.” They were then told to make up fake reasons for denying credit to Armenians. If employees didn’t flag Armenian names or Glendale-area residents, they would be reprimanded.

This is where I think David Dayen gets it wrong:

But the lesson here is that Citi is big enough that they could have buried this “do not sell to Armenians” directive in lines of computer code. That they went old-school and just searched for suffixes suggests that there wasn’t much to be gained from the whiz-bang algorithm. Our zeal to regulate new and exciting developments in rip-offs and lies should not overlook the old standby of random stereotyping. Some things never go out of style.

The concern is that, next time, they will use algorithmic methods to discriminate intentionally (whether against Armenians or anyone else). My hunch is that Citi ‘did it old school’ because they were stupid and didn’t think anyone would rat them out. Now that they’ve been caught, managers will know to use algorithmic or coding methods to do it instead.


3 Responses to Armenians and the Potential for the Misuse of ‘AI’

  1. David Chase says:

    But…. why? WTF?

  2. Pingback: Mike’s Blog Round-Up ... from Crooks & Liars Tengrain - Tom Bettenhausen's

  3. edivimo says:

    Oh, yes! Next time it will be so easy to put that into some of the algorithms they already use to accept or deny credit.
    And the only barrier will be the ethical thinking of the coders and the future maintainers reviewing the code.
