SAN FRANCISCO — Meta agreed to alter its ad-targeting technology and pay a penalty of $115,054 on Tuesday, in a settlement with the Justice Department over claims that the company had engaged in housing discrimination by letting advertisers restrict who was able to see ads on the platform based on their race, gender and ZIP code.
Under the agreement, Meta, the company formerly known as Facebook, said it would change its technology and use a new computer-assisted method that aims to regularly check whether the audiences who are targeted and eligible to receive housing ads are, in fact, seeing those ads. The new method, referred to as a “variance reduction system,” relies on machine learning to ensure that housing ads are delivered equitably across protected classes of people.
Meta also said it would no longer use a feature called “special ad audiences,” a tool it had developed to help advertisers expand the groups of people their ads would reach. The company said the tool was an early effort to fight bias, and that its new methods would be more effective.
“We’re going to be occasionally taking a snapshot of marketers’ audiences, seeing who they target, and removing as much variance as we can from that audience,” Roy L. Austin, Meta’s vice president of civil rights and a deputy general counsel, said in an interview. He called it “a significant technological advancement for how machine learning is used to deliver personalized ads.”
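Meta has not published the internals of the variance reduction system, so any concrete rendering is speculative. As a minimal illustrative sketch, the “variance” Mr. Austin describes could be quantified by comparing the demographic mix of the audience eligible for an ad with the mix of users who actually saw it. The segment_of helper and the total-variation metric below are assumptions made for illustration, not details from the settlement.

```python
# Illustrative sketch only: Meta has not disclosed its variance reduction
# system. The segmentation function and the total-variation metric here are
# assumptions chosen to make the idea concrete.
from collections import Counter

def distribution(audience, segment_of):
    """Return each demographic segment's share of an audience."""
    counts = Counter(segment_of(user) for user in audience)
    total = sum(counts.values())
    return {seg: n / total for seg, n in counts.items()} if total else {}

def audience_variance(eligible, delivered, segment_of):
    """Total variation distance between the eligible audience's demographic
    mix and the mix of users who actually saw the ad (0 means identical)."""
    p = distribution(eligible, segment_of)
    q = distribution(delivered, segment_of)
    return 0.5 * sum(abs(p.get(s, 0) - q.get(s, 0)) for s in set(p) | set(q))
```

A variance of zero would mean the ad’s actual viewers mirror the eligible audience exactly; the system’s stated goal is to push that number down.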
Facebook, which became a business colossus by collecting its users’ data and letting advertisers target ads based on the characteristics of an audience, has faced complaints for years that some of those practices are biased and discriminatory. The company’s ad systems have allowed marketers to choose who saw their ads by using thousands of different characteristics, which have also let those advertisers exclude people who fall under a number of protected categories.
While Tuesday’s settlement pertains to housing ads, Meta said it also plans to apply its new system to check the targeting of ads related to employment and credit. The company has previously faced blowback for allowing bias against women in job ads and excluding certain groups of people from seeing credit card ads.
“Because of this groundbreaking lawsuit, Meta will — for the first time — change its ad delivery system to address algorithmic discrimination,” Damian Williams, the U.S. attorney for the Southern District of New York, said in a statement. “But if Meta fails to demonstrate that it has sufficiently changed its delivery system to guard against algorithmic bias, this office will proceed with the litigation.”
The issue of biased ad targeting has been especially contentious in housing. In 2018, Ben Carson, the secretary of the Department of Housing and Urban Development at the time, announced a formal complaint against Facebook, accusing the company of having ad systems that “unlawfully discriminated” based on categories such as race, religion and disability. Facebook’s potential for ad discrimination was also revealed in a 2016 investigation by ProPublica, which showed that the company made it simple for marketers to exclude specific ethnic groups for advertising purposes.
In 2019, HUD sued Facebook for engaging in housing discrimination and violating the Fair Housing Act. The agency said Facebook’s systems did not deliver ads to “a diverse audience,” even if an advertiser wanted the ad to be seen broadly.
“Facebook is discriminating against people based upon who they are and where they live,” Mr. Carson said at the time. “Using a computer to limit a person’s housing choices can be just as discriminatory as slamming a door in someone’s face.”
The HUD suit came amid a broader push from civil rights groups claiming that the vast and complicated advertising systems that underpin some of the largest internet platforms have inherent biases built into them, and that tech companies like Meta, Google and others should do more to bat back those biases.
The area of study, known as “algorithmic fairness,” has been a significant topic of interest among computer scientists in the field of artificial intelligence. Leading researchers, including former Google scientists like Timnit Gebru and Margaret Mitchell, have sounded the alarm about such biases for years.
In the years since, Facebook has clamped down on the types of categories that marketers could choose from when purchasing housing ads, cutting the number down to hundreds and eliminating options to target based on race, age and ZIP code.
Meta’s new system, which is still in development, will occasionally check on who is being served ads for housing, employment and credit, and make sure those audiences match up with the people who marketers want to target. If the ads being served begin to skew heavily toward white men in their 20s, for example, the new system will theoretically recognize this and shift the ads to be served more equitably among broader and more varied audiences.
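Again purely as a hypothetical sketch building on the helpers above: once the measured skew crosses some threshold, a corrective step might up-weight under-served segments when pacing future ad deliveries. The threshold and boost factor below are invented for illustration and are not Meta’s disclosed mechanism.

```python
# Hypothetical corrective loop, reusing distribution() and audience_variance()
# from the earlier sketch. The threshold and boost factor are illustrative
# guesses, not values from the settlement.
def rebalance_weights(eligible, delivered, weights, segment_of,
                      threshold=0.05, boost=1.1):
    """If delivery skews away from the eligible audience by more than the
    threshold, up-weight under-served segments for future deliveries."""
    if audience_variance(eligible, delivered, segment_of) <= threshold:
        return weights  # delivery already mirrors the eligible audience
    target = distribution(eligible, segment_of)
    actual = distribution(delivered, segment_of)
    adjusted = dict(weights)
    for seg, share in target.items():
        if actual.get(seg, 0.0) < share:  # segment is being under-served
            adjusted[seg] = adjusted.get(seg, 1.0) * boost
    return adjusted
```

In a real delivery system the adjustment would feed into ad auctions and pacing rather than a simple per-segment multiplier, but the basic feedback loop (measure skew, then nudge delivery back toward the eligible audience) matches what the settlement describes.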
Meta said it would work with HUD over the coming months to incorporate the technology into its ad-targeting systems, and agreed to a third-party audit of the new system’s effectiveness.
The penalty that Meta is paying in the settlement is the maximum available under the Fair Housing Act, the Justice Department said.