Back in 2021, a Facebook user filed a lawsuit because she didn’t think she was getting a fair shot at viewing advertisements. Wanting to see ads might seem absurd — if you’re anything like me, you want ads off your social media experience at all costs. Still, to a 55-year-old prospective tenant in the Washington, D.C. area, it was about more than a simple publicity blurb on Facebook. The exclusion, the plaintiff argued, had grave real-life consequences.
So Neuhtah Opiotennione filed a class-action lawsuit against nine companies that manage various apartment buildings in the D.C. area, alleging that they engaged in “digital housing discrimination” by excluding older people — like her — from viewing advertisements on Facebook. She alleged that because the defendants deliberately excluded people over the age of 50 from viewing their ads — something advertisers could once do on Facebook — she was denied the opportunity to receive certain housing advertisements targeted to younger potential tenants.
“In creating a targeted Facebook advertisement, advertisers can determine who sees their advertisements based on such characteristics as age, gender, location, and preferences,” the lawsuit reads. The plaintiff alleged that rental companies used Facebook’s targeting function to exclude people like her because of her age, instead directing the ads to younger prospective tenants.
David Brody, counsel and senior fellow for privacy and technology at the Lawyers’ Committee for Civil Rights Under Law, which filed a brief in favor of the plaintiff, said in a press release that “Facebook is not giving the user what the user wants – Facebook is giving the user what it thinks a demographic stereotype wants. Redlining is discriminatory and unjust whether it takes place online or offline, and we must not allow corporations to blame technology for harmful decisions made by CEOs.”
The case was ultimately dismissed because the judge ruled that the online targeting of advertisements caused no injury to consumers. However, Ballard Spahr LLP, a law firm that focuses on litigation, securities and regulatory enforcement, business and finance, intellectual property, public finance, and real estate matters, said that the ruling could have a significant impact on how we view discrimination online.
“It seems likely to make it more difficult for private parties to attempt to bring lawsuits related to online ad targeting on social media networks or through methods like paid search,” the firm said. “But, secondarily, we wonder whether it will serve as a barrier to regulatory actions as well.”
Opiotennione v. Bozzuto Mgmt. is just one of many lawsuits alleging discrimination in Facebook advertising. We already know how nefarious these ads can be, from spying on us and collecting our data to deepening already devastating partisan divides. But there’s something else harmful going on with ads online, particularly on one of the largest ad platforms ever, Facebook. According to Facebook’s parent company, Meta, the platform has a total advertising audience of more than two billion people. Any one of them could be missing out on ads — for housing, credit opportunities, and other important issues that impact the wealth gap — due to digital redlining. Here’s why that’s important.
Wait, what is digital redlining?
Traditional redlining is the practice of purposefully withholding loans and other resources from people who live in specific neighborhoods. Those lines tend to fall along racial and financial divides, and the practice works to deepen them. It can happen online, too.
Digital redlining refers to any use of technology to perpetuate discrimination. The Greenlining Institute, a California-based organization that works to fix digital redlining, uses the term to describe internet companies failing to build out service infrastructure — such as broadband internet — for lower-income communities, because doing so is seen as less profitable.
That kind of digital redlining leaves lower-income people turning to prepaid plans and other more expensive options for internet access, while also dealing with slower speeds than those in wealthier — and often whiter — communities, which already have the infrastructure. The Greenlining Institute isn’t the only organization working to fix this kind of digital redlining. The Federal Communications Commission (FCC) is also forming an agency task force focused on combating digital discrimination and promoting equal broadband access nationwide.
But digital redlining also refers to unfair ad-targeting practices. According to the ACLU, online ad-targeting can replicate existing disparities in society, which can exclude people who belong to historically marginalized groups from opportunities for housing, jobs, and credit.
“In today’s digital world, digital redlining has become the new frontier of discrimination, as social media platforms like Facebook and online advertisers have increasingly used personal data to target ads based on race, gender, and other protected traits,” the ACLU said in a press release from January. “This type of online discrimination is harmful and disproportionately impacts people of color, women, and other marginalized groups, yet courts have held that platforms like Facebook and online advertisers can’t be held accountable for withholding ads for jobs, housing, and credit from certain users. Despite agreements to make sweeping changes to its ad platform, digital redlining still persists on Facebook.”
It’s not that digital redlining is more harmful on Facebook than it is on other online platforms, but, as Galen Sherwin, a senior staff attorney, put it, it’s “more prevalent in that Facebook is an industry leader and has such a huge market share here in this space.” Facebook says its algorithm treats everyone equally and the fault lies with its advertisers — the same advertisers that pay Facebook and account for the majority of its revenue.
“The fact that Facebook has offered these tools that not just permit, but invite advertisers to exclude users based on certain characteristics, including their membership in protected classes, is tremendously harmful,” Sherwin said. “And even though there have been some steps to mitigate those harms and to remove the worst or most blatant of the ways in which the platform can operate that way in the housing, employment and credit space, there’s still a really long way to go before that’s eradicated truly from the space.”
Many activists agree that while Facebook has made moves to resolve its ad discrimination problems since a 2016 report from ProPublica, not enough has been done.
How does digital redlining work?
Let’s say a realtor group wants to show ads for its homes only to wealthy people who live in upper-class neighborhoods, or a restaurant wants to show an ad for an upcoming job opening only to specific candidates. When that company chooses a platform like Google or Facebook to push out those ads, it will look for ways to narrow its ad coverage to those specifically targeted groups. Targeting tools on those platforms allow companies to choose who can and cannot see their ads. On Facebook, advertisers can take two general approaches to creating a target audience: specific and broad. Specific targeting produces a smaller potential audience, like parents living in Tucson, Arizona, while broad targeting relies on categories like gender and age.
After many legal battles (we’ll get to those shortly), housing, employment, and credit have been deemed special ad categories. That means they come with restricted targeting options in Facebook’s Ads Manager. A company looking to place ads for housing, employment, or credit can still target an advertisement to a specific audience instead of just sending it out widely, but it can’t do so based on protected characteristics, such as age, gender, or where potential consumers live. At least, that’s the goal.
Facebook wrote in 2019 that “these ads will not allow targeting by age, gender, zip code, multicultural affinity, or any detailed options describing or appearing to relate to protected characteristics,” like race, sex, religion, national origin, physical disability, or sexual orientation and gender identity. Advertisers in these special categories also can’t use lookalike audiences, a way to reach new people likely to be interested in a business because they are similar to that business’s existing customers.
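To make the mechanics concrete, here is a rough sketch of what a targeting spec and the special-ad-category restriction could look like in code. The field and category names (age_min, genders, zips, HOUSING, and so on) are illustrative assumptions loosely modeled on Facebook’s public developer documentation, not a verified implementation.

```python
# Illustrative sketch only. Field and category names are assumptions loosely
# modeled on Facebook's Marketing API docs, not Meta's actual code.

# A "specific" audience: parents in Tucson, Arizona, aged 25 to 45.
specific_targeting = {
    "geo_locations": {"cities": ["Tucson, Arizona"]},
    "age_min": 25,
    "age_max": 45,
    "interests": ["Parenting"],
}

# Targeting options that the special ad categories are supposed to block.
RESTRICTED_FIELDS = {"age_min", "age_max", "genders", "zips", "multicultural_affinity"}
SPECIAL_CATEGORIES = {"HOUSING", "EMPLOYMENT", "CREDIT"}

def validate_targeting(ad_categories: set, targeting: dict) -> dict:
    """Reject restricted targeting options for housing, employment, or credit ads."""
    if ad_categories & SPECIAL_CATEGORIES:
        blocked = RESTRICTED_FIELDS & targeting.keys()
        if blocked:
            raise ValueError(f"Not allowed in special ad categories: {sorted(blocked)}")
    return targeting

validate_targeting(set(), specific_targeting)            # an ordinary retail ad: fine
try:
    validate_targeting({"HOUSING"}, specific_targeting)  # a housing ad: rejected
except ValueError as err:
    print(err)
```

The exact field names don’t matter; the point is that the restriction is a rules check layered on top of a system that was built to slice audiences finely in the first place.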
But is that enough?
There are other aspects of Facebook’s platform that draw scrutiny and concern, despite the work Facebook has done. Research from October 2021 that drew on public voting records in North Carolina analyzed the impact of Facebook’s advertising tools and found discriminatory outcomes.
“This was true for both the Lookalike Audience tool and the Special Ad Audience tool that Facebook designed to explicitly not use sensitive demographic attributes when finding similar users,” the report read.
“If you were to provide Facebook with a set of names of contacts, [like] your client list, it would then target ads to Facebook users that were of a similar profile as your client list. And in engaging in that targeting, there were certain interest metrics that were specifically concerning, and that, from our perspective, would have segregated targeting of those ads,” Williams said. “In our settlement, we agreed to remove a number of those interest factors and simply allow Facebook to proceed with targeting on the basis of [things like] internet usage, but we still have concerns about this.”
Advertisers on Facebook trying to reach audiences in the U.S. with housing, employment, or credit ads can’t use the lookalike feature, but they can create a special ad audience. That’s an audience based on online behavior similarities that doesn’t consider things like age, gender, or zip code. But activists argue there might be some shady ways untrustworthy users can target protected traits within a special ad audience, too. For example, you can create a custom audience by using sources like customer lists, website or app traffic, or engagement on Facebook.
Special ad audiences allow advertisers to give Facebook a seed audience, and then Facebook selects other Facebook users who look like that seed audience. So, advertisers aren’t saying “show this ad to 27-year-old queer people who live in Brooklyn,” they’re saying “show this to people like Christianna Silva” — and Christianna Silva happens to be a 27-year-old queer person living in Brooklyn.
“Obviously, if your seed audience reflects a certain demographic, the matching audience will also reflect that demographic,” Sherwin said. “And while Facebook made some changes to that tool, it did not make significant enough changes, and there have been studies since then that demonstrate that, essentially, the patterns of discriminatory output are unchanged.”
Facebook’s ad-delivery algorithm then chooses which users matching those criteria will actually see the ads, based on predictions drawn from a pile of user data about who they are, where they live, what they like or post, and what groups they join. While this may seem harmless, it can lead to discrimination, because data about who we are, where we live, what we like and post, and what groups we join can stand in for our protected traits.
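A toy simulation, entirely made up for illustration, shows how this plays out. Suppose an advertiser’s seed list skews heavily toward users under 50, and the platform matches new users purely on behavioral signals (the kinds of pages they follow) that happen to correlate with age. The matched audience ends up skewed too, even though age is never consulted. The feature names and correlation strengths below are assumptions, not real Facebook data.

```python
# Toy illustration, not Meta's algorithm: matching on behavior that merely
# correlates with a protected trait reproduces the seed audience's skew.
import random

random.seed(0)

def make_user():
    """Simulate a user whose page-follows correlate with age, a protected trait."""
    over_50 = random.random() < 0.5
    return {
        "over_50": over_50,
        # Assumed correlations, purely for illustration.
        "follows_retirement_pages": random.random() < (0.7 if over_50 else 0.1),
        "follows_student_housing_pages": random.random() < (0.1 if over_50 else 0.6),
    }

population = [make_user() for _ in range(10_000)]

# Seed list: a sample of the advertiser's existing, mostly younger customers.
seed_sample = [u for u in population[:2_000] if not u["over_50"]][:50]

BEHAVIOR_KEYS = ("follows_retirement_pages", "follows_student_housing_pages")

def looks_like_seed(user):
    """Average behavioral overlap with the seed; age itself is never used."""
    avg = sum(
        sum(user[k] == s[k] for k in BEHAVIOR_KEYS) for s in seed_sample
    ) / len(seed_sample)
    return avg > 1.2  # arbitrary "similar enough" threshold

matched = [u for u in population[2_000:] if looks_like_seed(u)]
share_over_50 = sum(u["over_50"] for u in matched) / len(matched)
print(f"Share of matched audience over 50: {share_over_50:.0%}")  # well below the 50% base rate
```

The numbers are fake, but the mechanism is the one Sherwin describes: the demographics of the seed list leak into the matched audience through correlated behavior, without any protected attribute ever being touched.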
Is this legal?
To be clear, targeting ads based on protected traits is illegal. Despite this, a 2021 study of discrimination in job ad delivery on Facebook and LinkedIn, conducted by independent researchers at the University of Southern California, found that Facebook’s ad-delivery system showed different employment ads to women and men, even though the jobs required the same qualifications and the targeting parameters chosen by the advertiser included all genders. This is illegal, but there’s confusion about how Section 230 of the Communications Decency Act, which is designed to shield platforms from liability for content that users post, interacts with civil rights laws when it comes to online ad targeting.
Facebook has been hiding behind Section 230 in its litigation. And while the ACLU mostly supports Section 230 and the protections it affords platforms, its position here is that “it doesn’t protect Facebook from this conduct because Facebook itself was the architect of the targeting tools.”
Changes have been made
To its credit, Facebook has made sweeping changes to its ad-delivery system.
A Meta spokesperson said Facebook has made “significant investments” to help prevent discrimination on its ad platforms. The example the spokesperson offered was that its terms and advertising policies have “long emphasized” that advertisers cannot use the platform to engage in wrongful discrimination. That feels like a pretty weak point, considering that no one ever really reads a single word of the terms and conditions. And, of course, it’s not so much a question of whether the user reads the terms as it is whether Facebook is policing the rules in its own terms. Facebook says it is, but the platform is famously terrible at policing its own rules — just consider the way misinformation continues to spread on the platform.
Advertisers also can’t use interests, demographics, or behaviors for exclusion targeting. Since advertisers self-report whether they’re posting ads about jobs, housing, and the like (obviously not a foolproof system), Facebook also uses human reviewers and machine-learning algorithms to catch ads that are incorrectly categorized. Meta hasn’t disclosed how well this actually works.
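Meta hasn’t described that review pipeline, but the general technique, automatically flagging ad copy that looks like a housing, job, or credit offer so the special-category restrictions kick in, can be sketched with a basic text classifier. What follows is a generic illustration on made-up examples using scikit-learn, not a description of Meta’s actual system.

```python
# Generic illustration of automated ad classification, not Meta's system.
# Flag ad copy that looks like a housing, employment, or credit offer so
# special-category restrictions can be applied even when the advertiser
# didn't self-report correctly.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up training set: 1 = special category (housing/jobs/credit), 0 = other.
ads = [
    ("Spacious 2-bedroom apartment available now, pet friendly", 1),
    ("We're hiring! Sales associate position, apply today", 1),
    ("Get pre-approved for a personal loan in minutes", 1),
    ("Luxury condo for rent near downtown, parking included", 1),
    ("Join our team: part-time barista, flexible hours", 1),
    ("Low-interest credit card with no annual fee", 1),
    ("Try our new triple-cheese pizza, delivered hot", 0),
    ("Flash sale: 30% off all running shoes this weekend", 0),
    ("Stream the new season of your favorite show tonight", 0),
    ("Handmade candles, free shipping on orders over $25", 0),
    ("Book a weekend getaway with our travel deals", 0),
    ("Upgrade your phone plan and get a free pair of earbuds", 0),
]
texts, labels = zip(*ads)

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# An ad self-reported as "not housing" can still be flagged for review
# because its text reads like a rental listing.
new_ad = "Charming studio for rent, utilities included, close to transit"
print(classifier.predict([new_ad])[0])  # expect 1 -> route to special-category review
```

A sketch like this also hints at the obvious weakness: a classifier is only as good as its training data and thresholds, and Meta hasn’t published how often misclassified ads slip through.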
In the U.S., Canada, and the EU, people running housing, employment, or credit ads have to use special advertisement categories with restricted targeting options, including that they aren’t allowed to target by gender, age, or zip code, and must instead target a minimum 15-mile radius around a specific location, the Meta spokesperson said. But Facebook still gives housing providers the ability to target potential renters or homeowners by a radius around a certain place — which, according to the ACLU, is “a clear proxy for race in our still-segregated country.”
Are those changes enough?
The courts have forced Facebook to make plenty of changes. But many activists argue that the steps the company has taken so far have been far too incremental.
In March 2019, Facebook disabled certain targeting features for housing, credit, and job ads after settling several lawsuits, but algorithms still showed ads to statistically distinct demographic groups even following the move. For instance, one 2021 study showed that a Domino’s pizza ad was shown to more men than women, while an ad for the grocery delivery and pick-up service Instacart was shown to more women than men. The same audit also found that employment advertisements for car sales associates were shown to more men than women, while ads for jewelry sales associates were shown to more women than men.
In one lawsuit, which was dismissed, prospective tenants alleged that Facebook’s advertising platform excluded them from receiving housing advertisements because of their protected characteristics.
“While ad classification will never be perfect, we’re always improving our systems to improve our detection and enforcement over time,” the Meta spokesperson said.
In January 2022, Facebook began removing more targeting options related to topics people may perceive as sensitive, such as options referencing causes, organizations, or public figures that relate to health, race or ethnicity, political affiliation, religion, or sexual orientation. That’s because you can make some assumptions about users’ protected traits based on which political, religious, or sexual orientation topics they “like” on Facebook. This applies to all types of ads, according to the Meta spokesperson. Facebook also built a section of its Ad Library that allows users in the U.S. and Canada to search all active housing, employment, and credit opportunity ads by advertiser and the location they’re targeted to, regardless of whether they’re in the advertiser’s intended audience.
“I think making the housing and employment opportunities searchable through the marketplace was one step forward,” Sherwin said. “That takes it out of the advertiser’s hands and puts some control in the hands of the user to affirmatively seek out opportunities rather than relying passively on the Facebook feed.”
Sherwin said it’s an “important step,” but acknowledged that Facebook hasn’t shown “any real appetite to crack the ad delivery algorithm.” After all, advertising accounts for the bulk of Facebook’s revenue: in the three months ending in June 2021, the company made $29 billion from ad sales.
Until Facebook’s appetite changes, much of the work lands upon the shoulders of activists and lawmakers. But, hey, we can always delete our profiles.