FACEBOOK APPROVED AN ISRAELI AD CALLING FOR ASSASSINATION OF PRO-PALESTINE ACTIVIST


After the ad was discovered, digital rights advocates ran an experiment testing the limits of Facebook’s machine-learning moderation.

By Sam Biddle

A SERIES OF advertisements dehumanizing and calling for violence against Palestinians, intended to test Facebook’s content moderation standards, were all approved by the social network, according to materials shared with The Intercept.

The submitted ads, in both Hebrew and Arabic, included flagrant violations of policies for Facebook and its parent company Meta. Some contained violent content directly calling for the murder of Palestinian civilians, like ads demanding a “holocaust for the Palestinians” and calling to wipe out “Gazan women and children and the elderly.” Other posts, like those describing kids from Gaza as “future terrorists” and a reference to “Arab pigs,” contained dehumanizing language.

“The approval of these ads is just the latest in a series of Meta’s failures towards the Palestinian people.”

“The approval of these ads is just the latest in a series of Meta’s failures towards the Palestinian people,” Nadim Nashif, founder of the Palestinian social media research and advocacy group 7amleh, which submitted the test ads, told The Intercept. “Throughout this crisis, we have seen a continued pattern of Meta’s clear bias and discrimination against Palestinians.”

7amleh’s idea to test Facebook’s machine-learning censorship apparatus arose last month, when Nashif discovered an ad on his Facebook feed explicitly calling for the assassination of American activist Paul Larudee, a co-founder of the Free Gaza Movement. Facebook’s automatic translation of the text ad read: “It’s time to assassinate Paul Larudi [sic], the anti-Semitic and ‘human rights’ terrorist from the United States.” Nashif reported the ad to Facebook, and it was taken down.

The ad had been placed by Ad Kan, a right-wing Israeli group founded by former Israel Defense Forces and intelligence officers to combat “anti-Israeli organizations” whose funding comes from purportedly antisemitic sources, according to its website. (Neither Larudee nor Ad Kan immediately responded to requests for comment.)

Calling for the assassination of a political activist is a violation of Facebook’s advertising rules. That the post sponsored by Ad Kan appeared on the platform indicates Facebook approved it despite those rules. The ad likely passed through Facebook’s automated filtering process, based on machine learning, which allows its global advertising business to operate at a rapid clip.

“Our ad review system is designed to review all ads before they go live,” according to a Facebook ad policy overview. As Meta’s human-based moderation, which historically relied almost entirely on outsourced contractor labor, has drawn greater scrutiny and criticism, the company has come to lean more heavily on automated text-scanning software to enforce its speech rules and censorship policies.

While these technologies allow the company to skirt the labor issues associated with human moderators, they also obscure how moderation decisions are made behind secret algorithms.

Last year, an external audit commissioned by Meta found that while the company was routinely using algorithmic censorship to delete Arabic posts, the company had no equivalent algorithm in place to detect “Hebrew hostile speech” like racist rhetoric and violent incitement. Following the audit, Meta claimed it had “launched a Hebrew ‘hostile speech’ classifier to help us proactively detect more violating Hebrew content.” Content, that is, like an ad espousing murder.

Incitement to Violence on Facebook

Amid the Israeli war on Palestinians in Gaza, Nashif was troubled enough by the explicit call in the ad to murder Larudee that he worried similar paid posts might contribute to violence against Palestinians.

Large-scale incitement to violence jumping from social media into the real world is not a mere hypothetical: In 2018, United Nations investigators found violently inflammatory Facebook posts played a “determining role” in Myanmar’s Rohingya genocide. (Last year, another group ran test ads inciting violence against the Rohingya, a project along the same lines as 7amleh’s experiment; in that case, all the ads were also approved.)

The quick removal of the Larudee post didn’t explain how the ad was approved in the first place. In light of assurances from Facebook that safeguards were in place, Nashif and 7amleh, which formally partners with Meta on censorship and free expression issues, were puzzled.

“Meta has a track record of not doing enough to protect marginalized communities.”

Curious if the approval was a fluke, 7amleh created and submitted 19 ads, in both Hebrew and Arabic, with text that deliberately and flagrantly violated company rules. The ads were designed to test the approval process and see whether Meta’s ability to automatically screen violent and racist incitement had improved, even when confronted with unambiguous examples of such incitement.

“We knew from the example of what happened to the Rohingya in Myanmar that Meta has a track record of not doing enough to protect marginalized communities,” Nashif said, “and that their ads manager system was particularly vulnerable.”

Meta appears to have failed 7amleh’s test.

The company’s Community Standards rulebook, which ads are supposed to comply with to be approved, prohibits not just text advocating for violence, but also any dehumanizing statements against people based on their race, ethnicity, religion, or nationality. Despite this, confirmation emails shared with The Intercept show Facebook approved every single ad.

Though 7amleh told The Intercept the organization had no intention to actually run these ads and was going to pull them before they were scheduled to appear, it believes their approval demonstrates the social platform remains fundamentally myopic around non-English speech, languages used by a great majority of its over 4 billion users. (Meta retroactively rejected 7amleh’s Hebrew ads after The Intercept brought them to the company’s attention, but the Arabic versions remain approved within Facebook’s ad system.)

Facebook spokesperson Erin McPike confirmed the ads had been approved accidentally. “Despite our ongoing investments, we know that there will be examples of things we miss or we take down in error, as both machines and people make mistakes,” she said. “That’s why ads can be reviewed multiple times, including once they go live.”


Just days after its own experimental ads were approved, 7amleh discovered an Arabic ad run by a group calling itself “Migrate Now” that called on “Arabs in Judea and Samaria” (the name Israelis, particularly settlers, use to refer to the occupied Palestinian West Bank) to relocate to Jordan.

According to Facebook documentation, automated, software-based screening is the “primary method” used to approve or deny ads. But it’s unclear if the “hostile speech” algorithms used to detect violent or racist posts are also used in the ad approval process. In its official response to last year’s audit, Facebook said its new Hebrew-language classifier would “significantly improve” its ability to handle “major spikes in violating content,” such as around flare-ups of conflict between Israel and Palestine. Based on 7amleh’s experiment, however, this classifier either doesn’t work very well or is for some reason not being used to screen advertisements. (McPike did not answer when asked if the approval of 7amleh’s ads reflected an underlying issue with the hostile speech classifier.)

Either way, according to Nashif, the fact that these ads were approved points to an overall problem: Meta claims it can effectively use machine learning to deter explicit incitement to violence, while it clearly cannot.

“We know that Meta’s Hebrew classifiers are not operating effectively, and we have not seen the company respond to almost any of our concerns,” Nashif said in his statement. “Due to this lack of action, we feel that Meta may hold at least partial responsibility for some of the harm and violence Palestinians are suffering on the ground.”

The approval of the Arabic versions of the ads comes as a particular surprise following a recent report by the Wall Street Journal that Meta had lowered the level of certainty its algorithmic censorship system needed to remove Arabic posts, from 80 percent confidence that the post broke the rules to just 25 percent. In other words, Meta was less sure that the Arabic posts it was suppressing or deleting actually contained policy violations.
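To make the threshold change concrete, here is a minimal illustrative sketch, not Meta’s actual code: the classifier scores, post names, and function are hypothetical, and only the 80 percent and 25 percent thresholds come from the reporting. It shows how lowering a removal threshold sweeps in posts the system is far less certain about.

```python
# Illustrative only: a toy demonstration of how a confidence threshold
# changes which posts an automated system flags for removal.
# The scores and posts below are invented; only the 0.80 and 0.25
# thresholds reflect figures from the Wall Street Journal report.

posts = {
    "post_a": 0.85,  # hypothetical classifier confidence that the post violates policy
    "post_b": 0.40,
    "post_c": 0.20,
}

def flag_for_removal(scores: dict, threshold: float) -> list:
    """Return the posts whose violation confidence meets the threshold."""
    return [post for post, confidence in scores.items() if confidence >= threshold]

# At an 80 percent threshold, only posts the model is quite sure about are flagged.
print(flag_for_removal(posts, 0.80))  # ['post_a']

# At a 25 percent threshold, posts the model is much less certain about are flagged too.
print(flag_for_removal(posts, 0.25))  # ['post_a', 'post_b']
```

Under this kind of setup, dropping the threshold increases removals at the cost of taking down more posts that may not actually break the rules.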

Nashif said, “There have been sustained actions resulting in the silencing of Palestinian voices.”
