Meta Accused of Trying To Discredit Ad Researchers

Thomas Claburn reports via The Register: Meta allegedly tried to discredit university researchers in Brazil who had flagged fraudulent adverts on the social network’s ad platform. Nucleo, a Brazil-based news organization, said it has obtained government documents showing that attorneys representing Meta questioned the credibility of researchers from NetLab, which is part of the Federal University of Rio de Janeiro (UFRJ). NetLab’s research into Meta’s ads contributed to the 2023 decision by Brazil’s National Consumer Secretariat (Senacon) to fine Meta $1.7 million (9.3 million BRL), a penalty that is still being appealed. Meta (then Facebook) was separately fined $1.2 million (6.6 million BRL) in a case related to Cambridge Analytica.

As noted by Nucleo, NetLab’s report showed that Facebook, despite being notified about the issues, had failed to remove more than 1,800 scam ads that fraudulently used the name of a government program that was supposed to assist those in debt. In response to the fine, attorneys from the law firm TozziniFreire, representing Meta, allegedly accused the NetLab team of bias and of failing to involve Meta in the research process. Nucleo says that it obtained the administrative filing through freedom of information requests to Senacon. The documents are said to date from December 26 last year and to be part of the ongoing case against Meta. A spokesperson for NetLab, who asked not to be identified by name due to online harassment directed at the organization’s members, told The Register that the research group was aware of the Nucleo report. “We were kind of surprised to see the account of our work in this law firm document,” the spokesperson said. “We expected to be treated with more fairness for our work. Honestly, it comes at a very bad moment because NetLab particularly, but also Brazilian science in general, is being attacked by far-right groups.”

On Thursday, more than 70 civil society groups, including NetLab, published an open letter decrying Meta’s legal tactics. “This is an attack on scientific research work, and an attempt to intimidate researchers who are performing excellent work in the production of knowledge from empirical analyses that have been fundamental to enriching the public debate on the accountability of social media platforms operating in the country, especially with regard to paid content that causes harm to consumers of these platforms and threatens the future of our democracy,” the letter says. “This kind of attack and intimidation is made even more dangerous by aligning with arguments that, without any evidence, have been used by the far right to discredit a wide range of scientific work, including NetLab’s own.” The claim, allegedly made by Meta’s attorneys, is that the ad biz was “not given the opportunity to appoint a technical assistant and present questions” in the preparation of the NetLab report. This is particularly striking given Meta’s efforts to limit research into its ad platform. A Meta spokesperson told The Register: “We value input from civil society organizations and academic institutions for the context they provide as we constantly work toward improving our services. Meta’s defense filed with the Brazilian Consumer Regulator questioned the use of the NetLab report as legal evidence, since it was produced without giving us prior opportunity to contribute meaningfully, in violation of local legal requirements.”


Mark Zuckerberg Assembles Team of Tech Execs For AI Advisory Council

An anonymous reader quotes a report from Quartz: Mark Zuckerberg has assembled some of his fellow tech chiefs into an advisory council to guide Meta on its artificial intelligence and product developments. The Meta Advisory Group will periodically meet with Meta’s management team, Bloomberg reported. Its members include: Stripe CEO and co-founder Patrick Collison, former GitHub CEO Nat Friedman, Shopify CEO Tobi Lutke, and former Microsoft executive and investor Charlie Songhurst.

“I’ve come to deeply respect this group of people and their achievements in their respective areas, and I’m grateful that they’re willing to share their perspectives with Meta at such an important time as we take on new opportunities with AI and the metaverse,” Zuckerberg wrote in an internal note to Meta employees, according to Bloomberg. The advisory council differs from Meta’s 11-person board of directors because its members are not elected by shareholders, nor do they have a fiduciary duty to Meta, a Meta spokesperson told Bloomberg. The spokesperson said that the men will not be paid for their roles on the advisory council. TechCrunch notes that the council has “only white men on it,” which “differs from Meta’s actual board of directors and its Oversight Board, which is more diverse in gender and racial representation.”

TechCrunch adds: “It’s telling that the AI advisory council is composed entirely of businesspeople and entrepreneurs, not ethicists or anyone with an academic or deep research background. … it’s been proven time and time again that AI isn’t like other products. It’s a risky business, and the consequences of getting it wrong can be far-reaching, particularly for marginalized groups.”


Meta Wants EU Users To Apply For Permission To Opt Out of Data Collection

Meta announced that starting next Wednesday, some Facebook and Instagram users in the European Union will for the first time be able to opt out of sharing first-party data used to serve highly personalized ads, The Wall Street Journal reported. The move marks a big change from Meta’s current business model, where every video and piece of content clicked on its platforms provides a data point for its online advertisers. Ars Technica reports: People “familiar with the matter” told the Journal that Facebook and Instagram users will soon be able to access a form that can be submitted to Meta to object to sweeping data collection. If those requests are approved, Meta will only be able to target ads to those users based on broader categories of data, like age range or general location. This is different from efforts by other major tech companies like Apple and Google, which prompt users to opt in or out of highly personalized ads with the click of a button. Instead, Meta will review objection forms to evaluate reasons provided by individual users to end such data collection before it will approve any opt-outs. It’s unclear on what grounds Meta might deny requests.

A Meta spokesperson told Ars that Meta is not sharing the objection form publicly at this time but that it will be available to EU users in its Help Center starting on April 5. That’s the deadline Meta was given to comply with an Irish regulator’s rulings that it was illegal in the EU for Meta to force Facebook and Instagram users to give consent to data collection when they signed contracts to use the platforms. Meta still plans to appeal those Irish Data Protection Commission (DPC) rulings, believing that its prior contract’s legal basis complies with the EU’s General Data Protection Regulation (GDPR). In the meantime, though, the company must change the legal basis for data collection. Meta announced in a blog post today that it will now argue that it does not need to directly obtain user consent because it has a “legitimate interest” to collect data to operate its social platforms. “We believe that our previous approach was compliant under GDPR, and our appeal on both the substance of the rulings and the fines continues,” Meta’s blog said. “However, this change ensures that we comply with the DPC’s decision.”


Anti-Vaccine Groups Avoid Facebook Bans By Using Emojis

Pizza slices, cupcakes, and carrots are just a few emojis that anti-vaccine activists use to speak in code and continue spreading COVID-19 misinformation on Facebook. Ars Technica reports: Bloomberg reported that Facebook moderators have failed to remove posts shared in anti-vaccine groups and on pages that would ordinarily be considered violating content, if not for the code-speak. One group that Bloomberg reviewed, called “Died Suddenly,” is a meeting ground for anti-vaccine activists supposedly mourning a loved one who died after they got vaccines — which they refer to as having “eaten the cake.” Facebook owner Meta told Bloomberg that “it’s removed more than 27 million pieces of content for violating its COVID-19 misinformation policy, an ongoing process,” but declined to tell Ars whether posts relying on emojis and code-speak were considered in violation of the policy.

According to Facebook community standards, the company says it will “remove misinformation during public health emergencies,” like the pandemic, “when public health authorities conclude that the information is false and likely to directly contribute to the risk of imminent physical harm.” Pages or groups risk being removed if they violate Facebook’s rules or if they “instruct or encourage users to employ code words when discussing vaccines or COVID-19 to evade our detection.” However, the policy remains vague regarding the everyday use of emojis and code words. The only policy that Facebook seems to have on the books directly addressing the improper use of emojis as coded language deals with community standards regarding sexual solicitation. It seems that while anti-vaccine users can expect their emoji-speak to remain unmoderated, anyone using “contextually specific and commonly sexual emojis or emoji strings” does risk having posts removed if moderators determine they are using emojis to ask for or offer sex.

In total, Bloomberg reviewed six anti-vaccine groups created in the past year where Facebook users employ emojis like peaches and apples to suggest people they know have been harmed by vaccines. Meta’s seeming failure to moderate the anti-vaccine emoji-speak suggests that blocking code-speak is not currently a priority. Last year, when the BBC discovered that anti-vaccine groups were using carrots to mask COVID-19 vaccine misinformation, Meta immediately took down the groups identified. However, the BBC reported that soon after, the same groups popped back up, and more recently, Bloomberg reported that some of the groups it tracked seemed to change names frequently, possibly to avoid detection.
