Threads Expands Fediverse Beta, Letting Users See Replies (and Likes) on Other Fediverse Sites like Mastodon

An anonymous Slashdot reader shared this report from The Verge:

Threads will now let people like and see replies to their Threads posts that appear on other federated social media platforms, the company announced on Tuesday.

Previously, if you made a post on Threads that was syndicated to another platform like Mastodon, you wouldn’t be able to see responses to that post while still inside Threads. That meant you’d have to bounce back and forth between the platforms to stay up-to-date on replies… [I]n a screenshot, Meta notes that you can’t reply to replies “yet,” so it sounds like that feature will arrive in the future.
“Threads is Meta’s first app built to be compatible with the fediverse…” according to a Meta blog post. “Our vision is that people using other fediverse-compatible servers will be able to follow and interact with people on Threads without having a Threads profile, and vice versa, connecting communities…” [If you turn on “sharing”…] “Developers can build new types of features and user experiences that can easily plug into other open social networks, accelerating the pace of innovation and experimentation.”
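Threads federates over the ActivityPub protocol, and the cross-platform replies described above travel as ActivityStreams objects whose `inReplyTo` field points back at the original post. A minimal sketch of such a reply object (the actor and post URLs here are hypothetical; real ActivityPub IDs are dereferenceable HTTPS URLs served by each instance):

```python
import json

# Hypothetical post ID for illustration only.
ORIGINAL_POST = "https://threads.net/users/example/post/1"

def make_reply(actor: str, content: str, in_reply_to: str) -> dict:
    """Build a minimal ActivityPub Note replying to another post.

    A federated reply is a plain ActivityStreams object; the
    `inReplyTo` field is what lets the origin server (here, Threads)
    associate it with the original post and display it in-thread.
    """
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Note",
        "attributedTo": actor,
        "content": content,
        "inReplyTo": in_reply_to,
    }

reply = make_reply(
    actor="https://mastodon.social/users/alice",
    content="Replying from Mastodon!",
    in_reply_to=ORIGINAL_POST,
)
print(json.dumps(reply, indent=2))
```

This is a sketch of the data model, not Threads' implementation; actual federation also involves wrapping the Note in a Create activity and delivering it to the recipient server's inbox.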

And this week Instagram/Threads top executive Adam Mosseri posted that Threads is “also expanding the availability of the fediverse beta experience to more than 100 countries” and hopes to “roll it out everywhere soon.”

Read more of this story at Slashdot.

Meta Is Tagging Real Photos As ‘Made With AI,’ Say Photographers

Since May, Meta has been labeling photos created with AI tools on its social networks to help users better identify the content they’re consuming. However, as TechCrunch’s Ivan Mehta reports, the approach has drawn criticism because many photos not created with AI tools have been incorrectly labeled, prompting Meta to reevaluate its labeling strategy to better reflect the actual use of AI in images. From the report: There are plenty of examples of Meta automatically attaching the label to photos that were not created through AI, such as a photo of the Kolkata Knight Riders winning the Indian Premier League cricket tournament. Notably, the label is only visible on the mobile apps, not on the web. Plenty of other photographers have raised concerns over their images being wrongly tagged with the “Made with AI” label. Their point is that simply editing a photo with a tool should not subject it to the label.

Former White House photographer Pete Souza said in an Instagram post that one of his photos was tagged with the new label. Souza told TechCrunch in an email that Adobe changed how its cropping tool works, and that you now have to “flatten the image” before saving it as a JPEG. He suspects this action triggered Meta’s algorithm to attach the label. “What’s annoying is that the post forced me to include the ‘Made with AI’ even though I unchecked it,” Souza told TechCrunch.

Meta would not answer TechCrunch’s questions on the record about Souza’s experience, or about other photographers who said their posts were incorrectly tagged. However, after the story was published, Meta said it is evaluating its approach so that its labels reflect the amount of AI used in an image. “Our intent has always been to help people know when they see content that has been made with AI. We are taking into account recent feedback and continue to evaluate our approach so that our labels reflect the amount of AI used in an image,” a Meta spokesperson told TechCrunch. “For now, Meta provides no separate labels to indicate if a photographer used a tool to clean up their photo, or used AI to create it,” notes TechCrunch. “For users, it might be hard to understand how much AI was involved in a photo.”

“Meta’s label specifies that ‘Generative AI may have been used to create or edit content in this post’ — but only if you tap on the label. Despite this approach, there are plenty of photos on Meta’s platforms that are clearly AI-generated, and Meta’s algorithm hasn’t labeled them.”


The Word ‘Bot’ Is Increasingly Being Used As an Insult On Social Media

The definition of the word “bot” is shifting: it is increasingly used as an insult aimed at someone you know is human, according to researchers who analyzed more than 22 million tweets. The researchers found this shift began around 2017, with left-leaning users more likely to accuse right-leaning users of being bots. “A potential explanation might be that media frequently reported about right-wing bot networks influencing major events like the [2016] US election,” says Dennis Assenmacher at the Leibniz Institute for the Social Sciences in Cologne, Germany. “However, this is just speculation and would need confirmation.”

NewScientist reports: To investigate how users perceive what is or isn’t a bot, Assenmacher and his colleagues looked at how the word “bot” was used on Twitter between 2007 and December 2022 (the social network changed its name to X in 2023, following its purchase by Elon Musk), analyzing the words that appeared next to it in more than 22 million English-language tweets. The team found that before 2017, the word was usually deployed alongside allegations of automated behavior of the type that would traditionally fit the definition of a bot, such as “software,” “script” or “machine.” After that date, the use shifted. “Now, the accusations have become more like an insult, dehumanizing people, insulting them, and using this as a technique to deny their intelligence and deny their right to participate in a conversation,” says Assenmacher. The study has been published in the Proceedings of the Eighteenth International AAAI Conference on Web and Social Media.
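The kind of co-occurrence analysis the researchers describe can be sketched with a simple context-window count: for each tweet mentioning “bot,” tally the words that appear within a few positions of it. This is a toy illustration on made-up examples, not the study’s corpus, window size, or code:

```python
from collections import Counter

def cooccurrences(tweets, target="bot", window=3):
    """Count words appearing within `window` positions of `target`.

    A crude whitespace tokenizer is used for brevity; real text
    analysis would normalize punctuation, handles, and URLs.
    """
    counts = Counter()
    for tweet in tweets:
        words = tweet.lower().split()
        for i, w in enumerate(words):
            if w == target:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(words[lo:i] + words[i + 1:hi])
    return counts

# Toy tweets echoing the shift the study reports: automation
# vocabulary before 2017, insult vocabulary after.
pre_2017 = ["this account is a bot running an automated script"]
post_2017 = ["only a bot would deny that so obviously"]

print(cooccurrences(pre_2017).most_common(3))
print(cooccurrences(post_2017).most_common(3))
```

Comparing the top co-occurring words across time slices is what lets the researchers detect the semantic shift from “software/script/machine” toward insult vocabulary.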


Reddit Grows, Seeks More AI Deals, Plans ‘Award’ Shops, and Gets Sued

Reddit reported its first results since going public in late March. Yahoo Finance reports:

Daily active users increased 37% year over year to 82.7 million. Weekly active unique users rose 40% from the prior year. Total revenue improved 48% to $243 million, nearly doubling the growth rate from the prior quarter, due to strength in advertising. The company delivered adjusted operating profits of $10 million, versus a $50.2 million loss a year ago. [Reddit CEO Steve] Huffman declined to say when the company would be profitable on a net income basis, noting it’s a focus for the management team. Other areas of focus include rolling out a new user interface this year, introducing shopping capabilities, and searching for another artificial intelligence content licensing deal like the one with Google.

Bloomberg notes that Reddit already “has signed licensing agreements worth $203 million in total, with terms ranging from two to three years. The company generated about $20 million from AI content deals last quarter, and expects to bring in more than $60 million by the end of the year.”

And elsewhere Bloomberg writes that Reddit “plans to expand its revenue streams outside of advertising into what Huffman calls the ‘user economy’ — users making money from others on the platform… ”

In the coming months Reddit plans to launch new versions of awards, which are digital gifts users can give to each other, along with other products… Reddit also plans to continue striking data licensing deals with artificial intelligence companies, expanding into international markets and evaluating potential acquisition targets in areas such as search, he said.

Meanwhile, ZDNet notes that this week a Reddit announcement “introduced a new public content policy that lays out a framework for how partners and third parties can access user-posted content on its site.”

The post explains that more and more companies are using unsavory means to access user data in bulk, including Reddit posts. Once a company gets this data, there’s no limit to what it can do with it. Reddit will continue to block “bad actors” that use unauthorized methods to get data, the company says, but it’s taking additional steps to keep users safe from the site’s partners… Reddit still supports using its data for research: It’s creating a new subreddit — r/reddit4researchers — to support these initiatives, and partnering with OpenMined to help improve research. Private data is, however, going to stay private.

If a company wants to use Reddit data for commercial purposes, including advertising or training AI, it will have to pay. Reddit made this clear by saying, “If you’re interested in using Reddit data to power, augment, or enhance your product or service for any commercial purposes, we require a contract.” To be clear, Reddit is still selling users’ data — it’s just making sure that unscrupulous actors have a tougher time accessing that data for free and researchers have an easier time finding what they need.

And finally, there’s some court action, according to the Register. Reddit “was sued by an unhappy advertiser who claims the internet giga-forum sold ads but provided no way to verify that real people were responsible for clicking on them.”

The complaint [PDF] was filed this week in a U.S. federal court in northern California on behalf of LevelFields, a Virginia-based investment research platform that relies on AI. It says the biz booked pay-per-click ads on the discussion site starting in September 2022… That arrangement called for Reddit to use reasonable means to ensure that LevelFields’ ads were delivered to and clicked on by actual people rather than bots and the like. But according to the complaint, Reddit broke that contract…

LevelFields argues that Reddit is in a particularly good position to track click fraud because it’s serving ads on its own site, as opposed to third-party properties where it may have less visibility into network traffic… Nonetheless, LevelFields’ effort to obtain IP address data to verify the ads it was billed for went unfulfilled. The social media site “provided click logs without IP addresses,” the complaint says. “Reddit represented that it was not able to provide IP addresses.”
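IP-level click logs matter for exactly the kind of verification LevelFields says it was denied: even a crude per-IP tally can surface bot-like repeat clicking. A minimal sketch, using a hypothetical log schema of (IP, ad ID) pairs; real ad-fraud detection weighs many more signals, such as timing, user agents, and conversion rates:

```python
from collections import Counter

def flag_suspicious_ips(click_log, threshold=10):
    """Flag IPs whose click count on a single ad exceeds `threshold`.

    click_log: iterable of (ip, ad_id) pairs — a hypothetical
    schema for illustration. Returns the set of flagged IPs.
    """
    per_pair = Counter(click_log)  # clicks per (ip, ad) pair
    return {ip for (ip, ad), n in per_pair.items() if n > threshold}

# Toy log: one IP hammers an ad 25 times, another clicks twice.
# (Addresses drawn from the reserved documentation ranges.)
log = [("203.0.113.5", "ad-1")] * 25 + [("198.51.100.7", "ad-1")] * 2
print(flag_suspicious_ips(log))
```

The point of the example is the dependency: without the IP column in the click logs, this check — and anything more sophisticated built on it — is impossible for the advertiser to run.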

“The plaintiffs aspire to have their claim certified as a class action,” the article adds, along with an interesting statistic: “According to Juniper Research, 22 percent of ad spending last year was lost to click fraud, amounting to $84 billion.”
