TikTok is Banned in China, Notes X User Community – Along With Most US Social Media

Newsweek points out that a Chinese government post arguing the bill is “on the wrong side of fair competition” was flagged by users on X. “TikTok is banned in the People’s Republic of China,” the X community note read. (The BBC reports that “Instead, Chinese users use a similar app, Douyin, which is only available in China and subject to monitoring and censorship by the government.”)

Newsweek adds that China “has also blocked access to YouTube, Facebook, Instagram, and Google services. X itself is also banned — though Chinese diplomats use the microblogging app to deliver Beijing’s messaging to the wider world.”

From the Wall Street Journal:
Among the top concerns for [U.S.] intelligence leaders is that they wouldn’t even necessarily be able to detect a Chinese influence operation if one were taking place [on TikTok] due to the opacity of the platform and how its algorithm surfaces content to users. Such operations, FBI director Christopher Wray said this week in congressional testimony, “are extraordinarily difficult to detect, which is part of what makes the national-security concerns represented by TikTok so significant….”

Critics of the bill include libertarian-leaning lawmakers, such as Sen. Rand Paul (R., Ky.), who have decried it as a form of government censorship. “The Constitution says that you have a First Amendment right to express yourself,” Paul told reporters Thursday. TikTok’s users “express themselves through dancing or whatever else they do on TikTok. You can’t just tell them they can’t do that.” In the House, a bloc of 50 Democrats voted against the bill, citing concerns about curtailing free speech and the impact on people who earn income on the app. Some Senate Democrats have raised similar worries, as well as an interest in looking at a range of social-media issues at rival companies such as Meta Platforms.

“The basic idea should be to put curbs on all social media, not just one,” Sen. Elizabeth Warren (D., Mass.) said Thursday. “If there’s a problem with privacy, with how our children are treated, then we need to curb that behavior wherever it occurs.”
Some context from the Columbia Journalism Review:
Roughly one-third of Americans aged 18-29 regularly get their news from TikTok, the Pew Research Center found in a late 2023 survey. Nearly half of all TikTok users say they regularly get news from the app, a higher percentage than for any other social media platform aside from Twitter.

Almost 40 percent of young adults were turning to TikTok and Instagram rather than traditional search engines for their primary web searches, a Google senior vice president said in mid-2022 — a number that has almost certainly grown since then. Overall, TikTok claims 150 million American users, almost half the US population; two-thirds of Americans aged 18-29 use the app.
Some U.S. politicians believe TikTok “radicalized” some of their supporters “with disinformation or biased reporting,” according to the article.

Meanwhile in the Guardian, a Duke University law professor argues “this saga demands a broader conversation about safeguarding democracy in the digital age.”

The European Union’s newly enacted AI Act provides a blueprint for a more holistic approach, using an evidence- and risk-based system that could classify platforms like TikTok as high-risk AI systems subject to more stringent regulatory oversight, with requirements for transparency, accountability and defenses against misuse.


Instagram and Threads Will Stop Recommending Political Content

In a blog post today, Meta announced that it’ll stop recommending political content across Instagram and Threads unless users explicitly choose to have it recommended to them. The Verge reports: Meta announced that it’s expanding an existing Reels policy, which keeps political content (including posts about social issues) from accounts you don’t follow out of recommended feeds, to cover the company’s Threads and Instagram platforms more broadly. “Our goal is to preserve the ability for people to choose to interact with political content, while respecting each person’s appetite for it,” said Instagram head Adam Mosseri, announcing on Threads that the changes will be applied over the next few weeks. Facebook is also expected to roll out these new controls at a later, undisclosed date.

Users who still want to have content “likely to mention governments, elections, or social topics that affect a group of people and/or society at large” recommended to them can choose to turn off this limitation within their account settings. The changes will apply to public accounts when enabled and only in places where content is being recommended, such as Explore, Reels, in-feed recommendations, and suggested users. The update won’t change how users view content from accounts they choose to follow, so accounts that aren’t eligible to be recommended can still post political content to their followers via their feed and Stories.

For creators, Meta says that “if your account is not eligible to be recommended, none of your content will be recommended regardless of whether or not all of your content goes against our recommendations guidelines.” When these changes do go live, professional accounts on Instagram will be able to use the Account Status feature to check if posting political content is impacting their eligibility for recommendation. Professional accounts can also use Account Status to contest decisions that revoke this eligibility, alongside editing, removing, or pausing politically related posts until the account is eligible to be recommended again.


Bluesky Opens To the Public

An anonymous reader quotes a report from TechCrunch: After almost a year as an invite-only app, Bluesky is now open to the public. Funded by Twitter co-founder Jack Dorsey, Bluesky is one of the more promising micro-blogging platforms that could provide an alternative to Elon Musk’s X. Before opening to the public, the platform had about 3 million sign-ups. Now that anyone can join, the young platform faces a challenge: How can it meaningfully stand up to Threads’ 130 million monthly active users, or even Mastodon’s 1.8 million?

Bluesky looks and functions like Twitter at the outset, but the platform stands out because of what lies under the hood. The company began as a project inside of Twitter that sought to build a decentralized infrastructure called the AT Protocol for social networking. As a decentralized platform, Bluesky’s code is completely open source, which gives people outside of the company transparency into what is being built and how. Developers can even write their own code on top of the AT Protocol, so they can create anything from a custom algorithm to an entirely new social platform.
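To make concrete what “writing code on top of the AT Protocol” looks like, here is a minimal sketch (not from the TechCrunch report) using the official @atproto/api TypeScript client; the handle and app password shown are placeholders.

```typescript
// Minimal sketch of building on the AT Protocol with the official
// @atproto/api client. The handle and app password are placeholders.
import { BskyAgent } from '@atproto/api'

async function main() {
  // Point the agent at the main Bluesky PDS; a self-hosted service URL works too.
  const agent = new BskyAgent({ service: 'https://bsky.social' })

  // Log in with a handle and an app password generated in Bluesky's settings.
  await agent.login({
    identifier: 'example.bsky.social', // placeholder handle
    password: 'app-password-here',     // placeholder app password
  })

  // Publish a post (an app.bsky.feed.post record) to the signed-in account.
  await agent.post({
    text: 'Hello from the AT Protocol',
    createdAt: new Date().toISOString(),
  })

  // Read back the account's most recent posts from its author feed.
  const feed = await agent.getAuthorFeed({ actor: agent.session!.did, limit: 5 })
  for (const item of feed.data.feed) {
    console.log(item.post.uri)
  }
}

main().catch(console.error)
```

Custom feeds and alternative clients are built the same way: against the protocol’s public endpoints rather than against Bluesky’s private internals.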

“What decentralization gets you is the ability to try multiple things in parallel, and so you’re not bottlenecking change on one organization,” Bluesky CEO Jay Graber told TechCrunch. “The way we built Bluesky actually lets anyone insert a change into the product.” This setup gives users more agency to control and curate their social media experience. On a centralized platform like Instagram, for example, users have revolted against algorithm changes that they dislike, but there’s not much they can do to revert or improve upon an undesired app update.


Major Psychologists’ Group Warns of Social Media’s Potential Harm To Kids

For the first time, the American Psychological Association (APA) has issued guidelines for teenagers, parents, teachers and policymakers on how to use social media, with the aim of reducing the rate of depression, anxiety and loneliness in adolescents. NPR reports: The 10 recommendations in the report summarize recent scientific findings and advise actions, primarily by parents, such as monitoring teens’ feeds and training them in social media literacy, even before they begin using these platforms. But some therapists and clinicians say the recommendations place too much of the burden on parents. Implementing this guidance would require cooperation from the tech companies and possibly regulators.

While social media can provide opportunities for staying connected, especially during periods of social isolation like the pandemic, the APA says adolescents should be routinely screened for signs of “problematic social media use.” The APA recommends that parents also closely monitor their children’s social media feeds during early adolescence, roughly ages 10-14. Parents should try to minimize or block their child’s exposure to dangerous content, including posts related to suicide, self-harm, disordered eating, racism and bullying. Studies suggest that exposure to this type of content may promote similar behavior in some youth, the APA notes.

Another key recommendation is to limit the use of social media for comparison, particularly around beauty- or appearance-related content. Research suggests that when kids use social media to pore over their own and others’ appearance online, this is linked with poor body image and depressive symptoms, particularly among girls. As kids age and gain digital literacy skills, they should have more privacy and autonomy in their social media use, but parents should always keep an open dialogue about what they are doing online. The report also cautions parents to monitor their own social media use, citing research showing that adults’ attitudes toward social media and how they use it in front of kids may affect young people.

The APA’s report does contain recommendations that could be picked up by policymakers seeking to regulate the industry. For instance, it recommends the creation of “reporting structures” to identify and remove or deprioritize social media content depicting “illegal or psychologically maladaptive behavior,” such as self-harm, harming others, and disordered eating. It also notes that the design of social media platforms may need to be changed to take into account “youths’ development capabilities,” including features like endless scrolling and recommended content. It suggests that teens should be warned “explicitly and repeatedly” about how their personal data could be stored, shared and used.


Can Consumers Break Free of the Tech Industry’s Hold on Their Messaging History?

The Washington Post reports on “a relatively young app called Beeper that pulls all your chats into one place.” This is significant, the Post argues, because “we’re better off if we have the freedom to pick up our digital lives and move on. Tech companies should feel terrified that you’ll walk if they disappoint you…”

If different people send you messages in Apple’s Messages (a.k.a. iMessage), WhatsApp, LinkedIn and Slack, you don’t have to check multiple apps to read and reply. Maybe the best promise of Beeper is that you can ditch your iPhone or Samsung phone for another company’s device and keep your text messages…

Eric Migicovsky, Beeper’s co-founder, told me that if you’re pulling Apple Messages into Beeper, you need a Mac computer to upload a digital file. All chat apps have different limits on how much history you can access in the app.

There’s also a wait list of about 170,000 people for Beeper. The app is free, but Beeper says it will start charging for a version with extra features.

To put this all in context, the Post’s reporter remembers the hassle of using a cable to transfer a long history of iPhone messages to a new Google Pixel phone, complaining that Apple makes it more difficult than other companies to switch to a different kind of system. “Many of you are happy to live in Apple’s world. Great! But if you want the option to leave at some point, try to limit your use of Apple apps when possible…”

The reporter looks ahead to next year, when the EU “will require large tech companies to make their products compatible with those of competitors” — though it’s not clear how much change that will bring. In the meantime, the existence of a small company like Beeper “gives me hope that we don’t have to rely on the kindness of technology giants to make it easier to move to a different phone or computer system… You deserve the option of a no-hassle tech divorce at a moment’s notice.”


Instagram Co-Founders Launch Personalized News App ‘Artifact’

Artifact, the personalized news reader built by Instagram’s co-founders, is now open to the public, no sign-up required. TechCrunch reports: With today’s launch, Artifact is dropping its waitlist and phone number requirements, introducing the app’s first social feature and adding feedback controls to better personalize the news reading experience, among other changes. […] With today’s launch, Artifact will now give users more visibility into their news reading habits with a newly added stats feature that shows you the categories you’ve read as well as the recent articles you read within those categories, plus the publishers you’ve been reading the most. But it will also group your reading more narrowly by specific topics. In other words, instead of just “tech” or “AI,” you might find you’ve read a lot about the topic “ChatGPT,” specifically.

In time, Artifact’s goal is to provide tools that would allow readers to click a button to show more or less from a given topic to better control, personalize and diversify their feed. In the meantime, however, users can delve into settings to manage their interests by blocking or pausing publishers or selecting and unselecting general interest categories. Also new today is a feature that allows you to upload your contacts in order to see a signal that a particular article is popular in your network. This is slightly different from Twitter’s Top Articles feature, which shows you articles popular with the people you follow, because Artifact’s feature is more privacy-focused.

“It doesn’t tell you who read it. It doesn’t tell you how many of them read it, so it keeps privacy — and we clearly don’t do it with just one read. So you can’t have one contact and like figure out what that one contact is reading … it has to meet a certain minimum threshold,” notes [Instagram co-founder Kevin Systrom]. This way, he adds, the app isn’t driven by what your friends are reading, but it can use that as a signal to highlight items that everyone was reading. In time, the broader goal is to expand the social experience to also include a way to discuss the news articles within Artifact itself. The beta version, limited to testers, offers a Discover feed where users can share articles and like and comment on those shared by others. There’s a bit of a News Feed or even Instagram-like quality to engaging with news in this way, we found.


TikTok Unveils New US-Based ‘Transparency and Accountability Center’

The Verge was part of “a handful” of journalists invited to Los Angeles to tour TikTok’s new “Transparency and Accountability Center…. part of a multi-week press blitz by TikTok to push Project Texas, a novel proposal to the US government that would partition off American user data in lieu of a complete ban.”
TikTok says Project Texas has already taken thousands of people and over $1.5 billion to build. The effort involves TikTok creating a separate legal entity, dubbed USDS, with a board independent of ByteDance that reports directly to the US government. More than seven outside auditors, including Oracle, will review all data that flows in and out of the US version of TikTok. Only American user data will be available to train the algorithm in the US, and TikTok says there will be strict compliance requirements for any internal access to US data. If the proposal is approved by the government, it will cost TikTok an estimated $700 million to $1 billion per year to maintain...
At one point during the tour, I tried asking what would hypothetically happen if, once Project Texas is greenlit, a ByteDance employee in China makes an uncomfortable request to an employee in TikTok’s US entity. I was quickly told by a member of TikTok’s PR team that the question wasn’t appropriate for the tour.

Other notes from the tour:

The journalists weren’t allowed to enter a special server room “housing the app’s source code for outside auditors to review.”

A room that explained TikTok’s algorithm using iMacs running “code simulators” was “frustratingly vague.”

“Despite it being called a transparency center, TikTok’s PR department made everyone agree to not quote or directly attribute comments made by employees leading the tour.”

The Verge ultimately concludes TikTok’s Transparency and Accountability Center is “a lot of smoke and mirrors designed to give the impression that it really cares.”
