Instagram Jumps Into NFTs With Minting and Selling Feature

Meta’s Instagram will soon allow artists to create and sell their own NFTs both right on the social media platform and off it. Axios reports: IG’s feature will roll out to a small group of select creators in the U.S. to start, according to Meta. The first creators tapped to test the feature include photographer Isaac “Drift” Wright, known as DrifterShoots, and artist Amber Vittoria. Meta won’t charge fees for posting or sharing an NFT on IG, though app store fees still apply. Separately, there will be a “professional mode” for Facebook profiles that lets creators build a social media presence separate from their personal one. Artist royalties appear to be a part of the plan.

Minting, or the creating of NFTs, on IG will start on Polygon, a boon for the layer-2 blockchain (a separate blockchain built on top of Ethereum) given the potential onboarding of IG’s billion active users. The price of Polygon’s token MATIC jumped 17% from Wednesday evening to Thursday morning, boosted by the IG news but also because JPMorgan conducted its first live DeFi trade using that blockchain. The platform is adding support for the Solana blockchain and Phantom wallet with the latest feature update, adding them to the list of already-supported wallets such as MetaMask, Coinbase Wallet, Dapper Wallet, Rainbow and Trust Wallet. The Ethereum and Flow blockchains are already supported. Info for selected collections with OpenSea metadata, like collection name and description, will show up on IG.

Read more of this story at Slashdot.

Tumblr Will Now Allow Nudity But Not Explicit Sex

Tumblr has made an update it hinted at in September, changing its rules to allow nudity — but not sexually explicit images — on the platform. The Verge reports: The company updated its community guidelines earlier today, laying out a set of rules that stops short of its earlier permissive attitude toward sexuality but that formally allows a wider range of imagery. “We now welcome a broader range of expression, creativity, and art on Tumblr, including content depicting the human form (yes, that includes the naked human form). So, even if your creations contain nudity, mature subject matter, or sexual themes, you can now share them on Tumblr using the appropriate Community Label,” the post says. “Visual depictions of sexually explicit acts remain off-limits on Tumblr.”

A help center post and the community guidelines offer a little more detail. They say that “text, images, and videos that contain nudity, offensive language, sexual themes, or mature subject matter” are allowed on Tumblr, but “visual depictions of sexually explicit acts (or content with an overt focus on genitalia)” aren’t. There’s an exception for “historically significant art that you may find in a mainstream museum and which depicts sex acts — such as from India’s Sunga Empire,” although it must be labeled with a mature content or “sexual themes” tag so that users can filter it from their dashboards.

“Nudity and other kinds of adult material are generally welcome. We’re not here to judge your art, we just ask that you add a Community Label to your mature content so that people can choose to filter it out of their Dashboard if they prefer,” say the community guidelines. However, users can’t post links or ads to “adult-oriented affiliate networks,” they can’t advertise “escort or erotic services,” and they can’t post content that “promotes pedophilia,” including “sexually suggestive” content with images of children. On December 17th, 2018, Tumblr permanently banned adult content from its platform. The site was owned by Verizon at the time and later sold to WordPress.com owner Automattic, which largely maintained the ban “in large part because internet infrastructure services — like payment processors and Apple’s iOS App Store — typically frown on explicit adult content,” reports The Verge.

Read more of this story at Slashdot.

Behind TikTok’s Boom: A Legion of Traumatized, $10-A-Day Content Moderators

Time magazine teamed up with a London-based nonprofit newsroom called the Bureau of Investigative Journalism on an investigation that reveals that “horrific” videos “are part and parcel of everyday work for TikTok moderators in Colombia.”

They told the Bureau of Investigative Journalism about widespread occupational trauma and inadequate psychological support, demanding or impossible performance targets, punitive salary deductions and extensive surveillance. Their attempts to unionize to secure better conditions have been opposed repeatedly. TikTok’s rapid growth in Latin America — it has an estimated 100 million users in the region — has led to the hiring of hundreds of moderators in Colombia to fight a never-ending battle against disturbing content. They work six days a week on day and night shifts, with some paid as little as 1.2 million pesos ($254) a month, compared to around $2,900 for content moderators based in the U.S….

The nine moderators could only speak anonymously for fear they might lose their jobs, or undermine their future employment prospects…. The TikTok moderation system described by these moderators is built on exacting performance targets. If workers do not get through a huge number of videos, or return late from a break, they can lose out on a monthly bonus worth up to a quarter of their salary. It is easy to lose out on the much-needed extra cash. Álvaro, a current TikTok moderator, has a target of 900 videos per day, with about 15 seconds to view each video. He works from 6am to 3pm, with two hours of break time, and his base salary is 1.2 million pesos ($254) a month, only slightly higher than Colombia’s minimum salary…. He once received a disciplinary notice known internally as an “action form” for only managing to watch 700 videos in a shift, which was considered “work avoidance”. Once a worker has an action form, he says, they cannot receive a bonus that month….

Outsourcing moderation to countries in the global south like Colombia works for businesses because it is cheap, and workers are poorly protected…. For now… TikTok’s low-paid moderators will keep working to their grueling targets, sifting through some of the internet’s most nightmarish content.
The moderators interviewed all had “contractor” status with Paris-based Teleperformance, which last year reported €557 million ($620 million) in profit on €7.1 billion ($8.1 billion) in revenue. In fact, Teleperformance has more than 7,000 content moderators globally, according to stats from Market Research Future, and the moderators interviewed said that besides TikTok, Teleperformance also provided content moderators to Meta, Discord, and Microsoft.

Read more of this story at Slashdot.

Children May Be Losing the Equivalent of One Night’s Sleep a Week From Social Media Use, Study Suggests

Children under 12 may be losing the equivalent of one night’s sleep every week due to excessive social media use, a new study suggests. Insider reports: Almost 70% of the 60 children under 12 surveyed by De Montfort University in Leicester, UK, said they used social media for four hours a day or more. Two thirds said they used social media apps in the two hours before going to bed. The study also found that 12.5% of the children surveyed were waking up in the night to check their notifications.

Psychology lecturer John Shaw, who headed up the study, said children were supposed to sleep for between nine and 11 hours a night, per NHS guidelines, but those surveyed reported sleeping an average of 8.7 hours nightly. He said: “The fear of missing out, which is driven by social media, is directly affecting their sleep. They want to know what their friends are doing, and if you’re not online when something is happening, it means you’re not taking part in it. And it can be a feedback loop. If you are anxious, you are more likely to be on social media, and you are more anxious as a result of that. And you’re looking at something that’s stimulating and delaying sleep.” “TikTok had the most engagement from the children, with 90% of those surveyed saying they used the app,” notes Insider. “Snapchat was used by 84%, while just over half those surveyed said they used Instagram.”

Read more of this story at Slashdot.

TikTok Hits Pause On Its Most Controversial Privacy Update Yet

Early last month, TikTok users across Europe were told that, starting July 13th, the platform would begin using their on-app data to serve up targeted ads, even if those users didn’t consent to the practice. Now, less than a day before that change would have rolled out European Union-wide, it looks like the company’s reconsidering things a bit. Gizmodo reports: A company spokesperson told TechCrunch on Tuesday that TikTok is “pausing” the update while it “engage[s] on the questions from stakeholders,” about the way it handles personalized ads. And needless to say, there are quite a lot of questions about that right now — from data protection authorities in the EU, from lawmakers in the US, and from privacy experts pretty much everywhere.

For context: until this point, European users who opened the TikTok app needed to offer express consent to let the company use their data for targeted ads. This update planned to do away with the need for that pesky consent by relying on a legal basis known as “legitimate interest” to target those ads instead. In a nutshell, the “legitimate interest” clause would let TikTok process people’s data, consent-free, if it was for a purpose that TikTok deemed reasonable. This means the company could say, for example, that because targeted ads bring in more money than their un-targeted equivalent, it would be reasonable to serve all users — consenting or otherwise — targeted ads. Reasonable, right?

Read more of this story at Slashdot.

As TikTok Promises US Servers, FCC Commissioner Remains Critical of Data Privacy

On Tuesday Brendan Carr, a commissioner on America’s Federal Communications Commission, warned on Twitter that TikTok, owned by China-based company ByteDance, “doesn’t just see its users’ dance videos”:
It collects search and browsing histories, keystroke patterns, biometric identifiers, draft messages and metadata, plus it has collected the text, images, and videos that are stored on a device’s clipboard. TikTok’s pattern of misrepresentations coupled with its ownership by an entity beholden to the Chinese Communist Party has resulted in U.S. military branches and national security agencies banning it from government devices…. The CCP has a track record longer than a CVS receipt of conducting business & industrial espionage as well as other actions contrary to U.S. national security, which is what makes it so troubling that personnel in Beijing are accessing this sensitive and personal data.

Today CNN interviewed Carr, while also bringing viewers an update. TikTok’s China-based employees accessed data on U.S. TikTok users, BuzzFeed had reported — after which TikTok announced it intends to move backup data to servers in the U.S., allowing them to eventually delete U.S. data from their servers. But days later Republican Senator Blackburn was still arguing to Bloomberg that “Americans need to know if they are on TikTok, communist China has their information.”

And FCC commissioner Carr told CNN he remains suspicious too:
Carr: For years TikTok has been asked directly by U.S. lawmakers, ‘Is any information, any data, being accessed by personnel back in Beijing?’ And rather than being forthright and saying ‘Yes, and here’s the extent of it and here’s why we don’t think it’s a problem,’ they’ve repeatedly said ‘All U.S. user data is stored in the U.S.,’ leaving people with the impression that there’s no access…. This recent bombshell reporting from BuzzFeed shows at least some of the extent to which massive amounts of data have allegedly been going back to Beijing.

And that’s a problem, and not just a national security problem. But to me it looks like a violation of the terms of the app store, and that’s why I wrote a letter to Google and Apple saying that they should remove TikTok and boot them out of the app store… I’ve left them until July 8th to give me a response, so we’ll see what they say. I look forward to hearing from them. But there’s precedent for this. Before, when applications have taken data surreptitiously and put it in servers in China, or otherwise been used for reasons other than servicing the application itself, they have been booted from the app store. And so I would hope that they would just apply the plain terms of their policy here.

When CNN points out the FCC doesn’t have jurisdiction over social media, Carr notes that, “speaking for myself as one member,” the agency has developed “expertise in terms of understanding how the CCP can effectively take data and infiltrate U.S. communications networks.” And he points out that the issue is also being raised by Congressional hearings and by Republican and Democrat Senators signing joint letters together, so “I’m just one piece of a broader federal effort that’s looking at the very serious risks that come from TikTok.”
Carr: At the end of the day, it functions as a sophisticated surveillance tool that is harvesting vast amounts of data on U.S. users. And I think TikTok should answer point-blank: has any CCP member obtained non-public user data or viewed it? Not to answer with a dodge, and say they’ve never been asked for it or never received a request. Can they say no, no CCP member has ever seen non-public U.S. user data?
Carr’s appearance was followed by an appearance by TikTok’s VP and head of public policy for the Americas. But this afternoon Carr said on Twitter that TikTok’s response contradicted its own past statements:

Today, a TikTok exec said it was “simply false” for me to say that they collect faceprints, browsing history, & keystroke patterns.

Except, I was quoting directly from TikTok’s own disclosures.

TikTok’s concerning pattern of misrepresentations about U.S. user data continues.

Read more of this story at Slashdot.