Shutterstock Launches Generative AI Image Tool

Shutterstock, one of the internet’s biggest sources of stock photos and illustrations, is now offering its customers the option to generate their own AI images. Gizmodo reports: In October, the company announced a partnership with OpenAI, the creator of the wildly popular and controversial DALL-E AI tool. Now, the results of that deal are in beta testing and available to all paying Shutterstock users. The new platform is available in “every language the site offers,” and comes included with customers’ existing licensing packages, according to a press statement from the company. And, according to Gizmodo’s own test, every text prompt you feed Shutterstock’s machine results in four images, ostensibly tailored to your request. At the bottom of the page, the site also suggests “More AI-generated images from the Shutterstock library,” which offer unrelated glimpses into the void.
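
Shutterstock hasn't published how its tool talks to OpenAI, but for readers curious about the plumbing, the sketch below shows roughly what a call to the pre-1.0 openai Python package's DALL-E image endpoint looks like; the prompt text is made up, the API key is a placeholder, and n=4 simply mirrors the four-images-per-prompt behavior Gizmodo observed.

```python
# Illustrative sketch only -- not Shutterstock's actual integration. This is
# the plain DALL-E image endpoint from the pre-1.0 "openai" Python package
# (pip install "openai<1.0"); n=4 mirrors the tool's reported
# four-images-per-prompt behavior.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # placeholder credential

response = openai.Image.create(
    prompt="a lighthouse on a rocky coast at sunset, watercolor style",
    n=4,                  # four candidate images per text prompt
    size="1024x1024",
)
print([item["url"] for item in response["data"]])  # temporary image URLs
```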

In an attempt to pre-empt concerns about copyright law and artistic ethics, Shutterstock has said it uses “datasets licensed from Shutterstock” to train its DALL-E and LG EXAONE-powered AI. The company also says it will compensate artists whose work was used to train those models, through a “Contributor Fund.” That fund “will directly compensate Shutterstock contributors if their IP was used in the development of AI-generative models, like the OpenAI model, through licensing of data from Shutterstock’s library,” the company explains in an FAQ section on its website. “Shutterstock will continue to compensate contributors for the future licensing of AI-generated content through the Shutterstock AI content generation tool,” it adds.

Further, Shutterstock includes a clever caveat in its use guidelines for AI images. “You must not use the generated image to infringe, misappropriate, or violate the intellectual property or other rights of any third party, to generate spam, false, misleading, deceptive, harmful, or violent imagery,” the company notes. And, though I am not a legal expert, this clause would seem to put the onus on the customer to stay out of trouble. If a generated image includes a recognizable bit of trademarked material, or spits out a celebrity’s likeness, it’s on the user of Shutterstock’s tool to notice and avoid republishing the problematic content.

Read more of this story at Slashdot.

FBI Probes Snapchat’s Role In Fentanyl Poisoning Deaths

Federal agencies are questioning Snapchat’s role in the spread and sale of fentanyl-laced pills in the United States as part of a broader probe into the deadly counterfeit drugs crisis. The Los Angeles Times reports: FBI agents and Justice Department attorneys are zeroing in on fentanyl poisoning cases in which sales to young buyers were arranged via Snapchat […]. The agents have interviewed parents of children who died and are working to access their social media accounts to trace the suppliers of the lethal drugs, according to the people. In many cases, subpoenaed records from Snapchat have shown that the teenagers thought they were buying prescription painkillers, but the pill they swallowed was pure fentanyl — a synthetic opioid 100 times more potent than morphine.

On Wednesday, the involvement of technology companies in the ongoing fentanyl crisis will be discussed on Capitol Hill at a House Energy and Commerce Committee roundtable. One of the listed speakers, Laura Marquez-Garrett, an attorney with the Social Media Victims Law Center, said Snapchat will be the focus. “The death of American children by fentanyl poisoning is not a social media issue — it’s a Snapchat issue,” she said. […] While dealers use many social media platforms to advertise their drugs, experts, lawyers and families say Snapchat is the platform of choice for arranging sales. Dealers prefer to use Snapchat because of its encrypted technology and disappearing messages — features that have given the platform an edge over its rivals for fully legitimate reasons and helped it become one of the world’s most popular social media apps for teens.

Former White House drug czar Jim Carroll said drug traffickers are always going to flock to where the young people are. “From everything I have read, I do believe that Snapchat has been more widely used for facilitating drug sales,” than other platforms, said Carroll, who serves on Snap’s safety advisory council and now works for Michael Best Consulting. “I think that’s because of its popularity among the young.” In December, Snap reported 363 million daily active users in its quarterly earnings report. That same month, the National Crime Prevention Council wrote a letter to Atty. Gen. Merrick Garland, urging the Justice Department to investigate Snap and its business practices. “Snapchat has become a digital open-air drug market allowing drug dealers to market and to sell fake pills to unsuspecting tweens and teens,” the letter said. Garland didn’t respond, but federal investigators have started to ask questions, multiple people said. Santa Monica-based Snap, which makes Snapchat, said it has worked with law enforcement for years to clamp down on illegal activity on its platform and has boosted moderation efforts to detect illegal drug sales. Last year, Snap said it removed more than 400,000 user accounts that posted drug-related content.

“We are committed to doing our part to fight the national fentanyl poisoning crisis, which includes using cutting-edge technology to help us proactively find and shut down drug dealers’ accounts,” Rachel Racusen, a Snap spokeswoman, said in an emailed statement.

Read more of this story at Slashdot.

Android 14 Set To Block Certain Outdated Apps From Being Installed

To help reduce the potential for malware, Android 14 will begin fully blocking the installation of apps that target outdated versions of Android. 9to5Google reports: For years now, the guidelines for the Google Play Store have ensured that Android developers keep their apps updated to use the latest features and safety measures of the Android platform. Just this month, the guidelines were updated, requiring newly listed Play Store apps to target Android 12 at a minimum. Up to this point, these minimum API level requirements have only applied to apps that are intended for the Google Play Store. Should a developer wish to create an app for an older version, they can do so and simply ask their users to sideload the APK file manually. Similarly, if an Android app hasn’t been updated since the guidelines changed, the Play Store will continue serving the app to those who have installed it once before.

According to a newly posted code change, Android 14 is set to make API requirements stricter, entirely blocking the installation of outdated apps. This change would block users from sideloading specific APK files and also block app stores from installing those same apps. Initially, Android 14 devices will only block apps that target especially old Android versions. Over time though, the plan is to increase the threshold to Android 6.0 (Marshmallow), with Google having a mechanism to “progressively ramp [it] up.” That said, it will likely still be up to each device maker to decide the threshold for outdated apps or whether to enable it at all. The report notes that it’ll still be possible to install an outdated version of an app “through a command shell, by using a new flag.”
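
The actual platform code isn't quoted in the report, so purely as a mental model, here is a minimal sketch of the gate being described: installation is refused when an APK's targetSdkVersion sits below a device-configured threshold unless an explicit shell-level bypass is used. The threshold constant and function name are illustrative assumptions, not Android's implementation.

```python
# Conceptual sketch only -- not AOSP code. Models the reported behavior:
# APKs targeting an API level below a device-configured threshold are
# refused, unless an explicit shell-level bypass is requested.
MIN_INSTALLABLE_TARGET_SDK = 23  # illustrative; the reported end goal is
                                 # Android 6.0 (Marshmallow), API level 23

def can_install(target_sdk_version: int, shell_bypass: bool = False) -> bool:
    """Return True if an APK with this targetSdkVersion may be installed."""
    if shell_bypass:
        # The report says a new flag, used from a command shell, can still
        # force-install an outdated app.
        return True
    return target_sdk_version >= MIN_INSTALLABLE_TARGET_SDK

print(can_install(19))                      # False: targets Android 4.4 (KitKat)
print(can_install(19, shell_bypass=True))   # True: explicitly bypassed
```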

Read more of this story at Slashdot.

Strava Acquires Fatmap, a 3D Mapping Platform For the Great Outdoors

Strava, the activity tracking and social community platform used by more than 100 million people globally, has acquired Fatmap, a European company that’s building a high-resolution 3D global map platform for the great outdoors. TechCrunch reports: Founded in 2009, Strava has emerged as one of the preeminent activity tracking services, proving particularly popular in the cycling and running fraternities, which use the Strava app to plot routes, converse with fellow athletes, and record all their action for posterity via GPS. The company has increasingly been targeting hikers too, and last year it launched a new trail sports and routes option aimed at walkers, mountain bikers, and trail runners.

Fatmap, for its part, was founded a decade ago, with an initial focus on providing ski resorts with high-resolution digital maps. In the intervening years, the company has worked with various satellite and aerospace companies to bolster its platform with detailed maps incorporating summits, rivers, passes, paths, huts, and more, arming anyone venturing into mountainous terrain with the information they need to know exactly what they’ll encounter before they arrive. With 1.6 million registered users, Fatmap’s mission, ultimately, is to be the Google Maps of the great outdoors, with a premium subscription ($30 / year) unlocking access to extra features such as downloadable maps and route planning in the mobile app.

The long-term goal for Strava is to integrate Fatmap’s core platform into Strava itself, but that will be a resource-intensive endeavor that won’t happen overnight. That is why Strava is working to create a single sign-on (SSO) integration in the near term, meaning that subscribers will be able to access the full Fatmap feature set by logging into the Fatmap app with their Strava credentials (a rough sketch of that kind of login flow follows below). While Strava and Fatmap will remain separate products for now, Strava said that it will decide in the future whether Fatmap will live on as a standalone product once the technical integration has taken place. Terms of the deal were not disclosed. However, TechCrunch suggests the price of this deal “could comfortably be in the 9-digit range” given the $30 million Fatmap had raised in funding, “including a hitherto undisclosed $16.5 million round that it said it closed in early 2020.”
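
Fatmap hasn't detailed how the Strava login will be wired up, but Strava's public OAuth 2.0 authorization-code flow gives a sense of what "signing in with Strava credentials" typically involves for a third-party app; the client ID, secret, and redirect URI below are placeholders.

```python
# Illustrative sketch of Strava's standard OAuth 2.0 authorization-code flow,
# the kind of mechanism a "log in with Strava" SSO integration would build on.
# CLIENT_ID, CLIENT_SECRET, and REDIRECT_URI are placeholders; Fatmap's actual
# implementation has not been made public. Requires: pip install requests
import requests

CLIENT_ID = "your-strava-client-id"
CLIENT_SECRET = "your-strava-client-secret"
REDIRECT_URI = "https://example.com/oauth/callback"

# Step 1: send the user to Strava's consent page; Strava redirects back
# to REDIRECT_URI with a short-lived ?code=... query parameter.
authorize_url = (
    "https://www.strava.com/oauth/authorize"
    f"?client_id={CLIENT_ID}&redirect_uri={REDIRECT_URI}"
    "&response_type=code&scope=read,activity:read"
)
print("Open in a browser:", authorize_url)

def exchange_code(code: str) -> dict:
    """Step 2: trade the authorization code for tokens tied to the Strava account."""
    resp = requests.post(
        "https://www.strava.com/oauth/token",
        data={
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "code": code,
            "grant_type": "authorization_code",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # includes access_token, refresh_token, and athlete info
```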

“It’s clear that the proprietary 3D mapping technology Fatmap had developed would have taken too much time and resources for Strava to replicate itself from scratch, which is why buying Fatmap outright likely made more sense in this instance,” the report added.

Read more of this story at Slashdot.

How Apple’s ‘Reality Pro’ Headset Will Work

An anonymous reader quotes a report from 9to5Mac: Apple’s first AR/VR headset could be unveiled sometime this spring, and rumors continue to offer more information about what Apple has in the works. A wide-ranging new report from Bloomberg now offers a slew of details on Apple’s “Reality Pro” headset, including that the “eye- and hand-tracking capabilities will be a major selling point” for the product. Using external cameras, the headset will be able to analyze the user’s hands, while internal sensors will be used to read the user’s eyes.

The report explains: “The headset will have several external cameras that can analyze a user’s hands, as well as sensors within the gadget’s housing to read eyes. That allows the wearer to control the device by looking at an on-screen item — whether it’s a button, app icon or list entry — to select it. Users will then pinch their thumb and index finger together to activate the task — without the need to hold anything. The approach differs from other headsets, which typically rely on a hand controller.”
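
None of Apple's actual interfaces are public, so the sketch below is a purely hypothetical model of the interaction Bloomberg describes: eye tracking sets a current gaze target, and a camera-detected pinch activates it. Every class and method name here is invented for illustration.

```python
# Hypothetical model of the reported gaze-plus-pinch interaction -- none of
# these names are Apple APIs. Eye tracking sets the current target; a
# camera-detected pinch activates whatever is being looked at, with no
# hand controller involved.
from dataclasses import dataclass
from typing import Optional

@dataclass
class UIItem:
    name: str

    def activate(self) -> None:
        print(f"activated: {self.name}")

class GazePinchController:
    def __init__(self) -> None:
        self.gaze_target: Optional[UIItem] = None

    def on_gaze(self, item: UIItem) -> None:
        # Internal sensors report which on-screen item the user is looking at.
        self.gaze_target = item

    def on_pinch(self) -> None:
        # External cameras detect the thumb-and-index pinch; it acts on the
        # current gaze target rather than on anything held in the hand.
        if self.gaze_target is not None:
            self.gaze_target.activate()

controller = GazePinchController()
controller.on_gaze(UIItem("Photos app icon"))
controller.on_pinch()  # -> activated: Photos app icon
```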

More details on the headset’s hardware include a Digital Crown, similar to the Apple Watch’s, for switching between AR and VR. The VR mode will fully immerse the wearer, but when AR mode is enabled the “content fades back and becomes surrounded by the user’s real environment.” This is reportedly one of the features Apple hopes will be a “highlight of the product.” To address overheating concerns, the Reality Pro headset will use an external battery that “rests in a user’s pocket and connects over a cable.” There will also be a cooling fan to further reduce the likelihood of the headset overheating. “The headset can last about two hours per battery pack,” Bloomberg reports. The battery pack is “roughly the size of two iPhone 14 Pro Maxes stacked on top of each other, or about six inches tall and more than half an inch thick.” Another tidbit from the report is that the headset will be able to serve as an external display for a Mac. “Users will be able to see their Mac’s display in virtual reality but still control the computer with their trackpad or mouse and physical keyboard,” reports Bloomberg. Apple is also “developing technology that will let users type in midair with their hands.”

Additionally, FaceTime on the headset will “realistically render a user’s face and full body in virtual reality.”

A team of more than 1,000 people has reportedly been working on the first version of the device for the past seven years. It’s slated to cost “roughly $3,000” when it debuts sometime this spring.

Read more of this story at Slashdot.

Anti-Plagiarism Service Turnitin Is Building a Tool To Detect ChatGPT-Written Essays

Turnitin, best known for its anti-plagiarism software used by tens of thousands of universities and schools around the world, is building a tool to detect text generated by AI. The Register reports: Turnitin has been quietly building the software for years, ever since the release of GPT-3, Annie Chechitelli, chief product officer, told The Register. The rush to give educators the ability to distinguish text written by humans from text written by computers has become more intense with the launch of GPT-3’s more powerful successor, ChatGPT. As AI continues to progress, universities and schools need to be able to protect academic integrity now more than ever. “Speed matters. We’re hearing from teachers: just give us something,” Chechitelli said. Turnitin hopes to launch its software in the first half of this year. “It’s going to be pretty basic detection at first, and then we’ll throw out subsequent quick releases that will create a workflow that’s more actionable for teachers.” The plan is to make the prototype free for its existing customers as the company collects data and user feedback. “At the beginning, we really just want to help the industry and help educators get their legs under them and feel more confident. And to get as much usage as we can early on; that’s important to make a successful tool. Later on, we’ll determine how we’re going to productize it,” she said.

Turnitin’s VP of AI, Eric Wang, said there are obvious patterns in AI writing that computers can detect. “Even though it feels human-like to us, [machines write using] a fundamentally different mechanism. It’s picking the most probable word in the most probable location, and that’s a very different way of constructing language [compared] to you and I,” he told The Register. […] ChatGPT, however, doesn’t have this kind of flexibility and can only generate new words based on previous sentences, he explained. Turnitin’s detector works by predicting what words AI is more likely to generate in a given text snippet. “It’s very bland statistically. Humans don’t tend to consistently use a high probability word in high probability places, but GPT-3 does, so our detector really cues in on that,” he said.
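
Turnitin hasn't released its detector, but the signal Wang describes (machine text repeatedly taking the model's most probable word in probable places) can be approximated with an off-the-shelf language model. The sketch below uses GPT-2 purely as a stand-in scorer, and any cutoff for what counts as machine-like would be an arbitrary assumption.

```python
# A minimal sketch of probability-based AI-text detection in the spirit of
# what Wang describes -- not Turnitin's actual detector. GPT-2 stands in as
# the scoring model. Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def top_choice_rate(text: str) -> float:
    """Fraction of tokens that were the model's single most probable next word."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    predicted_next = logits[0, :-1].argmax(dim=-1)  # model's top pick at each step
    actual_next = ids[0, 1:]                        # what the text actually used
    return (predicted_next == actual_next).float().mean().item()

sample = "The quick brown fox jumps over the lazy dog and runs into the forest."
print(f"top-choice token rate: {top_choice_rate(sample):.2f}")
# Human prose tends to score lower; text that consistently takes the model's
# most probable word in high-probability places scores higher.
```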

Wang said Turnitin’s detector is based on the same architecture as GPT-3 and described it as a miniature version of the model. “We are in many ways I would [say] fighting fire with fire. There’s a detector component attached to it instead of a generate component. So what it’s doing is it’s reading language in the exact same way GPT-3 reads language, but instead of spitting out more language, it gives us a prediction of whether we think this passage looks like [it’s from] GPT-3.” The company is still deciding how best to present its detector’s results to teachers using the tool. “It’s a difficult challenge. How do you tell an instructor in a small amount of space what they want to see?” Chechitelli said. They might want to see a percentage that shows how much of an essay seems to be AI-written, or they might want confidence levels showing whether the detector’s prediction confidence is low, medium, or high to assess accuracy. “I think there is a major shift in the way we create content and the way we work,” Wang added. “Certainly that extends to the way we learn. We need to be thinking long term about how we teach. How do we learn in a world where this technology exists? I think there is no putting the genie back in the bottle. Any tool that gives visibility to the use of these technologies is going to be valuable because those are the foundational building blocks of trust and transparency.”

Read more of this story at Slashdot.

GameCube and Wii Games Are Now Easier To Play On Xbox Consoles

The new standalone Dolphin emulator will let you play almost any GameCube or Wii game on your Xbox console. Windows Central reports: Dolphin Emulator for UWP first rolled out in beta on December 6, 2022. It has since received a couple of updates, bringing it to version 1.02. The standalone Dolphin emulator can upscale games to 1440p, or you can play titles at their original resolution if you prefer. With mods, you can use HD texture packs to give games a more modern, higher-resolution look. The emulator also supports a broadband adapter, but its usefulness varies greatly depending on the game you want to play online. For example, Mario Kart: Double Dash would require tunnelling software to access online play.

Of course, you can’t just download the Dolphin emulator through the Microsoft Store. The easiest way to install the emulator is by enabling Developer Mode on your Xbox console, though it’s also possible to set it up in retail mode. A computer is needed to configure your Xbox controller and other parts of your system, and you should also have a USB drive handy. It’s possible to run Dolphin Emulator for UWP on older Xbox consoles, such as the Xbox One X, but performance will see a significant drop compared to playing on the Series X or Series S. Modern Vintage Gamer walks through the entire setup process, testing, and “other neat things” in a video on YouTube.

Read more of this story at Slashdot.

Automation Caused More than Half America’s Income Inequality Since 1980, Study Claims

A newly published study co-authored by MIT economist Daron Acemoglu “quantifies the extent to which automation has contributed to income inequality in the U.S.,” reports SciTechDaily, “simply by replacing workers with technology — whether self-checkout machines, call-center systems, assembly-line technology, or other devices.”

Over the last four decades, the income gap between more- and less-educated workers has grown significantly; the study finds that automation accounts for more than half of that increase. “This single one variable … explains 50 to 70 percent of the changes or variation between group inequality from 1980 to about 2016,” Acemoglu says….

Acemoglu and Pascual Restrepo, an assistant professor of economics at Boston University, used U.S. Bureau of Economic Analysis statistics on the extent to which human labor was used in 49 industries from 1987 to 2016, as well as data on machinery and software adopted in that time. The scholars also used data they had previously compiled about the adoption of robots in the U.S. from 1993 to 2014. In previous studies, Acemoglu and Restrepo have found that robots have by themselves replaced a substantial number of workers in the U.S., helped some firms dominate their industries, and contributed to inequality.

At the same time, the scholars used U.S. Census Bureau metrics, including its American Community Survey data, to track worker outcomes during this time for roughly 500 demographic subgroups… By examining the links between changes in business practices and changes in labor market outcomes, the study can estimate what impact automation has had on workers.

Ultimately, Acemoglu and Restrepo conclude that the effects have been profound. Since 1980, for instance, they estimate that automation has reduced the wages of men without a high school degree by 8.8 percent and women without a high school degree by 2.3 percent, adjusted for inflation.

Thanks to long-time Slashdot reader schwit1 for sharing the article.

Read more of this story at Slashdot.