BMW Will Employ Figure’s Humanoid Robot At South Carolina Plant

Figure’s first humanoid robot will be coming to a BMW manufacturing facility in South Carolina. TechCrunch reports: BMW has not disclosed how many Figure 01 models it will deploy initially. Nor do we know precisely what jobs the robot will be tasked with when it starts work. Figure did, however, confirm with TechCrunch that it is beginning with an initial five tasks, which will be rolled out one at a time. While folks in the space have been cavalierly tossing out the term “general purpose” to describe these sorts of systems, it’s important to temper expectations and point out that they will all arrive as single- or multi-purpose systems, growing their skillset over time. Figure CEO Brett Adcock likens the approach to an app store — something that Boston Dynamics currently offers with its Spot robot via SDK.

Likely initial applications include standard manufacturing tasks such as box moving, pick-and-place, and pallet unloading and loading — basically the sort of repetitive tasks for which factory owners claim to have difficulty retaining human workers. Adcock says that Figure expects to ship its first commercial robot within a year, an ambitious timeline even for a company that prides itself on quick turnaround times. The initial batch of applications will be largely determined by Figure’s early partners like BMW. The system will, for instance, likely be working with sheet metal to start. Adcock adds that the company has signed up additional clients, but declined to disclose their names. It seems likely Figure will instead opt to announce each individually to keep the news cycle spinning in the intervening 12 months.

Read more of this story at Slashdot.

Sheryl Sandberg To Exit Meta’s Board After 12 Years

According to Axios, former Meta chief operating officer Sheryl Sandberg plans to leave Meta’s board of directors after holding a seat for the past 12 years. From the report: “With a heart filled with gratitude and a mind filled with memories, I let the Meta board know that I will not stand for reelection this May,” Sandberg wrote in a Facebook post announcing her departure. “After I left my role as COO, I remained on the board to help ensure a successful transition,” Sandberg said. Acknowledging CEO Mark Zuckerberg’s leadership, Sandberg said he and Meta’s current leadership team “have proven beyond a doubt that the Meta business is strong and well-positioned for the future, so this feels like the right time to step away.” “I will always be grateful to Mark for believing in me and for his partnership and friendship; he is a truly once-in-a-generation visionary leader and he is equally amazing as a friend who stays by your side through the good times and the bad,” Sandberg added. She also expressed gratitude to her colleagues and teammates at Meta as well as Meta’s board members.

Sandberg left the company she helped build as an executive in September 2022 after 14 years, but remained on Meta’s board following her departure. In announcing her exit from the board, Sandberg said she aimed to focus on more philanthropic work. Since stepping down as an executive, she has focused more of her time on her women’s leadership philanthropy, Lean In. More recently, Sandberg has also focused on the conversation around rape and sexual violence as a weapon of war, particularly as it pertains to the Israel-Hamas conflict. Axios notes that Meta’s revenue “grew 43,000% from $272 million in 2008 to nearly $118 billion in 2021” under Sandberg’s business leadership.

Read more of this story at Slashdot.

Mobile Device Ambient Light Sensors Can Be Used To Spy On Users

“The ambient light sensors present in most mobile devices can be accessed by software without any special permissions, unlike permissions required for accessing the microphone or the cameras,” writes longtime Slashdot reader BishopBerkeley. “When properly interrogated, the data from the light sensor can reveal much about the user.” IEEE Spectrum reports: While that may not seem to provide much detailed information, researchers have already shown these sensors can detect light intensity changes that can be used to infer what kind of TV programs someone is watching, what websites they are browsing or even keypad entries on a touchscreen. Now, [Yang Liu, a PhD student at MIT] and colleagues have shown in a paper in Science Advances that by cross-referencing data from the ambient light sensor on a tablet with specially tailored videos displayed on the tablet’s screen, it’s possible to generate images of a user’s hands as they interact with the tablet. While the images are low-resolution and currently take impractically long to capture, he says this kind of approach could allow a determined attacker to infer how someone is using the touchscreen on their device. […]

“The acquisition time in minutes is too cumbersome to launch simple and general privacy attacks on a mass scale,” says Lukasz Olejnik, an independent security researcher and consultant who has previously highlighted the security risks posed by ambient light sensors. “However, I would not rule out the significance of targeted collections for tailored operations against chosen targets.” But he also points out that, following his earlier research, the World Wide Web Consortium issued a new standard that limited access to the light sensor API, which has already been adopted by browser vendors.

Liu notes, however, that there are still no blanket restrictions for Android apps. In addition, the researchers discovered that some devices directly log data from the light sensor in a system file that is easily accessible, bypassing the need to go through an API. The team also found that lowering the resolution of the images could bring the acquisition times within practical limits while still maintaining enough detail for basic recognition tasks. Nonetheless, Liu agrees that the approach is too complicated for widespread attacks. And one saving grace is that it is unlikely to ever work on a smartphone as the displays are simply too small. But Liu says their results demonstrate how seemingly harmless combinations of components in mobile devices can lead to surprising security risks.
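For context on how little stands between an app and this sensor data: on Android, the ambient light sensor is exposed through the standard SensorManager API and, unlike the camera or microphone, requires no manifest or runtime permission. Below is a minimal Kotlin sketch (the activity class name and log tag are illustrative, not taken from the paper) showing how an ordinary app can stream timestamped lux readings.

```kotlin
import android.app.Activity
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager
import android.os.Bundle
import android.util.Log

// Hypothetical activity demonstrating that the ambient light sensor can be
// sampled without declaring any permission in AndroidManifest.xml.
class LightSensorDemoActivity : Activity(), SensorEventListener {

    private lateinit var sensorManager: SensorManager
    private var lightSensor: Sensor? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        sensorManager = getSystemService(Context.SENSOR_SERVICE) as SensorManager
        // TYPE_LIGHT reports ambient illuminance in lux; no runtime or
        // install-time permission is required to read it.
        lightSensor = sensorManager.getDefaultSensor(Sensor.TYPE_LIGHT)
    }

    override fun onResume() {
        super.onResume()
        lightSensor?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_FASTEST)
        }
    }

    override fun onPause() {
        super.onPause()
        sensorManager.unregisterListener(this)
    }

    override fun onSensorChanged(event: SensorEvent) {
        // event.values[0] is the illuminance in lux. An app could timestamp
        // and log this stream for later analysis or reconstruction attempts.
        Log.d("LightSensorDemo", "illuminance=${event.values[0]} lx at ${event.timestamp}")
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) {
        // Not needed for this sketch.
    }
}
```

Web pages, by contrast, now sit behind the W3C limits Olejnik mentions, which is why the more concerning access paths described here are native apps and, on some devices, the directly readable system file that logs the sensor data.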

Read more of this story at Slashdot.

Apple Vision Pro Will Launch With 3D Movies From Disney Plus

Apple has announced several new experiences launching with its upcoming Vision Pro spatial computing headset, including 3D content from Disney Plus. “Other apps announced with Vision Pro support include ESPN, MLB, PGA Tour, Max, Discovery Plus, Amazon Prime Video, Paramount Plus, Peacock, Pluto TV, Tubi, Fubo, Crunchyroll, Red Bull TV, IMAX, TikTok, and MUBI,” reports The Verge, noting that Netflix’s existing app “will work unmodified on Apple’s new headset.” From the report: The announcement lists some of the movies that will be in 3D, and naturally, Avatar: The Way of Water is among them. But Vision Pro owners will also get 3D versions of movies like Avengers: Endgame, Star Wars: The Force Awakens, and Encanto. The movies will be available to rent through the Apple TV app, and the company says that anyone who has already bought the movies will now get 3D versions without paying extra. Otherwise, “more titles, including those available exclusively to Disney Plus subscribers, will be announced at a later date.”

Disney Plus subscribers get four screening environments: the Disney Plus Theater, which the company says takes inspiration from Hollywood’s El Capitan Theatre; one based on Pixar’s Monsters, Inc.; one set in the fictional Avengers Tower from Marvel’s Avengers films; and one set in the cockpit of a landspeeder in Star Wars’ Tatooine desert. Besides Disney content, Apple mentioned the Apple TV app will have some free “immersive entertainment” that includes Alicia Keys: Rehearsal Room and a film from the Planet Earth producers called Prehistoric Planet Immersive. The $3,499 Vision Pro headset will start shipping on February 2nd. Pre-orders begin January 19th at 8AM ET.

Read more of this story at Slashdot.

LG Washing Machine Found Sending 3.7 GB of Data a Day

An LG washing machine owner discovered that his smart home appliance was uploading an average of 3.66GB of data daily. “Concerned about the washer’s internet addiction, Johnie forced the device to go cold turkey and blocked it using his router UI,” reports Tom’s Hardware. From the report: Johnie’s initial screenshot showed that on a chosen day, the device uploaded 3.57GB and downloaded about 100MB, and the data traffic was almost constant. Meanwhile, according to the Asus router interface screenshot, the washing machine accounted for just shy of 5% of Johnie’s internet traffic daily. The LG washing machine owner saw the fun in his predicament and joked that the device might use Wi-Fi for “DLCs (Downloadable Laundry Cycles).” He wasn’t entirely kidding: The machine does download presets for various types of apparel. However, the lion’s share of the data transferred was uploaded.

Working through the thread, we note that Johnie also pondered the possibility of someone using his washing machine for crypto mining. “I’d gladly rent our LPU (Laundry Processing Unit) by the hour,” he quipped. Again, there was the glimmer of a possibility that there could be truth behind this joke. Another social media user highlighted a history of hackers taking over LG smart-connected appliances. The HomeHack vulnerability in LG’s SmartThinQ home appliances was patched several weeks after being made public. A similar modern hack might use the washing machine’s computer resources as part of a botnet. Taking control of an LG washing machine as part of a large botnet for cryptocurrency mining or nefarious networking purposes wouldn’t be as far-fetched as it sounds: large numbers of relatively low-power devices can be formidable together. One of the more innocent theories regarding the significant data uploads suggested laundry data was being uploaded to LG so it could improve its LLM (Large Laundry Model) ahead of the launch of its latest “AI washer-dryer combo” at CES, joked Johnie.

Another relatively innocent explanation for the supposed high volume of uploads could be an error in the Asus router firmware. In a follow-up post a day after his initial tweet, Johnie noted “inaccuracy in the ASUS router tool” with regard to Apple iMessage data use, and other LG smart washing machine users shared device data use from their router UIs showing that these appliances more typically use less than 1MB per day. For now, it looks like the favored answer to the data mystery is to blame Asus for misreporting it. We may never know for sure, as Johnie is now running his LG washing machine offline.

Read more of this story at Slashdot.

Google Is No Longer Bringing the Full Chrome Browser To Fuchsia

Google has formally discontinued its efforts to bring the full Chrome browser experience to its Fuchsia operating system. 9to5Google reports: In 2021, we reported that the Chromium team had begun an effort to get the full Chrome/Chromium browser running on Google’s in-house Fuchsia operating system. Months later, in early 2022, we were even able to record a video of the progress, demonstrating that Chromium (the open-source-only variant of Chrome) could work relatively well on a Fuchsia-powered device. This was far from the first time that the Chromium project had been involved with Fuchsia. Google’s full lineup of Nest Hub smart displays is currently powered by Fuchsia under the hood, and those displays have limited web browsing capabilities through an embedded version of the browser.

In contrast to that minimal experience, Google was seemingly working to bring the full might of Chrome to Fuchsia. To observers, this was yet another signal that Google intended for Fuchsia to grow beyond the smart home and serve as a full desktop operating system. After all, what good is a laptop or desktop without a web browser? Fans of the Fuchsia project have anticipated its eventual expansion to desktop since Fuchsia was first shown to run on Google’s Pixelbook hardware. However, in the intervening time — a period that also saw significant layoffs in the Fuchsia division — it seems that Google has shifted Fuchsia in a different direction. The clearest evidence of that move comes from a Chromium code change (and related bug tracker post) published last month declaring that the “Chrome browser on fuchsia won’t be maintained.”

Read more of this story at Slashdot.

OpenAI Unveils Plans For Tackling Abuse Ahead of 2024 Elections

Sara Fischer reports via Axios: ChatGPT maker OpenAI says it’s rolling out new policies and tools meant to combat misinformation and abuse ahead of 2024 elections worldwide. 2024 is one of the biggest election years in history — with high-stakes races in over 50 countries globally. It’s also the first major election cycle where generative AI tools will be widely available to voters, governments and political campaigns. In a statement published Monday, OpenAI said it will lean into verified news and image authenticity programs to ensure users get access to high-quality information throughout elections.

The company will add digital credentials set by a third-party coalition of AI firms that encode details about the origin of images created using its image generator tool, DALL-E 3. The firm says it’s experimenting with a new “provenance classifier” tool that can detect AI-generated images that have been made using DALL-E. It hopes to make that tool available to its first group of testers, including journalists, researchers and other tech platforms, for feedback. OpenAI will continue integrating its ChatGPT platform with real-time news reporting globally, “including attribution and links,” it said. That effort builds on a first-of-its-kind deal announced with German media giant Axel Springer last year that offers ChatGPT users summaries of select global news content from the company’s outlets. In the U.S., OpenAI says it’s working with the nonpartisan National Association of Secretaries of State to direct ChatGPT users to CanIVote.org for authoritative information on U.S. voting and elections.

Because generative AI technology is so new, OpenAI says it’s still working to understand how effective its tools might be for political persuasion. To hedge against abuse, the firm doesn’t allow people to build applications for political campaigning and lobbying, and it doesn’t allow engineers to create chatbots that pretend to be real people, such as candidates. Like most tech firms, it doesn’t allow users to build applications for its ChatGPT platform that deter people from participating in the democratic process.

Read more of this story at Slashdot.

GTA 5 Actor Goes Nuclear On AI Company That Made Voice Chatbot of Him

Rich Stanton reports via PC Gamer: Ned Luke, the actor whose roles include GTA 5’s Michael De Santa, has gone off on an AI company that released an unlicensed voice chatbot based on the character, and succeeded in having the offending bot nuked from the internet. AI company WAME had tweeted a link to its Michael chatbot on January 14 along with the text: “Any GTA fans around here? Now take your gaming experience to another level. Try having a realistic voice conversation with Michael De Santa, the protagonist of GTA 5, right now!”

Unfortunately for WAME, it quickly attracted the attention of Luke, who does not seem like the type of man to hold back. “This is fucking bullshit WAME,” says Luke (though I am definitely hearing all this in Michael’s voice). “Absolutely nothing cool about ripping people off with some lame computer estimation of my voice. Don’t waste your time on this garbage.” Luke also tagged Rockstar Games and the SAG-AFTRA union, and the chatbot and the tweets promoting it have since been deleted. Fellow actors including Roger Clark weighed in with sympathy about how much this kind of stuff sucks, and our boy wasn’t done by a long shot:

“I’m not worried about being replaced, Roger,” says Luke. “I just hate these fuckers, and am pissed as fuck that our shitty union is so damn weak that this will soon be an issue on legit work, not just some lame douchebag tryna make $$ off of our voices.” Luke is here referring to a recent SAG-AFTRA “ethical AI” agreement which has gone down with some of its members like a cup of cold sick. Musician Marilou A. Burnel (not affiliated with WAME) pops up to suggest that “creative people make remarkable things with AI, and the opposite is also true.” “Not using my voice they don’t,” says Luke. WAME issued a statement expressing their “profound understanding and concern.”

“This incident has highlighted the intricate interplay between the advancement of AI technology and the ethical and legal realms,” says WAME. “WAME commits to protecting the rights of voice actors and creators while advancing ethical AI practices.”

Read more of this story at Slashdot.