Walmart US CEO Says Automation At Stores Won’t Displace Workers

An anonymous reader quotes a report from Insider: Walmart will rely increasingly on automation at its stores in the coming years — but that won’t shrink the workforce of the country’s largest private employer, company leaders said during an investor event this week. The Bentonville, Arkansas-based retail giant recently made headlines when it announced that 65% of its stores will be “serviced by automation” by the end of fiscal year 2026. Walmart currently has more than 4,700 stores throughout the US and employs roughly 1.6 million people nationwide.

More specifically, one area where Walmart is seeking to increase investment is market fulfillment centers (MFCs), which are automated fulfillment centers built within, or added to, a store. Walmart piloted this concept at a store in Salem, New Jersey, in 2019, using automated robot technology from Alert Innovation — a robotics company Walmart acquired in October 2022. Since then, Walmart has built MFCs at several stores, such as in Jacksonville, Florida, and Dallas, Texas. Those include “manual MFCs,” where associates pick items for online orders in an area separate from the sales floor.

Walmart will still need at least as many workers to help in stores even as automation picks up, company leaders say. John Furner, Walmart US president and CEO, told investors this week that automation “helps” employees, as it will result in less manual labor. “Over time, we believe we’ll have the same or more associates and a larger business overall,” Furner said. “There will be new roles emerging that are less manual, better designed to serve customers, and pay more.”

Read more of this story at Slashdot.

Researchers Built Sonar Glasses That Track Facial Movements For Silent Communication

A Cornell University researcher has developed sonar glasses that “hear” what you’re mouthing without you speaking a word. Engadget reports: The eyeglass attachment uses tiny microphones and speakers to read the words you mouth as you silently command it to pause or skip a music track, enter a passcode without touching your phone or work on CAD models without a keyboard. Cornell Ph.D. student Ruidong Zhang developed the system, which builds on a similar project the team created using a wireless earbud — and on earlier models that relied on cameras. The glasses form factor removes the need to face a camera or put something in your ear. “Most technology in silent-speech recognition is limited to a select set of predetermined commands and requires the user to face or wear a camera, which is neither practical nor feasible,” said Cheng Zhang, Cornell assistant professor of information science. “We’re moving sonar onto the body.”

The researchers say the system only requires a few minutes of training data (for example, reading a series of numbers) to learn a user’s speech patterns. Then, once it’s ready to work, it sends and receives sound waves across your face, sensing mouth movements while using a deep learning algorithm to analyze echo profiles in real time “with about 95 percent accuracy.” The system does this while offloading data processing (wirelessly) to your smartphone, allowing the accessory to remain small and unobtrusive. The current version offers around 10 hours of battery life for acoustic sensing. Additionally, no data leaves your phone, eliminating privacy concerns. “We’re very excited about this system because it really pushes the field forward on performance and privacy,” said Cheng Zhang. “It’s small, low-power and privacy-sensitive, which are all important features for deploying new, wearable technologies in the real world.” “The team at Cornell’s Smart Computer Interfaces for Future Interactions (SciFi) Lab is exploring commercializing the tech using a Cornell funding program,” adds Engadget. “They’re also looking into smart-glasses applications to track facial, eye and upper body movements.”
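To make the “echo profile” idea a bit more concrete, here is a minimal, hypothetical sketch in Python (NumPy and PyTorch) of how such a pipeline could be structured: each transmitted chirp is cross-correlated with the audio picked up by the microphones to form an echo profile, and a short stack of profiles is fed to a small neural network that predicts a silent command. This is not the Cornell team’s actual model; the sample rate, profile dimensions, chirp, and command set below are invented purely for illustration.

```python
# Hypothetical echo-profile classification sketch (not the Cornell system).
import numpy as np
import torch
import torch.nn as nn

SAMPLE_RATE = 44_100          # assumed speaker/microphone sample rate
PROFILE_LEN = 256             # number of lag bins kept per echo profile
FRAMES = 64                   # echo profiles stacked per sensing window
COMMANDS = ["play", "pause", "skip", "volume_up", "volume_down"]  # toy label set

def echo_profile(tx_chirp: np.ndarray, rx_signal: np.ndarray) -> np.ndarray:
    """Cross-correlate the transmitted chirp with the received signal.

    Peaks in the correlation correspond to echoes arriving at different
    delays, i.e., from surfaces at different distances on the face.
    """
    corr = np.correlate(rx_signal, tx_chirp, mode="valid")
    corr = np.abs(corr[:PROFILE_LEN])
    return corr / (np.max(corr) + 1e-9)   # normalize each profile

class EchoNet(nn.Module):
    """Tiny CNN mapping a (FRAMES x PROFILE_LEN) echo image to a command."""
    def __init__(self, n_classes: int = len(COMMANDS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    # Fake data standing in for one sensing window of mouthed speech.
    profiles = np.stack([
        echo_profile(np.random.randn(512), np.random.randn(512 + PROFILE_LEN))
        for _ in range(FRAMES)
    ])
    x = torch.from_numpy(profiles).float().unsqueeze(0).unsqueeze(0)  # (1,1,F,P)
    logits = EchoNet()(x)
    print("predicted command:", COMMANDS[int(logits.argmax(dim=1))])
```

In a real system the network would be trained on the few minutes of per-user data described above and would run against the phone’s audio stream rather than random noise; the sketch only shows where the echo profiles and classifier fit together.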

A video of the eyeglasses can be viewed here.

Read more of this story at Slashdot.

Crooks Are Using CAN Injection Attacks To Steal Cars

“Thieves have discovered new ways to steal cars by pulling off smart devices (like smart headlights) to get at and attack the Controller Area Network (CAN) bus,” writes longtime Slashdot reader KindMind. The Register reports: A Controller Area Network (CAN) bus is present in nearly all modern cars, and is used by microcontrollers and other devices to talk to each other within the vehicle and carry out the work they are supposed to do. In a CAN injection attack, thieves access the network and introduce bogus messages as if they were from the car’s smart key receiver. These messages effectively cause the security system to unlock the vehicle and disable the engine immobilizer, allowing it to be stolen. To gain this network access, the crooks can, for instance, break open a headlamp and use its connection to the bus to send messages. From that point, they can simply manipulate other devices to steal the vehicle.
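The weakness being exploited here is a property of classic CAN itself: a frame carries only an arbitration ID and up to eight data bytes, with no field that identifies or authenticates the sender, so any node with bus access can transmit under any ID. The sketch below, which uses the python-can library on a purely local virtual bus (no real vehicle is involved, and the ID and payload are made up for illustration), shows why a receiver has no way to tell which node actually sent a frame.

```python
# Illustration of why CAN receivers "simply trust" messages: a classic CAN
# frame carries an arbitration ID and data bytes, but nothing identifying
# or authenticating the sender. Runs on a local virtual bus only; the ID
# and payload below are invented and correspond to no real vehicle.
import can  # pip install python-can

ARBITRATION_ID = 0x123  # hypothetical ID some receiver listens for

def main() -> None:
    # Two independent "nodes" and one receiver, all on the same virtual bus.
    node_a = can.Bus(interface="virtual", channel="demo")
    node_b = can.Bus(interface="virtual", channel="demo")
    receiver = can.Bus(interface="virtual", channel="demo")

    frame = can.Message(arbitration_id=ARBITRATION_ID,
                        data=[0x01, 0x02, 0x03, 0x04],
                        is_extended_id=False)

    node_a.send(frame)   # one node sends the frame
    node_b.send(frame)   # any other node can send the exact same frame

    for _ in range(2):
        msg = receiver.recv(timeout=1.0)
        if msg is None:
            break
        # Nothing in `msg` says which node transmitted it -- only the ID
        # and data travel on the wire, so the receiver must simply trust it.
        print(f"id=0x{msg.arbitration_id:03X} data={msg.data.hex()}")

    for bus in (node_a, node_b, receiver):
        bus.shutdown()

if __name__ == "__main__":
    main()
```

Both received frames look identical to the receiver, which is exactly the trust assumption the next paragraph describes.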

“In most cars on the road today, these internal messages aren’t protected: the receivers simply trust them,” [Ken Tindell, CTO of Canis Automotive Labs] detailed in a technical write-up this week. The discovery followed an investigation by Ian Tabor, a cybersecurity researcher and automotive engineering consultant working for EDAG Engineering Group. It was driven by the theft of Tabor’s RAV4. Leading up to the crime, Tabor noticed the front bumper and arch rim had been pulled off by someone, and the headlight wiring plug removed. The surrounding area was scuffed with screwdriver markings, which, together with the fact the damage was on the kerbside, seemed to rule out damage caused by a passing vehicle. More vandalism was later done to the car: gashes in the paint work, molding clips removed, and malfunctioning headlamps. A few days later, the Toyota was stolen.

Refusing to take the pilfering lying down, Tabor used his experience to try to figure out how the thieves had done the job. The MyT app from Toyota — which among other things allows you to inspect the data logs of your vehicle — helped out. It provided evidence that Electronic Control Units (ECUs) in the RAV4 had detected malfunctions, logged as Diagnostic Trouble Codes (DTCs), before the theft. According to Tindell, “Ian’s car dropped a lot of DTCs.” Various systems had seemingly failed or suffered faults, including the front cameras and the hybrid engine control system. With some further analysis it became clear the ECUs probably hadn’t failed, but communication between them had been lost or disrupted. The common factor was the CAN bus.

Read more of this story at Slashdot.

Mario Is Moving Away From Mobile Games

In an exclusive interview with Variety, legendary video game designer, Nintendo fellow and self-proclaimed “Mario’s mom”, Shigeru Miyamoto, said: “Mobile apps will not be the primary path of future Mario games.” From the report: After two moderately successful but dwindling iOS games, plus another that shuttered after two years, Nintendo is pulling Mario away from the mobile market. Released in 2016, Super Mario Run grossed $60 million in its first year, while 2019’s Mario Kart Tour has generated $300 million (compared to Mario Kart 8’s $3 billion and counting). Without explanation, Nintendo removed 2019’s Dr. Mario World from app markets two years after its release.

“First and foremost, Nintendo’s core strategy is a hardware and software integrated gaming experience,” said Miyamoto, who played a pivotal role in designing the Wii, among other Nintendo consoles. “The intuitiveness of the control is a part of the gaming experience. When we explored the opportunity of making Mario games for the mobile phone — which is a more common, generic device — it was challenging to determine what that game should be. That is why I played the role of director for Super Mario Run, to be able to translate that Nintendo hardware experience into the smart devices.”

Elaborating on the merits of Run and Tour, Miyamoto continued, “Having Mario games as mobile apps expands the doorway for far more audience to experience the game, and also expands the Mario gaming experience, where you only need your thumb on one hand.” Referencing the innovation of the Super Mario Maker series and Super Mario Odyssey, which Miyamoto called “the ultimate evolution of a Mario adventure game on a typical 3D platformer,” the Nintendo exec laid out how the company begins to develop a Mario game: “We try to define what is the gameplay, what is the method, and then define what devices we go on.” When asked when fans can expect the next mainline Mario game, Miyamoto chuckled and said: “All I can say is please stay tuned for future Nintendo Directs.”

Read more of this story at Slashdot.

Sony Worries Microsoft Will Only Give It a ‘Degraded’ Call of Duty

An anonymous reader quotes a report from Ars Technica: Late last month, UK regulators said they no longer believed a proposed Microsoft-owned Activision would bar Call of Duty games from PlayStation platforms, a reversal of earlier preliminary findings. Even if you grant that premise, though, Sony says that it’s still worried Microsoft could give PlayStation owners a “degraded” version of new Call of Duty games in an effort to make the Xbox versions look better.

In a newly published response (PDF) to the UK’s Competition and Markets Authority, Sony says the regulators’ recent turnaround is “surprising, unprecedented, and irrational.” The company takes specific issue with the regulators’ “lifetime value” modeling, which Sony says heavily undervalues what an Xbox-exclusive Call of Duty would be worth to Microsoft. Beyond those technical concerns, though, Sony says it worries that Microsoft might subtly undermine PlayStation “simply by not making it as good as it could be.” That could include small changes to the game’s “performance [or] quality of play,” but also secondary moves to “raise [Call of Duty’s] price [on PlayStation], release the game at a later date, or make it available only on Game Pass.” Microsoft would also “have no incentive to make use of the advanced features in PlayStation not found in Xbox,” Sony says, an apparent reference to the PS5 controller’s advanced haptics and built-in audio capabilities.

In its own newly filed response (PDF), Microsoft reiterated that it has “no intention to withhold or degrade access to Call of Duty or any other Activision content on PlayStation.” That follows on a March filing where Microsoft promised Sony parity on Call of Duty’s “release date, content, features, upgrades, quality, and playability.” But Sony’s response reflects a continued lack of trust in such promises. The company cites detailed analyses from the likes of Digital Foundry in saying that “the technical quality of Modern Warfare II was similar across platforms” in today’s market. After a merger, though, Sony argues that “Microsoft would have different incentives because degrading the experience on PlayStation would benefit Xbox, PlayStation’s ‘closest rival.'” “This kind of ‘partial foreclosure’ strategy might ‘trigger fewer gamer complaints’ than full Xbox exclusivity for Call of Duty, Sony says, while also allowing Microsoft to ‘still secure revenues from sales of Call of Duty on PlayStation for a transitional period,'” reports Ars. “But Sony says the long-term results of this kind of ‘degraded’ PlayStation version would be the same as a full PlayStation ban: Call of Duty players abandoning Sony and moving to Microsoft’s platforms.”

“Such a move would ‘seriously damage our reputation,’ Sony Interactive Entertainment CEO Jim Ryan told the CMA in a recent hearing. ‘Our gamers would desert our platform in droves and network effects would exacerbate the problem. Our business would never recover.'”

Read more of this story at Slashdot.

False Memories Can Form Within Seconds, Study Finds

In a new study, scientists found that it’s possible for people to form false memories of an event within seconds of it occurring. This almost-immediate misremembering seems to be shaped by our expectations of what should happen, the team says. Gizmodo reports: “This study is unique in two ways, in our opinion. First, it explores memory for events that basically just happened, between 0.3 and 3 seconds ago. Intuitively, we would think that these memories are pretty reliable,” lead author Marte Otten, a neuroscientist at the University of Amsterdam, told Gizmodo in an email. “As a second unique feature, we explicitly asked people whether they thought their memories are reliable — so how confident are they about their response?” To test this, the researchers recruited hundreds of volunteers across four experiments to complete a task: the volunteers would briefly view a set of letters and then be asked to recall one highlighted letter immediately afterward. However, some of the letters were reversed in orientation, so the volunteers had to remember whether their selection was mirrored or not. The researchers also focused on volunteers who were highly confident about their choices during the task.

Overall, the participants regularly misremembered the letters, but in a specific way. People were generally good at remembering when a typical letter was shown, with their inaccuracy rates hovering around 10%. But they were substantially worse at remembering a mirrored letter, with inaccuracy rates up to 40% in some experiments. And, interestingly enough, their memory got worse the longer they had to wait before recalling it. When they were asked to recall what they saw a half second later, for instance, they were wrong less than 20% of the time, but when they were asked three seconds later, the rate rose as high as 30%.

According to Otten, the findings — published Wednesday in PLOS One — indicate that our memory starts being shaped almost immediately by our preconceptions. People expect to see a regular letter, so they aren’t easily fooled into remembering one as mirrored. But when the unexpected happens and a mirrored letter appears, they often still default to the regular letter they expected. This bias doesn’t seem to kick in instantaneously, though, since people’s short-term memory was better when they had to be especially quick on their feet. “It is only when memory becomes less reliable through the passage of a tiny bit of time, or the addition of extra visual information, that internal expectations about the world start playing a role,” Otten said.

Read more of this story at Slashdot.