How Apple’s ‘Reality Pro’ Headset Will Work

An anonymous reader quotes a report from 9to5Mac: Apple’s first AR/VR headset could be unveiled sometime this spring, and rumors continue to offer more information about what Apple has in the works. A wide-ranging new report from Bloomberg now offers a slew of details on Apple’s “Reality Pro” headset, including that the “eye- and hand-tracking capabilities will be a major selling point” for the product. Using external cameras, the headset will be able to analyze the user’s hands, while internal sensors will be used to read the user’s eyes.

The report explains: “The headset will have several external cameras that can analyze a user’s hands, as well as sensors within the gadget’s housing to read eyes. That allows the wearer to control the device by looking at an on-screen item — whether it’s a button, app icon or list entry — to select it. Users will then pinch their thumb and index finger together to activate the task — without the need to hold anything. The approach differs from other headsets, which typically rely on a hand controller.”
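The gaze-to-target, pinch-to-activate model the report describes can be sketched as a simple per-frame input loop. Everything below is illustrative: the item layout, the event shape, and the function names are invented, since Apple has published no APIs for the device.

```python
# Hypothetical sketch of gaze-to-select, pinch-to-activate input handling.
# Item bounds and the frame/event structure are invented for illustration.

def select_target(gaze_point, items):
    """Return the name of the item whose on-screen bounds contain the gaze point."""
    gx, gy = gaze_point
    for name, (x, y, w, h) in items.items():
        if x <= gx <= x + w and y <= gy <= y + h:
            return name
    return None

def handle_frame(gaze_point, pinch_detected, items):
    """One input frame: gaze picks the target, a pinch activates it."""
    target = select_target(gaze_point, items)
    if target is not None and pinch_detected:
        return ("activate", target)
    return ("focus", target)

# Two on-screen items with (x, y, width, height) bounds:
items = {"app_icon": (0, 0, 100, 100), "button": (150, 0, 80, 40)}
```

The point of the split is that gaze alone only moves focus; the pinch is the commit gesture, so nothing activates accidentally while the eyes scan.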

More details on the hardware of the headset include that there will be a Digital Crown similar to the Apple Watch for switching between AR and VR. The VR mode will fully immerse the wearer, but when AR mode is enabled the “content fades back and becomes surrounded by the user’s real environment.” This is reportedly one of the features Apple hopes will be a “highlight of the product.” To address overheating concerns, the Reality Pro headset will use an external battery that “rests in a user’s pocket and connects over a cable.” There will also be a cooling fan to further reduce the likelihood of the headset overheating. “The headset can last about two hours per battery pack,” Bloomberg reports. The battery pack is “roughly the size of two iPhone 14 Pro Maxes stacked on top of each other, or about six inches tall and more than half an inch thick.” Another tidbit from the report is that the headset will be able to serve as an external display for Mac. “Users will be able to see their Mac’s display in virtual reality but still control the computer with their trackpad or mouse and physical keyboard,” reports Bloomberg. Apple is also “developing technology that will let users type in midair with their hands.”

Additionally, FaceTime on the headset will “realistically render a user’s face and full body in virtual reality.”

A team of more than 1,000 people has reportedly been working on the first version of the device for the past seven years. It’s slated to cost “roughly $3,000” when it debuts sometime this spring.

Read more of this story at Slashdot.

Anti-Plagiarism Service Turnitin Is Building a Tool To Detect ChatGPT-Written Essays

Turnitin, best known for its anti-plagiarism software used by tens of thousands of universities and schools around the world, is building a tool to detect text generated by AI. The Register reports: Turnitin has been quietly building the software for years ever since the release of GPT-3, Annie Chechitelli, chief product officer, told The Register. The rush to give educators the capability to identify text written by humans and computers has become more intense with the launch of its more powerful successor, ChatGPT. As AI continues to progress, universities and schools need to be able to protect academic integrity now more than ever. “Speed matters. We’re hearing from teachers just give us something,” Chechitelli said. Turnitin hopes to launch its software in the first half of this year. “It’s going to be pretty basic detection at first, and then we’ll throw out subsequent quick releases that will create a workflow that’s more actionable for teachers.” The plan is to make the prototype free for its existing customers as the company collects data and user feedback. “At the beginning, we really just want to help the industry and help educators get their legs under them and feel more confident. And to get as much usage as we can early on; that’s important to make a successful tool. Later on, we’ll determine how we’re going to productize it,” she said.

Turnitin’s VP of AI, Eric Wang, said there are obvious patterns in AI writing that computers can detect. “Even though it feels human-like to us, [machines write using] a fundamentally different mechanism. It’s picking the most probable word in the most probable location, and that’s a very different way of constructing language [compared] to you and I,” he told The Register. […] ChatGPT, however, doesn’t have this kind of flexibility and can only generate new words based on previous sentences, he explained. Turnitin’s detector works by predicting what words AI is more likely to generate in a given text snippet. “It’s very bland statistically. Humans don’t tend to consistently use a high probability word in high probability places, but GPT-3 does, so our detector really cues in on that,” he said.
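Wang’s observation (machine text consistently picks high-probability words in high-probability places) is the statistical cue that perplexity-style detectors key on. Here is a toy sketch of the idea; the bigram probability table and the threshold are invented for illustration, and Turnitin’s real detector is a GPT-3-scale network, not a lookup table.

```python
import math

# Toy illustration of the cue Wang describes: text that consistently uses
# high-probability words scores as "bland" (low average surprisal).
# These bigram probabilities are invented; a real detector would get them
# from a language model similar to the one being detected.

BIGRAM_PROB = {
    ("the", "cat"): 0.20, ("cat", "sat"): 0.30, ("sat", "on"): 0.50,
    ("the", "ocelot"): 0.001, ("ocelot", "perched"): 0.002,
    ("perched", "on"): 0.05,
}
DEFAULT_PROB = 0.01  # fallback probability for unseen word pairs

def avg_surprisal(tokens):
    """Mean negative log-probability per word transition (lower = blander)."""
    logps = [-math.log(BIGRAM_PROB.get((prev, cur), DEFAULT_PROB))
             for prev, cur in zip(tokens, tokens[1:])]
    return sum(logps) / len(logps)

def looks_machine_written(tokens, threshold=2.5):
    """Flag text whose word choices are consistently high-probability."""
    return avg_surprisal(tokens) < threshold
```

A predictable sequence like “the cat sat on” scores well below the threshold, while a low-probability phrasing like “the ocelot perched on” scores far above it, which is the human-like irregularity Wang says GPT-3 lacks.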

Wang said Turnitin’s detector is based on the same architecture as GPT-3 and described it as a miniature version of the model. “We are in many ways I would [say] fighting fire with fire. There’s a detector component attached to it instead of a generate component. So what it’s doing is it’s reading language in the exact same way GPT-3 reads language, but instead of spitting out more language, it gives us a prediction of whether we think this passage looks like [it’s from] GPT-3.” The company is still deciding how best to present its detector’s results to teachers using the tool. “It’s a difficult challenge. How do you tell an instructor in a small amount of space what they want to see?” Chechitelli said. They might want to see a percentage that shows how much of an essay seems to be AI-written, or confidence levels (low, medium, or high) indicating how sure the detector is of its prediction. “I think there is a major shift in the way we create content and the way we work,” Wang added. “Certainly that extends to the way we learn. We need to be thinking long term about how we teach. How do we learn in a world where this technology exists? I think there is no putting the genie back in the bottle. Any tool that gives visibility to the use of these technologies is going to be valuable because those are the foundational building blocks of trust and transparency.”

Read more of this story at Slashdot.

GameCube and Wii Games Are Now Easier To Play On Xbox Consoles

The new standalone Dolphin emulator will let you play almost any GameCube or Wii game on your Xbox console. Windows Central reports: Dolphin Emulator for UWP first rolled out in beta on December 6, 2022. It has since received a couple of updates, bringing it to version 1.02. The standalone Dolphin emulator is capable of upscaling games to up to 1440p. You can also play titles at their original resolution if you prefer. With mods, you can use HD texture packs to make games look more modern and have higher resolution. The emulator also supports a broadband adapter, but the usefulness of that varies greatly depending on the game you want to play online. For example, Mario Kart Double Dash would require tunnelling software to access online play.

Of course, you can’t just download the Dolphin emulator through the Microsoft Store. The easiest way to install the emulator is by enabling Developer Mode on your Xbox console, though it’s also possible to set it up in retail mode. A computer is needed to configure your Xbox controller and other parts of your system, and you should also have a USB drive handy. It’s possible to run Dolphin Emulator for UWP on older Xbox consoles, such as the Xbox One X, but performance will see a significant drop compared to playing on the Series X or Series S. Modern Vintage Gamer walks through the entire setup, testing, and “other neat things” in a video on YouTube.

Read more of this story at Slashdot.

Automation Caused More than Half America’s Income Inequality Since 1980, Study Claims

A newly published study co-authored by MIT economist Daron Acemoglu “quantifies the extent to which automation has contributed to income inequality in the U.S.,” reports SciTechDaily, “simply by replacing workers with technology — whether self-checkout machines, call-center systems, assembly-line technology, or other devices.”

Over the last four decades, the income gap between more- and less-educated workers has grown significantly; the study finds that automation accounts for more than half of that increase. “This single one variable … explains 50 to 70 percent of the changes or variation in between-group inequality from 1980 to about 2016,” Acemoglu says….

Acemoglu and Pascual Restrepo, an assistant professor of economics at Boston University, used U.S. Bureau of Economic Analysis statistics on the extent to which human labor was used in 49 industries from 1987 to 2016, as well as data on machinery and software adopted in that time. The scholars also used data they had previously compiled about the adoption of robots in the U.S. from 1993 to 2014. In previous studies, Acemoglu and Restrepo have found that robots have by themselves replaced a substantial number of workers in the U.S., helped some firms dominate their industries, and contributed to inequality.

At the same time, the scholars used U.S. Census Bureau metrics, including its American Community Survey data, to track worker outcomes during this time for roughly 500 demographic subgroups… By examining the links between changes in business practices alongside changes in labor market outcomes, the study can estimate what impact automation has had on workers.

Ultimately, Acemoglu and Restrepo conclude that the effects have been profound. Since 1980, for instance, they estimate that automation has reduced the wages of men without a high school degree by 8.8 percent and women without a high school degree by 2.3 percent, adjusted for inflation.

Thanks to long-time Slashdot reader schwit1 for sharing the article.

Read more of this story at Slashdot.

Hobbyist’s Experiment Creates a Self-Soldering Circuit Board

Long-time Slashdot reader wonkavader found a video on YouTube where, at the 2:50 mark, there’s time-lapse footage of soldering paste magically melting into place. The secret?
Many circuit boards include a grounded plane as a layer. This doesn’t have to be a big unbroken expanse of copper — it can be a long snake to reduce the copper used. Well, if you run 9 volts through that long snake, it acts as a resistor and heats up the board enough to melt solder paste. Electronics engineer Carl Bugeja has made a board which controls the 9 volt input to keep the temperature on the desired curve for the solder.
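The control loop described above, modulating the 9 V input to keep the board on a reflow temperature curve, can be sketched with a toy thermal model. All constants here (trace resistance, thermal gain, cooling rate, profile target) are illustrative guesses rather than figures from Bugeja’s design, and the regulation shown is simple bang-bang control, not necessarily what his board uses.

```python
# Minimal sketch of trace-as-heater reflow control, assuming a first-order
# thermal model and on/off (bang-bang) regulation of the 9 V supply.
# Every constant below is an assumed value for illustration only.

V_SUPPLY = 9.0    # volts applied across the ground-plane trace
R_TRACE = 1.8     # ohms, assumed resistance of the serpentine trace
HEAT_GAIN = 0.05  # deg C per joule, assumed board thermal response
COOL_RATE = 0.01  # fraction of excess-over-ambient lost per second
AMBIENT = 25.0    # deg C

def step(temp, heater_on, dt=1.0):
    """Advance the toy thermal model by dt seconds."""
    if heater_on:
        power = V_SUPPLY ** 2 / R_TRACE  # P = V^2 / R, 45 W with these values
        temp += power * HEAT_GAIN * dt
    temp -= (temp - AMBIENT) * COOL_RATE * dt  # passive cooling toward ambient
    return temp

def run_profile(setpoints):
    """Bang-bang control: heater on whenever the board is below the setpoint."""
    temp = AMBIENT
    history = []
    for target in setpoints:
        temp = step(temp, heater_on=(temp < target))
        history.append(temp)
    return history
```

With these assumed constants the board climbs to a 200 °C setpoint in a few minutes and then oscillates tightly around it as the supply switches on and off, which is the basic behavior a reflow controller needs before adding a proper multi-stage profile (preheat, soak, reflow, cooldown).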

This is an interesting home-brew project which seems like it might someday become a standard, welcome feature in kits.

Hackaday is impressed by the possibilities too:
Surface mount components have been a game changer for the electronics hobbyist, but doing reflow soldering right requires some way to evenly heat the board. You don’t need to buy a commercial reflow oven — you can cobble one together from an old toaster oven, after all — but you still need something, because it’s not like a PCB is going to solder itself. Right?

Wrong. At least if you’re Carl Bugeja, who came up with a clever way to make his PCBs self-soldering…. The quality of the soldering seems very similar to what you’d see from a reflow oven…. After soldering, the now-useless heating element is converted into a ground plane for the circuit by breaking off the terminals and soldering on a couple of zero ohm resistors to short the coil to ground.

It’s an open source project, with all files available on GitHub. “This is really clever,” tweeted Adrian Bowyer, inventor of the open source 3D printer the RepRap Project.

In the video Bugeja compares reflow soldering to pizza-making. (If the circuit board is the underlying dough, then the electronics on top are the toppings, with the solder paste representing the sauce that keeps them in place. “The oven’s heat is what bonds these individual items together.”)

But by that logic making a self-soldering circuit is “like putting the oven in the dough and making it edible.”

Read more of this story at Slashdot.

Boston Dynamics’ Latest Atlas Video Demos a Robot That Can Run, Jump and Now Grab and Throw

Boston Dynamics released a demo of its humanoid robot Atlas, showing it pick up and deliver a bag of tools to a construction worker. While Atlas could already run and jump over complex terrain, the new hands, or rudimentary grippers, “give the robot new life,” reports TechCrunch. From the report: The claw-like gripper consists of one fixed finger and one moving finger. Boston Dynamics says the grippers were designed for heavy lifting tasks and were first demonstrated in a Super Bowl commercial where Atlas held a keg over its head. The videos released today show the grippers picking up construction lumber and a nylon tool bag. Next, Atlas picks up a 2×8 and places it between two boxes to form a bridge. Atlas then picks up a bag of tools and dashes over the bridge and through construction scaffolding. But the tool bag needs to go to the second level of the structure — something Atlas apparently realizes, quickly throwing the bag a considerable distance. Boston Dynamics describes this final maneuver: “Atlas’ concluding move, an inverted 540-degree, multi-axis flip, adds asymmetry to the robot’s movement, making it a much more difficult skill than previously performed parkour.” A behind-the-scenes video describing how Atlas is able to recognize and interact with objects is also available on YouTube.

Read more of this story at Slashdot.

Intel, AMD Just Created a Headache for Datacenters

An anonymous reader shares a report: In pursuit of ever-higher compute density, chipmakers are juicing their chips with more and more power, and according to the Uptime Institute, this could spell trouble for many legacy datacenters ill equipped to handle new, higher wattage systems. AMD’s Epyc 4 Genoa server processors announced late last year, and Intel’s long-awaited fourth-gen Xeon Scalable silicon released earlier this month, are the duo’s most powerful and power-hungry chips to date, sucking down 400W and 350W respectively, at least at the upper end of the product stack. The higher TDPs arrive in lockstep with higher core counts and clock speeds than either vendor’s previous CPUs.

It’s now possible to cram more than 192 x64 cores into your typical 2U dual socket system, something that just five years ago would have required at least three nodes. However, as Uptime noted, many legacy datacenters were not designed to accommodate systems this power dense. A single dual-socket system from either vendor can easily exceed a kilowatt, and depending on the kinds of accelerators being deployed in these systems, boxen can consume well in excess of that figure. The rapid trend towards hotter, more power dense systems upends decades-old assumptions about datacenter capacity planning, according to Uptime, which added: “This trend will soon reach a point when it starts to destabilize existing facility design assumptions.”

A typical rack remains under 10kW of design capacity, the analysts note. But with modern systems trending toward higher compute density and by extension power density, that’s no longer adequate. While Uptime notes that for new builds, datacenter operators can optimize for higher rack power densities, they still need to account for 10 to 15 years of headroom. As a result, datacenter operators must speculate as to long-term power and cooling demands, which invites the risk of under- or over-building. With that said, Uptime estimates that within a few years a quarter of a rack will reach 10kW of consumption. That works out to approximately 1kW per rack unit for a standard 42U rack.
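The rack arithmetic quoted above checks out back-of-the-envelope. The CPU wattages below come from the article; the 300 W allowance for the rest of a dual-socket system (memory, storage, fans, conversion losses) is an assumed figure for illustration.

```python
# Back-of-the-envelope check of the Uptime figures: a quarter of a standard
# 42U rack drawing 10 kW works out to roughly 1 kW per rack unit.

RACK_UNITS = 42
QUARTER_RACK_KW = 10.0

kw_per_unit = QUARTER_RACK_KW / (RACK_UNITS / 4)  # ~0.95 kW per U

# A 2U dual-socket box with two 400 W Genoa-class CPUs plus an assumed
# 300 W for the rest of the system already exceeds a kilowatt:
dual_socket_watts = 2 * 400 + 300  # 1100 W

# How many such 2U systems fit under a legacy 10 kW rack budget?
systems_per_10kw = 10_000 // dual_socket_watts  # 9 systems, filling only 18U
```

The last line is the legacy-datacenter problem in miniature: the power budget is exhausted with more than half the rack still physically empty.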

Read more of this story at Slashdot.

First Small Modular Nuclear Reactor Certified For Use In US

The U.S. Nuclear Regulatory Commission has certified the design for what will be the United States’ first small modular nuclear reactor. The Associated Press reports: The rule that certifies the design was published Thursday in the Federal Register. It means that companies seeking to build and operate a nuclear power plant can pick the design for a 50-megawatt, advanced light-water small modular nuclear reactor by Oregon-based NuScale Power and apply to the NRC for a license. It’s the final determination that the design is acceptable for use, so it can’t be legally challenged during the licensing process when someone applies to build and operate a nuclear power plant, NRC spokesperson Scott Burnell said Friday. The rule becomes effective in late February.

The U.S. Energy Department said the newly approved design “equips the nation with a new clean power source to help drive down” planet-warming greenhouse gas emissions. It’s the seventh nuclear reactor design cleared for use in the United States. The rest are for traditional, large, light-water reactors. Diane Hughes, NuScale’s vice president of marketing and communications, said the design certification is a historic step forward toward a clean energy future and makes the company’s VOYGR power plant a near-term deployable solution for customers. The first small modular reactor design application package included over 2 million pages of supporting materials, Hughes added. “NuScale has also applied to the NRC for approval of a larger design, at 77 megawatts per module, and the agency is checking the application for completeness before starting a full review,” adds the report.

Read more of this story at Slashdot.