Can a ‘Virtual’ Manual Transmission Bring the Stick Shift to Electric Cars?

Lexus is apparently working on a “virtual” manual transmission, reports The Verge, “to find out if the stick shift can survive the electric revolution…”
British car enthusiast publication Evo reported this week that Lexus, which now leads Toyota’s high-performance EV efforts, is developing a kind of shifting system that mimics the feel of a clutch and a stick shift in an electric car. Of course, it comes without the traditional mechanical connections for such a transmission because an EV doesn’t need those things, but it mimics the motions involved with three-pedal driving. The company has even been showing it off on a special version of the Lexus UX 300e, an electric crossover not sold in the U.S.

Evo reports the “transmission” has an unconnected gear stick and clutch coupled to the electric powertrain, with fake internal combustion sounds and software that augments the electric torque output. In other words, it’s a full-on pretend manual in an EV, complete with the “vroom vroom” sounds…. If this electric transformation really happens, being an enthusiast in the future could mean paying big bucks to simulate the things that got lost along the way.
The Verge’s headline puts it less charitably. (“Lexus could save the stick shift for EVs, if drivers are willing to pretend.”)

But Evo writes that Toyota’s ultimate goal is “making EVs more engaging to drive,” noting it’s also equipped with haptic drivers “to generate ‘feel.'”
Clumsy shifts will be accurately translated; you’ll even be able to stall it. Toyota says it’ll be able to theoretically recreate any engine and transmission combination through both sound and torque deliveries from the powertrain…. Takashi Watanabe, Lexus Electrified Chief Engineer, explained: “It is a software-based system, so it can be programmed to reproduce the driving experience of different vehicle types, letting the driver choose their preferred mapping….”
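Toyota’s description of a programmable, software-based powertrain map can be sketched in miniature. The function below is a toy illustration, not Lexus’s actual software: every number in it (gear ratio, stall threshold, redline, peak torque) is invented for the example.

```python
def virtual_powertrain_torque(accel_pedal, clutch_pedal, gear, wheel_rpm,
                              gear_ratios, stall_rpm=800.0, redline=7000.0):
    """Toy sketch of a software 'manual': map pedal positions and a virtual
    gear to a torque request for the electric motor. Pedals are 0.0-1.0.
    Returns (torque_nm, stalled)."""
    if gear == 0:  # neutral: no drive torque, no stall
        return 0.0, False
    engine_rpm = wheel_rpm * gear_ratios[gear]  # virtual engine speed
    # Clutch engaged at low revs with no throttle: simulate a stall
    if clutch_pedal < 0.5 and engine_rpm < stall_rpm and accel_pedal < 0.2:
        return 0.0, True
    # Torque tapers toward the virtual redline, like a combustion engine
    headroom = max(0.0, 1.0 - engine_rpm / redline)
    torque = accel_pedal * (1.0 - clutch_pedal) * headroom * 300.0
    return torque, False
```

A real implementation would also drive the fake engine audio and the haptic actuators from the same virtual engine speed, which is what lets one software map stand in for “any engine and transmission combination.”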

The sound being created from this sort of system is bound to only get better too, as other factors like vibrations through the cabin could be recreated by motors in the seats. This is a system used in BMW’s latest high-end Bowers & Wilkins sound systems, which use vibrating motors in the seats to create more depth to the bass coming from its speakers…. It might not be the real thing, but in a future where we don’t have a choice on the matter and have to drive an EV, it might be the next best thing…

Read more of this story at Slashdot.

Apple Sued By Stalking Victims Over Alleged AirTag Tracking

schwit1 shares a report from Popular Science: [T]wo women filed a potential class action lawsuit against Apple, alleging the company has ignored critics’ and security experts’ repeated warnings that its AirTag devices are being used to stalk and harass people. Both individuals were targets of past abuse from ex-partners and argued in the filing that Apple’s subsequent safeguard solutions remain wholly inadequate for consumers. “With a price point of just $29, it has become the weapon of choice of stalkers and abusers,” reads a portion of the lawsuit, as The New York Times reported […].

Apple first debuted AirTags in April 2021. Within the ensuing eight months, at least 150 police reports from just eight precincts reviewed by Motherboard explicitly mentioned abusers utilizing the tracking devices to stalk and harass women. In the new lawsuit, plaintiffs allege that one woman’s abuser hid the location devices within her car’s wheel well. At the same time, the other woman’s abuser placed one in their child’s backpack following a contentious divorce, according to the suit. Security experts have since cautioned that hundreds more similar situations likely remain unreported or even undetected.

The lawsuit (PDF), published by Ars Technica, cites AirTags as “one of the products that has revolutionized the scope, breadth, and ease of location-based stalking,” arguing that “what separates the AirTag from any competitor product is its unparalleled accuracy, ease of use (it fits seamlessly into Apple’s existing suite of products), and affordability.” The proposed class action lawsuit seeks unspecified damages for owners of iOS or Android devices who have been tracked with an AirTag or are at risk of being stalked. Since AirTags’ introduction last year, at least two murders have occurred directly involving using Apple’s surveillance gadget, according to the lawsuit.

Read more of this story at Slashdot.

DeepMind Created An AI Tool That Can Help Generate Rough Film and Stage Scripts

Alphabet’s DeepMind has built an AI tool that can help generate rough film and stage scripts, Engadget’s Kris Holt reports: Dramatron is a so-called “co-writing” tool that can generate character descriptions, plot points, location descriptions and dialogue. The idea is that human writers will be able to compile, edit and rewrite what Dramatron comes up with into a proper script. Think of it like ChatGPT, but with output that you can edit into a blockbuster movie script. To get started, you’ll need an OpenAI API key and, if you want to reduce the risk of Dramatron outputting “offensive text,” a Perspective API key. To test out Dramatron, I fed in the log line for a movie idea I had when I was around 15 that definitely would have been a hit if Kick-Ass didn’t beat me to the punch. Dramatron quickly whipped up a title that made sense, and character, scene and setting descriptions. The dialogue that the AI generated was logical but trite and on the nose. Otherwise, it was almost as if Dramatron pulled the descriptions straight out of my head, including one for a scene that I didn’t touch on in the log line.
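The hierarchical “co-writing” flow described above, where a single log line is expanded level by level into a title, characters, plot beats and dialogue, can be sketched schematically. This is an illustration of the general prompt-chaining pattern, not Dramatron’s actual code; `llm` stands in for any text-completion callable (for instance, one backed by an OpenAI API key), and the prompt wording is invented.

```python
def dramatron_sketch(logline, llm):
    """Schematic hierarchical generation in the spirit of Dramatron:
    each level's output is fed into the prompt for the next level down."""
    title = llm(f"Write a play title for this logline: {logline}")
    characters = llm(f"Logline: {logline}\nTitle: {title}\n"
                     "List the main characters with one-line descriptions:")
    plot = llm(f"Logline: {logline}\nCharacters: {characters}\n"
               "Outline the plot beats, one per line:")
    # One dialogue pass per plot beat, conditioned on the characters
    scenes = [llm(f"Characters: {characters}\nWrite dialogue for this beat: {beat}")
              for beat in plot.splitlines() if beat.strip()]
    return {"title": title, "characters": characters, "scenes": scenes}
```

The point of the hierarchy is coherence: because every lower level is conditioned on the levels above it, the generated dialogue stays consistent with the plot outline and cast, which a single flat prompt struggles to guarantee.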

Playwrights seemed to agree, according to a paper (PDF) that the team behind Dramatron presented today. To test the tool, the researchers brought in 15 playwrights and screenwriters to co-write scripts. According to the paper, playwrights said they wouldn’t use the tool to craft a complete play and found that the AI’s output can be formulaic. However, they suggested Dramatron would be useful for world building or to help them explore other approaches in terms of changing plot elements or characters. They noted that the AI could be handy for “creative idea generation” too. That said, a playwright staged four plays that used “heavily edited and rewritten scripts” they wrote with the help of Dramatron. DeepMind said that in the performance, experienced actors with improv skills “gave meaning to Dramatron scripts through acting and interpretation.”

Read more of this story at Slashdot.

Saudi Arabia’s Sci-Fi Megacity Is Well Underway

Mark Harris writes via MIT Technology Review: In early 2021, Crown Prince Mohammed bin Salman of Saudi Arabia announced The Line: a “civilizational revolution” that would house up to 9 million people in a zero-carbon megacity, 170 kilometers long and half a kilometer high but just 200 meters wide. Within its mirrored, car-free walls, residents would be whisked around in underground trains and electric air taxis. Satellite images of the $500 billion project obtained exclusively by MIT Technology Review show that the Line’s vast linear building site is already taking shape, running as straight as an arrow across the deserts and through the mountains of northern Saudi Arabia. The site, tens of meters deep in places, is teeming with many hundreds of construction vehicles and likely thousands of workers, themselves housed in sprawling bases nearby.

Analysis of the satellite images by Soar Earth, an Australian startup that aggregates satellite imagery and crowdsourced maps into an online digital atlas, suggests that the workers have already excavated around 26 million cubic meters of earth and rock — 78 times the volume of the world’s tallest building, the Burj Khalifa. Official drone footage of The Line’s construction site, released in October, indeed showed fleets of bulldozers, trucks, and diggers excavating its foundations. Visit The Line’s location on Google Maps and Google Earth, however, and you will see little more than bare rock and sand.

Read more of this story at Slashdot.

AI Learns To Write Computer Code In ‘Stunning’ Advance

DeepMind’s new artificial intelligence system called AlphaCode was able to “achieve approximately human-level performance” in a programming competition. The findings have been published in the journal Science. Slashdot reader sciencehabit shares a report from Science Magazine:
AlphaCode’s creators focused on solving those difficult problems. Like the Codex researchers, they started by feeding a large language model many gigabytes of code from GitHub, just to familiarize it with coding syntax and conventions. Then, they trained it to translate problem descriptions into code, using thousands of problems collected from programming competitions. For example, a problem might ask for a program to determine the number of binary strings (sequences of zeroes and ones) of length n that don’t have any consecutive zeroes. When presented with a fresh problem, AlphaCode generates candidate code solutions (in Python or C++) and filters out the bad ones. But whereas researchers had previously used models like Codex to generate tens or hundreds of candidates, DeepMind had AlphaCode generate more than 1 million.
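The example problem above has a classic dynamic-programming solution: a valid string of length k either ends in ‘1’ (any valid string of length k-1 can be extended that way) or ends in ‘0’ (which may only follow a ‘1’). A minimal sketch of the kind of program AlphaCode would be asked to produce:

```python
def count_no_consec_zeros(n: int) -> int:
    """Count binary strings of length n with no two consecutive zeroes."""
    # end1 / end0: valid strings of the current length ending in '1' / '0'
    end1, end0 = 1, 1  # length 1: "1" and "0"
    for _ in range(n - 1):
        # A '1' can follow anything; a '0' may only follow a '1'
        end1, end0 = end1 + end0, end1
    return end1 + end0
```

The counts follow the Fibonacci recurrence (2, 3, 5, 8, …), which is exactly the kind of structure a competitive-programming problem expects the solver to spot.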

To filter them, AlphaCode first keeps only the 1% of programs that pass test cases that accompany problems. To further narrow the field, it clusters the keepers based on the similarity of their outputs to made-up inputs. Then, it submits programs from each cluster, one by one, starting with the largest cluster, until it alights on a successful one or reaches 10 submissions (about the maximum that humans submit in the competitions). Submitting from different clusters allows it to test a wide range of programming tactics. That’s the most innovative step in AlphaCode’s process, says Kevin Ellis, a computer scientist at Cornell University who works on AI coding.
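The filter-then-cluster selection described above can be sketched schematically. This is not DeepMind’s implementation: for illustration it treats candidate programs as plain Python callables and uses the tuple of their outputs on extra generated inputs as a behavioral fingerprint, so candidates that behave identically land in the same cluster.

```python
from collections import defaultdict

def safe_call(fn, x):
    """Run a candidate; a crash counts as a distinct (bad) behavior."""
    try:
        return fn(x)
    except Exception:
        return None

def select_submissions(candidates, example_tests, probe_inputs, max_submissions=10):
    """Schematic version of AlphaCode's selection pipeline."""
    # 1. Keep only candidates that pass the problem's example tests.
    survivors = [c for c in candidates
                 if all(safe_call(c, x) == y for x, y in example_tests)]
    # 2. Cluster survivors by their behavior on generated probe inputs.
    clusters = defaultdict(list)
    for c in survivors:
        clusters[tuple(safe_call(c, x) for x in probe_inputs)].append(c)
    # 3. Submit one representative per cluster, largest cluster first,
    #    up to the competition's submission budget.
    ordered = sorted(clusters.values(), key=len, reverse=True)
    return [cluster[0] for cluster in ordered[:max_submissions]]
```

Submitting the largest cluster first encodes the heuristic that a behavior the model produced many times independently is more likely to be correct, while spending later submissions on smaller clusters covers alternative tactics.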

After training, AlphaCode solved about 34% of assigned problems, DeepMind reports this week in Science. (On similar benchmarks, Codex achieved single-digit-percentage success.) To further test its prowess, DeepMind entered AlphaCode into online coding competitions. In contests with at least 5000 participants, the system outperformed 45.7% of programmers. The researchers also compared its programs with those in its training database and found it did not duplicate large sections of code or logic. It generated something new — a creativity that surprised Ellis. The study notes the long-term risk of software that recursively improves itself. Some experts say such self-improvement could lead to a superintelligent AI that takes over the world. Although that scenario may seem remote, researchers still want the field of AI coding to institute guardrails, built-in checks and balances.

Read more of this story at Slashdot.

Did Sam Bankman-Fried Finally Admit the Obvious?

CoinDesk’s Daniel Kuhn writes in an opinion piece: Despite the focus on FTX following its catastrophic collapse, it’s remarkable how little we know about how the crypto exchange and its in-house trading firm Alameda Research actually operated. New CEO John Jay Ray III has called Sam Bankman-Fried’s crypto trading empire the “greatest failure of corporate controls” he’s seen. Wednesday, Coffeezilla, a YouTuber with a rising profile who has made a career of shining a light on sketchy projects in and out of crypto, pressed Bankman-Fried for information related to how different customer accounts were treated at the exchange. It turns out, there wasn’t much differentiation — at the very least during the final days the exchange was in business, Bankman-Fried admitted. “At the time, we wanted to treat customers equally,” SBF said during a Twitter Spaces event. “That effectively meant that there was, you know, if you want to put it this way, like fungibility created” between the exchange’s spot and derivatives business lines. For Coffeezilla, this looks like a smoking gun that fraud was committed.

At the very least, this is a contradiction of what Bankman-Fried had said just minutes before when first asked about the exchange’s terms of service (ToS). “I do think we’re treating them differently,” Bankman-Fried said, referring to customer assets used for “margin versus staking versus spot versus futures collateral.” All of those services come with different levels of risk, different promises made to customers and different responsibilities for the exchange. According to FTX’s ToS, everyday users just looking to buy or store their cryptocurrencies on the centralized exchange could trust they were doing just that, buying and storing cryptographically unique digital assets. But now, thanks to skillful questioning by Coffeezilla, we know there were instead “omnibus” wallets and that spot and derivatives traders were essentially assuming the same level of risk.

We can also assume this was a longstanding practice at FTX. Bankman-Fried noted that during the “run on the exchange” (pardon the language), when people were attempting to get their assets off before withdrawals were shut down, FTX allowed “generalized withdrawals” from these omnibus wallets. But he also deflected, saying what, you wanted us to code up an entirely new process during a liquidity crisis? Before now, Bankman-Fried had been asked multiple times about the exchange’s ToS and often managed to derail the conversation. He would often point to other sections of the document that stated clients using margin (taking out debt from FTX) could have their funds used by the exchange. Or he would bring up a vestigial wire process in place before FTX had banking relationships. Apparently, according to SBF, customers had sent money to Alameda to fund accounts on FTX and somewhere along the line this capital ended up in a rarely seen subaccount. This also had the benefit of inflating Alameda’s books, another dark corner of the empire. Further reading: FTX Founder Sam Bankman-Fried Is Said To Face Market Manipulation Inquiry

Read more of this story at Slashdot.

Atari Revives Unreleased Arcade Game That Was Too Damn Hard For 1982 Players

Atari is reviving Akka Arrh, a 1982 arcade game canceled because test audiences found it too difficult. Engadget reports: For the wave shooter’s remake, the publisher is teaming up with developer Jeff Minter, whose psychedelic, synthwave style seems an ideal fit for what Atari describes as “a fever dream in the best way possible.” The remake will be released on PC, PS5 and PS4, Xbox Series X/S, Nintendo Switch and Atari VCS in early 2023. The original Akka Arrh cabinet used a trackball to target enemies, as the player controls the Sentinel fixed in the center of the screen to fend off waves of incoming attackers. Surrounding the Sentinel is an octagonal field, which you need to keep clear; if enemies slip in, you can zoom in to pick them off before panning back out to fend off the rest of the wave. Given the simplicity of most games in the early 1980s, it’s unsurprising this relative complexity led to poor test-group screenings.

Since Atari pulled the plug on the arcade version before its release, only three Akka Arrh cabinets are known to exist. But the Minter collaboration isn’t the game’s first public availability. After an arcade ROM leaked online in 2019, Atari released the original this fall as part of its Atari 50: The Anniversary Celebration collection. […] Atari says the remake has two modes, 50 levels and saves, so you don’t have to start from the beginning when enemies inevitably overrun your Sentinel. Additionally, the company says it offers accessibility settings to tone down the trippy visuals for people sensitive to intense light, color and animations.

Read more of this story at Slashdot.