Cruise Says Hostility Toward Regulators Led To Grounding of Its Autonomous Cars

Cruise, the driverless car subsidiary of General Motors, said in a report on Thursday that an adversarial approach taken by its top executives toward regulators had led to a cascade of events that ended with a nationwide suspension of Cruise’s fleet. From a report: The roughly 100-page report was compiled by a law firm that Cruise hired to investigate whether its executives had misled California regulators about an October crash in San Francisco in which a Cruise vehicle dragged a woman 20 feet. The investigation found that while the executives had not intentionally misled state officials, they had failed to explain key details about the incident. In meetings with regulators, the executives let a video of the crash “speak for itself” rather than fully explain how one of its vehicles severely injured the pedestrian. The executives later fixated on protecting Cruise’s reputation rather than giving a full account of the accident to the public and media, according to the report, which was written by the Quinn Emanuel Urquhart & Sullivan law firm.

The company said that the Justice Department and the Securities and Exchange Commission were investigating the incident, as well as state agencies and the National Highway Traffic Safety Administration. The report is central to Cruise’s efforts to regain the public’s trust and eventually restart its business. Cruise has been largely shut down since October, when the California Department of Motor Vehicles suspended its license to operate because its vehicles were unsafe. It responded by pulling its driverless cars off the road across the country, laying off a quarter of its staff and replacing Kyle Vogt, its co-founder and chief executive, who resigned in November, with new leaders.

Read more of this story at Slashdot.

FTC Launches Inquiry Into AI Deals by Tech Giants

The Federal Trade Commission launched an inquiry on Thursday into the multibillion-dollar investments by Microsoft, Amazon and Google in the artificial intelligence start-ups OpenAI and Anthropic, broadening the regulator’s efforts to corral the power the tech giants can have over A.I. The New York Times: These deals have allowed the big companies to form deep ties with their smaller rivals while dodging most government scrutiny. Microsoft has invested billions of dollars in OpenAI, the maker of ChatGPT, while Amazon and Google have each committed billions of dollars to Anthropic, another leading A.I. start-up.

Regulators have typically focused on bringing antitrust lawsuits against deals where the tech giants are buying rivals outright or using acquisitions to expand into new businesses, leading to increased prices and other harm, and have not regularly challenged stakes that the companies buy in start-ups. The F.T.C.’s inquiry will examine how these investment deals alter the competitive landscape and could inform any investigations by federal antitrust regulators into whether the deals have broken laws.

The F.T.C. said it would ask Microsoft, OpenAI, Amazon, Google and Anthropic to describe their influence over their partners and how they worked together to make decisions. It also said it would demand that they provide any internal documents that could shed light on the deals and their potential impact on competition.


OpenAI Quietly Scrapped a Promise To Disclose Key Documents To the Public

From its founding, OpenAI said its governing documents were available to the public. When WIRED requested copies after the company’s boardroom drama, it declined to provide them. Wired: Wealthy tech entrepreneurs including Elon Musk launched OpenAI in 2015 as a nonprofit research lab that they said would involve society and the public in the development of powerful AI, unlike Google and other giant tech companies working behind closed doors. In line with that spirit, OpenAI’s reports to US tax authorities have said since its founding that any member of the public can review copies of its governing documents, financial statements, and conflict of interest rules. But when WIRED requested those records last month, OpenAI said its policy had changed, and the company provided only a narrow financial statement that omitted the majority of its operations.

“We provide financial statements when requested,” company spokesperson Niko Felix says. “OpenAI aligns our practices with industry standards, and since 2022 that includes not publicly distributing additional internal documents.” OpenAI’s abandonment of the long-standing transparency pledge obscures information that could shed light on the recent near-implosion of a company with crucial influence over the future of AI and could help outsiders understand its vulnerabilities. In November, OpenAI’s board fired CEO Sam Altman, implying in a statement that he was untrustworthy and had endangered its mission to ensure AI “benefits all humanity.” An employee and investor revolt soon forced the board to reinstate Altman and eject most of its own members, with an overhauled slate of directors vowing to review the crisis and enact structural changes to win back the trust of stakeholders.


Modder Recreates Game Boy Advance Games Using the Audio From Crash Sounds

Kevin Purdy reports via Ars Technica: Sometimes, a great song can come from great pain. A Game Boy Advance (GBA) whose software crashed nearly two hours earlier will, for example, play a tune based on the game inside it. And if you listen closely enough — using specialty hardware and code — you can tell exactly what game it was singing about. And then theoretically play that same game. This was discovered recently by TheZZAZZGlitch, whose job is to “sadistically glitch and hack the crap out of Pokemon games.” It’s “hardly a ready-to-use solution,” the modder notes, as it requires a lot of tuning specific to different source formats. So while there are certainly easier ways to get GBA data from a cartridge, none make you feel quite so much like an audio datamancer.

After crashing a GBA and recording its audio output for four hours, the modder saw some telltale waveforms in the sound file at about the 1-hour, 50-minute mark. Later in the recording, you can hear the actual instrument sounds and audio samples the game contains, played in sequence. Otherwise, it’s 8-bit data at 13,100 Hz, and at times, it sounds absolutely deranged. “2 days of bugfixing later,” the modder had a Python script ready that could read the audio from a clean recording of the GBA’s crash dump. Did it work? Not without more troubleshooting. One issue with audio-cast ROM data is that the ROM contains large sections of 0-byte data, which are hard to parse as mute sounds. After running another script that realigned sections based on their location in the original ROM, the modder’s ROM was 99.76 percent accurate but “still didn’t boot tho.” TheZZAZZGlitch later acknowledged that, yes, this is technically using known ROM data to surface unknown data, or “cheating,” but there are assumptions and guesses one could make if one were truly doing this blind.
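The core recovery step described above, turning each 8-bit audio sample back into one ROM byte, can be sketched roughly as follows. This is a hypothetical illustration only: the function name and the simple linear rescaling are assumptions, and the modder’s actual script needed format-specific tuning, realignment of zero-byte sections, and multiple recordings.

```python
import numpy as np

def samples_to_bytes(samples: np.ndarray) -> bytes:
    """Rescale captured audio sample amplitudes into the 0..255 range,
    treating each sample as one recovered 8-bit ROM byte.

    A toy sketch: assumes a clean capture where amplitude maps
    linearly to byte value, which real recordings do not guarantee."""
    samples = samples.astype(np.float64)
    lo, hi = samples.min(), samples.max()
    # Normalize to [0, 1], then stretch to the full byte range.
    scaled = (samples - lo) / max(hi - lo, 1.0) * 255.0
    return bytes(np.round(scaled).astype(np.uint8))

# A 16-bit capture sweeping the full amplitude range maps back
# to the extreme and middle byte values.
capture = np.array([-32768, 0, 32767], dtype=np.int16)
print(samples_to_bytes(capture))  # -> b'\x00\x80\xff'
```

At 13,100 bytes per second, a full 32 MB GBA cartridge would take roughly 40 minutes of clean audio, which is consistent with the multi-hour recordings described above.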

The next fix was to refine the sound recording. By recording three times and merging the results with a “majority vote” algorithm, the modder notched accuracy up to 99.979 percent. That output ROM booted — but with glitched text and a title screen crash. After seven different recordings were merged and filtered for blank spaces, the dump achieved 100 percent parity. You can watch the video describing this feat here. The source code used is also available under the file name “gbacrashsound_dumper.zip.”
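The “majority vote” merge is a standard way to cancel independent read errors: at each byte offset, take whichever value most of the recordings agree on. A minimal sketch, assuming equal-length dumps (the function name is hypothetical; the modder’s actual implementation is in the linked source):

```python
from collections import Counter

def majority_vote(dumps: list[bytes]) -> bytes:
    """Merge several noisy ROM dumps by taking, at each byte offset,
    the value that appears most often across the dumps."""
    assert len({len(d) for d in dumps}) == 1, "dumps must be the same length"
    merged = bytearray()
    for column in zip(*dumps):  # one tuple of candidate bytes per offset
        merged.append(Counter(column).most_common(1)[0][0])
    return bytes(merged)

# Three dumps disagree at the middle offset; the majority value wins.
dumps = [b"\x10\x20\x30", b"\x10\x21\x30", b"\x10\x20\x30"]
print(majority_vote(dumps).hex())  # -> 102030
```

With three recordings, a byte survives one bad read at its offset; seven recordings tolerate up to three, which matches the jump from 99.979 percent to full parity described above.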
