Feds Add Nine More Incidents To Waymo Robotaxi Investigation

Federal safety regulators have identified nine more incidents involving Waymo’s self-driving vehicles in Phoenix and San Francisco as part of their ongoing investigation. TechCrunch reports: The National Highway Traffic Safety Administration Office of Defects Investigation (ODI) opened an investigation earlier this month into Waymo’s autonomous vehicle software after receiving 22 reports of robotaxis making unexpected moves that led to crashes or potentially violated traffic safety laws. The investigation, which has been designated a “preliminary evaluation,” is examining the software’s ability to avoid collisions with stationary objects and how well it detects and responds to “traffic safety control devices” like cones. The agency said Friday it has added (PDF) another nine incidents since the investigation was opened.

Waymo reported some of these incidents; regulators discovered the others via public postings on social media platforms and forums such as Reddit, YouTube and X. The additional nine incidents include reports of Waymo robotaxis colliding with gates, utility poles, and parked vehicles, driving in the wrong lane toward oncoming traffic, and entering construction zones. The ODI said it’s concerned that robotaxis “exhibiting such unexpected driving behaviors may increase the risk of crash, property damage, and injury.” The agency said that while it’s not aware of any injuries from these incidents, several involved collisions with visible objects that “a competent driver would be expected to avoid.” It also expressed concern that some of the incidents occurred near pedestrians. NHTSA has given Waymo until June 11 to respond to a series of questions about the investigation.

Political Consultant Behind Fake Biden Robocalls Faces $6 Million Fine, Criminal Charges

Political consultant Steven Kramer faces a $6 million fine and over two dozen criminal charges for using AI-generated robocalls mimicking President Joe Biden’s voice to mislead New Hampshire voters ahead of the presidential primary. The Associated Press reports: The Federal Communications Commission said the fine it proposed Thursday for Steven Kramer is its first involving generative AI technology. The company accused of transmitting the calls, Lingo Telecom, faces a $2 million fine, though in both cases the parties could settle or further negotiate, the FCC said. Kramer has admitted orchestrating a message that was sent to thousands of voters two days before the first-in-the-nation primary on Jan. 23. The message played an AI-generated voice similar to the Democratic president’s that used his phrase “What a bunch of malarkey” and falsely suggested that voting in the primary would preclude voters from casting ballots in November.

Kramer is facing 13 felony charges alleging he violated a New Hampshire law against attempting to deter someone from voting using misleading information. He also faces 13 misdemeanor charges accusing him of falsely representing himself as a candidate by his own conduct or that of another person. The charges were filed in four counties and will be prosecuted by the state attorney general’s office. Attorney General John Formella said New Hampshire was committed to ensuring that its elections “remain free from unlawful interference.”

Kramer, who owns a firm that specializes in get-out-the-vote projects, did not respond to an email seeking comment Thursday. He told The Associated Press in February that he paid a New Orleans magician $150 to create the recording not to influence the outcome of the election but to send a wake-up call about the potential dangers of artificial intelligence. “Maybe I’m a villain today, but I think in the end we get a better country and better democracy because of what I’ve done, deliberately,” Kramer said.

Mark Zuckerberg Assembles Team of Tech Execs For AI Advisory Council

An anonymous reader quotes a report from Quartz: Mark Zuckerberg has assembled some of his fellow tech chiefs into an advisory council to guide Meta on its artificial intelligence and product developments. The Meta Advisory Group will periodically meet with Meta’s management team, Bloomberg reported. Its members include: Stripe CEO and co-founder Patrick Collison, former GitHub CEO Nat Friedman, Shopify CEO Tobi Lutke, and former Microsoft executive and investor Charlie Songhurst.

“I’ve come to deeply respect this group of people and their achievements in their respective areas, and I’m grateful that they’re willing to share their perspectives with Meta at such an important time as we take on new opportunities with AI and the metaverse,” Zuckerberg wrote in an internal note to Meta employees, according to Bloomberg. The advisory council differs from Meta’s 11-person board of directors because its members are not elected by shareholders, nor do they have a fiduciary duty to Meta, a Meta spokesperson told Bloomberg. The spokesperson said that the men will not be paid for their roles on the advisory council. TechCrunch notes that the council has “only white men on it.” This “differs from Meta’s actual board of directors and its Oversight Board, which is more diverse in gender and racial representation,” reports TechCrunch.

TechCrunch adds: “It’s telling that the AI advisory council is composed entirely of businesspeople and entrepreneurs, not ethicists or anyone with an academic or deep research background. … it’s been proven time and time again that AI isn’t like other products. It’s a risky business, and the consequences of getting it wrong can be far-reaching, particularly for marginalized groups.”
