FTC Chair: AI Models Could Violate Antitrust Laws

An anonymous reader quotes a report from The Hill: Federal Trade Commission (FTC) Chair Lina Khan said Wednesday that companies that train their artificial intelligence (AI) models on data from news websites, artists’ creations or people’s personal information could be in violation of antitrust laws. At The Wall Street Journal’s “Future of Everything Festival,” Khan said the FTC is examining ways in which major companies’ data scraping could hinder competition or potentially violate people’s privacy rights. “The FTC Act prohibits unfair methods of competition and unfair or deceptive acts or practices,” Khan said at the event. “So, you can imagine, if somebody’s content or information is being scraped that they have produced, and then is being used in ways to compete with them and to dislodge them from the market and divert businesses, in some cases, that could be an unfair method of competition.”

Khan said another concern lies in companies using people’s data without their knowledge or consent. “We’ve also seen a lot of concern about deception, about unfairness, if firms are making one set of representations when you’re signing up to use them, but then are secretly or quietly using the data you’re feeding them — be it your personal data, be it, if you’re a business, your proprietary data, your competitively significant data — if they’re then using that to feed their models, to compete with you, to abuse your privacy, that can also raise legal concerns,” she said.

Khan also recognized people’s concerns about companies retroactively changing their terms of service to let them use customers’ content, including personal photos or family videos, to feed into their AI models. “I think that’s where people feel a sense of violation, that that’s not really what they signed up for and oftentimes, they feel that they don’t have recourse,” Khan said. “Some of these services are essential for navigating day to day life,” she continued, “and so, if the choice — ‘choice’ — you’re being presented with is: sign off on not just being endlessly surveilled, but all of that data being fed into these models, or forego using these services entirely, I think that’s a really tough spot to put people in.” Khan said she thinks many government agencies have an important role to play as AI continues to develop, saying, “I think in Washington, there’s increasingly a recognition that we can’t, as a government, just be totally hands off and stand out of the way.”

Google Threatens To Pause Google News Initiative Funding In US

Google has warned nonprofit newsrooms that a new California bill taxing Big Tech for digital ad transactions would jeopardize future investments in the U.S. news industry. “This is the second time this year Google has threatened to pull investment in news in response to a regulatory threat in California — but this time, hundreds of publishers outside of California would also feel the impact,” reports Axios. From the report: Google’s new outreach to smaller news outlets is happening in response to a different bill, introduced this year by State Sen. Steve Glazer, that would tax Big Tech companies like Google and Meta for “data extraction transactions,” or digital ad transactions. Tax revenue would fund tax credits meant to support the hiring of more journalists by eligible nonprofit local news organizations in California. With the earlier link tax bill, Google threatened to pull news investments only in California. But the company is telling partners that the ad tax proposal will threaten consideration of new grants nationwide by the Google News Initiative, which funds hundreds of smaller news outlets, sources told Axios. Previous commitments, however, should be secure. A spokesperson for the Institute for Nonprofit News said the organization believes that grants previously committed through GNI “are secure, so INN members should continue to benefit through this particular Fundamentals Labs program.”

Google’s concern, sources familiar with the company’s thinking told Axios, is that the new California ad tax bill could set a troubling wider precedent for other states. California’s Senate tax committee approved the “ad tax” bill May 8. Days after that, Google started making calls to nonprofits about potentially pausing future Google News Initiative funding, sources told Axios. Opponents argue (PDF) the ad tax burden would get passed down to consumers and businesses. They also say the measure would face legal challenges, similar to a digital ad tax introduced in Maryland last year.

Feds Add Nine More Incidents To Waymo Robotaxi Investigation

Federal safety regulators have added nine more accidents to their investigation of Waymo’s self-driving vehicles in Phoenix and San Francisco. TechCrunch reports: The National Highway Traffic Safety Administration Office of Defects Investigation (ODI) opened an investigation earlier this month into Waymo’s autonomous vehicle software after receiving 22 reports of robotaxis making unexpected moves that led to crashes and potentially violated traffic safety laws. The investigation, which has been designated a “preliminary evaluation,” is examining the software’s ability to avoid collisions with stationary objects and how well it detects and responds to “traffic safety control devices” like cones. The agency said Friday it has added (PDF) another nine incidents since the investigation was opened.

Waymo reported some of these incidents. The others were discovered by regulators via public postings on social media and forums like Reddit, YouTube, and X. The additional nine incidents include reports of Waymo robotaxis colliding with gates, utility poles, and parked vehicles; driving in the wrong lane near oncoming traffic; and driving into construction zones. The ODI said it’s concerned that robotaxis “exhibiting such unexpected driving behaviors may increase the risk of crash, property damage, and injury.” The agency said that while it’s not aware of any injuries from these incidents, several involved collisions with visible objects that “a competent driver would be expected to avoid.” The agency also expressed concern that some of these incidents occurred near pedestrians. NHTSA has given Waymo until June 11 to respond to a series of questions regarding the investigation.

Political Consultant Behind Fake Biden Robocalls Faces $6 Million Fine, Criminal Charges

Political consultant Steven Kramer faces a $6 million fine and over two dozen criminal charges for using AI-generated robocalls mimicking President Joe Biden’s voice to mislead New Hampshire voters ahead of the presidential primary. The Associated Press reports: The Federal Communications Commission said the fine it proposed Thursday for Steven Kramer is its first involving generative AI technology. The company accused of transmitting the calls, Lingo Telecom, faces a $2 million fine, though in both cases the parties could settle or further negotiate, the FCC said. Kramer has admitted orchestrating a message that was sent to thousands of voters two days before the first-in-the-nation primary on Jan. 23. The message played an AI-generated voice similar to the Democratic president’s that used his phrase “What a bunch of malarkey” and falsely suggested that voting in the primary would preclude voters from casting ballots in November.

Kramer is facing 13 felony charges alleging he violated a New Hampshire law against attempting to deter someone from voting using misleading information. He also faces 13 misdemeanor charges accusing him of falsely representing himself as a candidate by his own conduct or that of another person. The charges were filed in four counties and will be prosecuted by the state attorney general’s office. Attorney General John Formella said New Hampshire was committed to ensuring that its elections “remain free from unlawful interference.”

Kramer, who owns a firm that specializes in get-out-the-vote projects, did not respond to an email seeking comment Thursday. He told The Associated Press in February that he wasn’t trying to influence the outcome of the election but rather wanted to send a wake-up call about the potential dangers of artificial intelligence when he paid a New Orleans magician $150 to create the recording. “Maybe I’m a villain today, but I think in the end we get a better country and better democracy because of what I’ve done, deliberately,” Kramer said in February.

Mark Zuckerberg Assembles Team of Tech Execs For AI Advisory Council

An anonymous reader quotes a report from Quartz: Mark Zuckerberg has assembled some of his fellow tech chiefs into an advisory council to guide Meta on its artificial intelligence and product developments. The Meta Advisory Group will periodically meet with Meta’s management team, Bloomberg reported. Its members include Stripe CEO and co-founder Patrick Collison, former GitHub CEO Nat Friedman, Shopify CEO Tobi Lutke, and former Microsoft executive and investor Charlie Songhurst.

“I’ve come to deeply respect this group of people and their achievements in their respective areas, and I’m grateful that they’re willing to share their perspectives with Meta at such an important time as we take on new opportunities with AI and the metaverse,” Zuckerberg wrote in an internal note to Meta employees, according to Bloomberg. The advisory council differs from Meta’s 11-person board of directors because its members are not elected by shareholders, nor do they have a fiduciary duty to Meta, a Meta spokesperson told Bloomberg. The spokesperson said that the men will not be paid for their roles on the advisory council. TechCrunch notes that the council has “only white men on it.” This “differs from Meta’s actual board of directors and its Oversight Board, which is more diverse in gender and racial representation,” reports TechCrunch.

TechCrunch adds: “It’s telling that the AI advisory council is composed entirely of businesspeople and entrepreneurs, not ethicists or anyone with an academic or deep research background. … it’s been proven time and time again that AI isn’t like other products. It’s a risky business, and the consequences of getting it wrong can be far-reaching, particularly for marginalized groups.”
