US Fines T-Mobile $60 Million, Its Largest Penalty Ever, Over Unauthorized Data Access

The Committee on Foreign Investment in the United States (CFIUS) fined T-Mobile $60 million, the committee's largest penalty ever, for failing to prevent and report unauthorized access to sensitive data — violations of a mitigation agreement tied to its 2020 merger with Sprint. "The size of the fine, and CFIUS's unprecedented decision to make it public, show the committee is taking a more muscular approach to enforcement as it seeks to deter future violations," reports Reuters. From the report: T-Mobile said in a statement that it experienced technical issues during its post-merger integration with Sprint that affected "information shared from a small number of law enforcement information requests." It stressed that the data never left the law enforcement community, was reported "in a timely manner" and was "quickly addressed." T-Mobile's failure to report the incidents promptly delayed CFIUS' efforts to investigate and mitigate any potential harm to U.S. national security, U.S. officials added, without providing further details. "The $60 million penalty announcement highlights the committee's commitment to ramping up CFIUS enforcement by holding companies accountable when they fail to comply with their obligations," one of the U.S. officials said, adding that transparency around enforcement actions incentivizes other companies to comply with their obligations.

Read more of this story at Slashdot.

IRS Has Loads of Legacy IT, Still Has No Firm Plans To Replace It

The IRS should reopen its Technology Retirement Office to effectively manage the retirement and replacement of legacy systems, according to a Treasury Inspector General for Tax Administration (TIGTA) audit. The Register reports: The report (PDF) credits the IRS with fully implementing two out of four previous tech modernization recommendations, though it argues the other two were ineffectively implemented. Those failures include the agency's decision in 2023 to scrap its own Technology Retirement Office, which was stood up in 2021 "to strategically reduce the [IRS' IT] footprint." Without that office, "there is no enterprise-wide program to identify, prioritize, and execute the updating, replacing, or retiring of legacy systems" at the IRS, the inspector general declared, adding that the unit should be reestablished in some form.

The closure of the retirement office, in the eyes of the TIGTA, is part of the IRS's broader failure to identify and plan for shutting down legacy systems and replacing them with modern alternatives. According to the audit report, the IRS identified 107 of its 334 legacy systems as candidates for retirement, yet only two of those 107 have specific decommissioning plans. The TIGTA would like to see clear plans for all of the identified systems, and had hoped the retirement office (or a successor) would provide them. The second incomplete recommendation, the IG said, is the IRS's failure to properly apply its own definition of a legacy system to all of its tech. […] In its response to the IG report, the IRS said it had largely addressed the two incomplete recommendations, though not entirely as the Inspector General might want.


FTC Finalizes Rule Banning Fake Reviews, Including Those Made With AI

TechCrunch's Lauren Forristal reports: The U.S. Federal Trade Commission (FTC) announced on Wednesday a final rule that will tackle several types of fake reviews and prohibit marketers from using deceptive practices, such as AI-generated reviews, censoring honest negative reviews, and compensating third parties for positive reviews. The decision was the result of a 5-to-0 vote. The new rule will start being enforced 60 days after it is published in the Federal Register, the official government publication. […]

According to the final rule, the maximum civil penalty for fake reviews is $51,744 per violation. However, the courts could impose lower penalties depending on the specific case. "Ultimately, courts will also decide how to calculate the number of violations in a given case," the Commission wrote. […] The FTC initially proposed the rule on June 30, 2023, following an advance notice of proposed rulemaking issued in November 2022. You can read the finalized rule here (PDF), but we also included a summary of it below:

– No fake or disingenuous reviews. This includes AI-generated reviews and reviews from anyone who doesn’t have experience with the actual product.
– Businesses can’t sell or buy reviews, whether negative or positive.
– Company insiders writing reviews need to clearly disclose their connection to the business. Officers or managers are prohibited from giving testimonials and can’t ask employees to solicit reviews from relatives.
– Company-controlled review websites that claim to be independent aren’t allowed.
– No using legal threats, physical threats or intimidation to forcefully delete or prevent negative reviews. Businesses also can’t misrepresent that the review portion of their website comprises all or most of the reviews when it’s suppressing the negative ones.
– No selling or buying fake engagement like social media followers, likes or views obtained through bots or hacked accounts.


Artists Claim ‘Big’ Win In Copyright Suit Fighting AI Image Generators

Ars Technica's Ashley Belanger reports: Artists pursuing a class-action lawsuit are claiming a major win this week in their fight to stop the most sophisticated AI image generators from copying billions of artworks to train AI models and replicate their styles without compensating artists. In an order on Monday, US district judge William Orrick denied key parts of motions to dismiss from Stability AI, Midjourney, Runway AI, and DeviantArt. The court will now allow artists to proceed with discovery on claims that AI image generators relying on Stable Diffusion violate both the Copyright Act and the Lanham Act, which protects artists from commercial misuse of their names and unique styles.

“We won BIG,” an artist plaintiff, Karla Ortiz, wrote on X (formerly Twitter), celebrating the order. “Not only do we proceed on our copyright claims,” but “this order also means companies who utilize” Stable Diffusion models and LAION-like datasets that scrape artists’ works for AI training without permission “could now be liable for copyright infringement violations, amongst other violations.” Lawyers for the artists, Joseph Saveri and Matthew Butterick, told Ars that artists suing “consider the Court’s order a significant step forward for the case,” as “the Court allowed Plaintiffs’ core copyright-infringement claims against all four defendants to proceed.”
