AI Models Face Collapse If They Overdose On Their Own Output

According to a new study published in Nature, researchers found that training AI models on AI-generated datasets can lead to “model collapse,” where models produce increasingly nonsensical outputs over successive generations. “In one example, a model started with a text about European architecture in the Middle Ages and ended up — in the ninth generation — spouting nonsense about jackrabbits,” writes The Register’s Lindsay Clark. From the report: [W]ork led by Ilia Shumailov, a post-doctoral researcher at Google DeepMind and Oxford, found that an AI model may fail to pick up less common lines of text in its training data, which means subsequent models trained on its output cannot carry forward those nuances. Training new models on the output of earlier models in this way results in a recursive loop. In an accompanying article, Emily Wenger, assistant professor of electrical and computer engineering at Duke University, illustrated model collapse with the example of a system tasked with generating images of dogs. “The AI model will gravitate towards recreating the breeds of dog most common in its training data, so might over-represent the Golden Retriever compared with the Petit Basset Griffon Vendéen, given the relative prevalence of the two breeds,” she said.
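The dynamic is easy to reproduce in a toy simulation: fit a simple model to some data, sample from it while underweighting the tails, and train the next generation on those samples. The sketch below is illustrative only and is not the paper's experimental setup; the 95% clipping step is an assumed stand-in for a generative model's tendency to drop rare content.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 10_000)  # stand-in for human-written training data

for gen in range(1, 10):
    # "Train": fit a deliberately simple model (a single Gaussian).
    mu, sigma = data.mean(), data.std()
    # "Generate": sample from the model, but clip away the tails,
    # standing in for a model that underweights rare content.
    samples = rng.normal(mu, sigma, 10_000)
    data = samples[np.abs(samples - mu) < 1.96 * sigma]  # keep the central ~95%
    # The next generation trains only on this synthetic output.
    print(f"generation {gen}: std dev = {data.std():.3f}")
```

The printed standard deviation shrinks every generation: each round of training on the previous round's output discards a little more of the distribution's tails, which is the statistical face of "forgetting" less common content.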

“If subsequent models are trained on an AI-generated data set that over-represents Golden Retrievers, the problem is compounded. With enough cycles of over-represented Golden Retrievers, the model will forget that obscure dog breeds such as the Petit Basset Griffon Vendéen exist and generate pictures of just Golden Retrievers. Eventually, the model will collapse, rendering it unable to generate meaningful content.” While she concedes that an over-representation of Golden Retrievers may be no bad thing, the process of collapse is a serious problem for producing meaningful output that represents less-common ideas and ways of writing. “This is the problem at the heart of model collapse,” she said.
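Wenger's breed example maps onto a simple sampling experiment: if each generation re-learns breed frequencies from the previous generation's finite output, a rare breed can drift down to a count of zero, and once its share hits zero it can never return. The breed shares and sample size below are made-up numbers for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
breeds = ["Golden Retriever", "Petit Basset Griffon Vendéen"]
probs = np.array([0.99, 0.01])  # the rare breed is 1% of the images

for gen in range(1, 51):
    counts = rng.multinomial(100, probs)  # 100 images generated per cycle
    probs = counts / counts.sum()         # the next model learns these shares
    if probs[1] == 0.0:
        print(f"generation {gen}: {breeds[1]} has vanished and cannot return")
        break
    print(f"generation {gen}: rare-breed share = {probs[1]:.1%}")
```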

Read more of this story at Slashdot.

World of Warcraft Developers Form Blizzard’s Largest and Most Inclusive Union

Ash Parrish reports via The Verge: More than 500 developers at Blizzard Entertainment who work on World of Warcraft have voted to form a union. The World of Warcraft GameMakers Guild, formed with the assistance of the Communication Workers of America (CWA), is composed of employees across every department, including designers, engineers, artists, producers, and more. Together, they have formed the largest wall-to-wall union — or a union inclusive of multiple departments and disciplines — at Microsoft. This news comes less than a week after the formation of the Bethesda Game Studios union, which, at the time of the announcement, was itself the largest wall-to-wall Microsoft union. […]

The World of Warcraft GameMakers Guild is made up of over 500 members across Blizzard offices in California and Massachusetts. Despite its size — it is the second-largest union at Microsoft overall, behind Activision’s 600-member QA union — [Paul Cox, senior quest designer and Blizzard veteran] said that Microsoft’s labor neutrality agreement helped get the organizing ball rolling. In a statement to The Verge, Microsoft spokesperson Delaney Simmons said, “We continue to support our employees’ right to choose how they are represented in the workplace, and we will engage in good faith negotiations with the CWA as we work towards a collective bargaining agreement.”

Read more of this story at Slashdot.

Cyber Firm KnowBe4 Hired a Fake IT Worker From North Korea

In a blog post on Tuesday, security firm KnowBe4 revealed that a remote software engineer hire was a North Korean threat actor using a stolen identity and AI-augmented images. “Detailing a seemingly thorough interview process that included background checks, verified references and four video conference-based interviews, KnowBe4 founder and CEO Stu Sjouwerman said the worker avoided being caught by using a valid identity that was stolen from a U.S.-based individual,” reports CyberScoop. “The scheme was further enhanced by the actor using a stock image augmented by artificial intelligence.” From the report: An internal investigation started when KnowBe4’s InfoSec Security Operations Center team detected “a series of suspicious activities” from the new hire. The remote worker was sent an Apple laptop, which was flagged by the company on July 15 when malware was loaded onto the machine. The AI-filtered photo, meanwhile, was flagged by the company’s Endpoint Detection and Response software. Later that evening, the SOC team had “contained” the fake worker’s systems after he stopped responding to outreach. During a roughly 25-minute period, “the attacker performed various actions to manipulate session history files, transfer potentially harmful files, and execute unauthorized software,” Sjouwerman wrote in the post. “He used a [single-board computer] Raspberry Pi to download the malware.” From there, the company shared its data and findings with the FBI and with Mandiant, the Google-owned cyber firm, and came to the conclusion that the worker was a fictional persona operating from North Korea.
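As a toy illustration of one signal mentioned above: manipulation of session history files often shows up as a history file that shrinks, since truncating history is a common way to cover tracks. A real EDR product correlates far richer telemetry than file size; the path, interval, and check count below are all assumptions.

```python
import os
import time

def watch_history(path, interval=60, checks=60):
    """Crude tamper signal: alert if a shell history file shrinks.
    This is only a sketch; production tooling would also watch
    process activity, file hashes, and audit logs."""
    last = os.path.getsize(path) if os.path.exists(path) else 0
    for _ in range(checks):
        time.sleep(interval)
        size = os.path.getsize(path) if os.path.exists(path) else 0
        if size < last:
            print(f"ALERT: {path} shrank from {last} to {size} bytes")
        last = size

watch_history(os.path.expanduser("~/.bash_history"))
```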

KnowBe4 said the fake employee likely had his workstation connected “to an address that is basically an ‘IT mule laptop farm.’” They’d then use a VPN to work the night shift from where they actually reside — in this case, North Korea “or over the border in China.” That work would take place overnight, making it appear that they’re logged on during normal U.S. business hours. “The scam is that they are actually doing the work, getting paid well, and give a large amount to North Korea to fund their illegal programs,” Sjouwerman wrote. “I don’t have to tell you about the severe risk of this.” Despite the intrusion, Sjouwerman said “no illegal access was gained, and no data was lost, compromised, or exfiltrated on any KnowBe4 systems.” He chalked up the incident to a threat actor that “demonstrated a high level of sophistication in creating a believable cover identity” and identified “weaknesses in the hiring and background check processes.”
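A hedged sketch of how a defender might look for the laptop-farm pattern described above: compare where a corporate device checks in from against the address it was shipped to, and flag devices that only ever appear behind VPN exit nodes. The Checkin fields assume IP geolocation and a VPN-reputation verdict are resolved upstream; every name, coordinate, and threshold here is hypothetical, not KnowBe4's actual tooling.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Checkin:
    device_id: str
    ip_lat: float   # geolocated from the check-in IP (resolved upstream)
    ip_lon: float
    via_vpn: bool   # VPN/proxy verdict from an IP-reputation feed

def km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

def flag_laptop_farm(checkins, ship_lat, ship_lon, max_km=100):
    """Flag devices checking in far from the address they shipped to,
    or appearing only behind VPN exit nodes."""
    alerts = []
    for c in checkins:
        far = km(c.ip_lat, c.ip_lon, ship_lat, ship_lon) > max_km
        if far or c.via_vpn:
            alerts.append((c.device_id, "far-from-shipping" if far else "vpn-only"))
    return alerts

# Hypothetical example: laptop shipped to Minneapolis, checking in from elsewhere.
events = [Checkin("MBP-0142", 38.9, -77.0, via_vpn=True)]
print(flag_laptop_farm(events, ship_lat=44.98, ship_lon=-93.27))
```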

Read more of this story at Slashdot.

Canada Apologizes After Drone Caught Spying On New Zealand’s Olympic Practices

New Zealand has lodged a formal complaint with the International Olympic Committee (IOC) after a Canadian soccer “support staff member” allegedly flew a drone over their training session. The Canadian Olympic Committee has apologized, expressed shock and disappointment, and launched an investigation into the incident. ESPN reports: The COC said the individual has been detained by French authorities. “Team support members immediately reported the incident to police, leading to the drone operator, who has been identified as a support staff member of the wider Canadian Women’s football team, to be detained,” the NZOC said in a statement. “The NZOC has formally lodged the incident with the IOC integrity unit and has asked Canada for a full review. […]

For its part, Canada said it was also stunned. The COC said it was made aware that a “non-accredited” member of its support team had used a drone to record the Football Ferns’ practice. “The Canadian Olympic Committee stands for fair-play and we are shocked and disappointed. We offer our heartfelt apologies to New Zealand Football, to all the players affected, and to the New Zealand Olympic Committee.” It added it was “reviewing next steps” with the IOC, the Paris organizing committee and FIFA. The person responsible was Joseph Lombardi, an unaccredited analyst with Canada Soccer. As a result of these findings, Lombardi is being removed from the Canadian Olympic Team and sent home immediately. The same punishment will be applied to Jasmine Mander, the assistant coach to whom Mr. Lombardi sent the information.

Furthermore, Head Coach Bev Priestman has removed herself from coaching the match against New Zealand on July 25th, and the entire Canada Soccer staff will undergo mandatory ethics training.

Read more of this story at Slashdot.