‘AI May Not Steal Many Jobs After All’

Alorica — which runs customer-service centers around the world — has introduced an AI translation tool that lets its representatives talk with customers in 200 different languages. But according to the Associated Press, “Alorica isn’t cutting jobs. It’s still hiring aggressively.”

The experience at Alorica — and at other companies, including furniture retailer IKEA — suggests that AI may not prove to be the job killer that many people fear. Instead, the technology might turn out to be more like breakthroughs of the past — the steam engine, electricity, the internet: eliminating some jobs while creating others, and probably making workers more productive in general, to the eventual benefit of workers, their employers and the economy. Nick Bunker, an economist at the Indeed Hiring Lab, said he thinks AI “will affect many, many jobs — maybe every job indirectly to some extent. But I don’t think it’s going to lead to, say, mass unemployment….”

[T]he widespread assumption that AI chatbots will inevitably replace service workers, the way physical robots took many factory and warehouse jobs, isn’t becoming reality on any large scale — not yet, anyway. And maybe it never will. The White House Council of Economic Advisers said last month that it found “little evidence that AI will negatively impact overall employment.” The advisers noted that history shows technology typically makes companies more productive, speeding economic growth and creating new types of jobs in unexpected ways… The outplacement firm Challenger, Gray & Christmas, which tracks job cuts, said it has yet to see much evidence of layoffs that can be attributed to labor-saving AI. “I don’t think we’ve started seeing companies saying they’ve saved lots of money or cut jobs they no longer need because of this,” said Andy Challenger, who leads the firm’s sales team. “That may come in the future. But it hasn’t played out yet.”

At the same time, the fear that AI poses a serious threat to some categories of jobs isn’t unfounded. Consider Suumit Shah, an Indian entrepreneur who caused an uproar last year by boasting that he had replaced 90% of his customer support staff with a chatbot named Lina. The move at Shah’s company, Dukaan, which helps customers set up e-commerce sites, shrank the response time to an inquiry from 1 minute, 44 seconds to “instant.” It also cut the typical time needed to resolve problems from more than two hours to just over three minutes. “It’s all about AI’s ability to handle complex queries with precision,” Shah said by email. The cost of providing customer support, he said, fell by 85%….

Similarly, researchers at Harvard Business School, the German Institute for Economic Research and London’s Imperial College Business School found in a study last year that job postings for writers, coders and artists tumbled within eight months of the arrival of ChatGPT.
On the other hand, when Ikea introduced a customer-service chatbot in 2021 to handle simple inquiries, the result wasn’t massive layoffs, according to the article. Instead, Ikea ended up retraining 8,500 customer-service workers to handle other tasks, like advising customers on interior design and fielding complicated customer calls.

Read more of this story at Slashdot.

1,000 Autonomous AI Agents Collaborating? Altera Simulates It In Minecraft

Altera AI’s home page says their mission is “to create digital human beings that live, care, and grow with us,” adding that their company builds machines “with fundamental human qualities, starting with friends that can play video games with you.”

And while their agents can function in many different games and apps, Altera used Minecraft to launch “the first-ever simulation of over 1,000 collaborating autonomous AI agents,” reports ReadWrite, “working together in a Minecraft world, all of which can operate for hours or days without intervention from humans.”

The agents have already started to develop their own economy, culture, religion, and government, establishing their own systems as they go. CEO Robert Yang took to X to share the news and introduce Project Sid…
So far, the agents have formed a merchant hub, voted in a democracy, spread religions, and collected five times more distinct items than before… “Though starting in games, we’re solving the deepest issues facing agents: coherence, multi-agent collaboration, and long-term progression,” said the CEO.
According to the video, the most active trader in their simulation was the priest — because he was bribing the other townsfolk to convert to his religion. (Which apparently involved the Flying Spaghetti Monster…) “We run these worlds every day, and they’re always different,” the video’s narrator says, while pointing out that their agents had collected 32% of all the items in Minecraft — five times more than anything ever reported for an individual agent.

“Sid starts in Minecraft, but we are already going beyond,” CEO Yang says in the video, calling it “the first-ever agent civilization.”

Read more of this story at Slashdot.

OpenAI Co-Founder Raises $1 Billion For New Safety-Focused AI Startup

Safe Superintelligence (SSI), co-founded by OpenAI’s former chief scientist Ilya Sutskever, has raised $1 billion to develop safe AI systems that surpass human capabilities. The company, valued at $5 billion, plans to use the funds to hire top talent and acquire computing power, with investors including Andreessen Horowitz, Sequoia Capital, and DST Global. Reuters reports: Sutskever, 37, is one of the most influential technologists in AI. He co-founded SSI in June with Daniel Gross, who previously led AI initiatives at Apple, and Daniel Levy, a former OpenAI researcher. Sutskever is chief scientist and Levy is principal scientist, while Gross is responsible for computing power and fundraising. Sutskever said his new venture made sense because he “identified a mountain that’s a bit different from what I was working on.”

SSI is currently very much focused on hiring people who will fit in with its culture. Gross said they spend hours vetting whether candidates have “good character,” and are looking for people with extraordinary capabilities rather than overemphasizing credentials and experience in the field. “One thing that excites us is when you find people that are interested in the work, that are not interested in the scene, in the hype,” he added. SSI says it plans to partner with cloud providers and chip companies to fund its computing power needs but hasn’t yet decided which firms it will work with. AI startups often work with companies such as Microsoft and Nvidia to address their infrastructure needs.

Sutskever was an early advocate of scaling, a hypothesis that AI models would improve in performance given vast amounts of computing power. The idea and its execution kicked off a wave of AI investment in chips, data centers and energy, laying the groundwork for generative AI advances like ChatGPT. Sutskever said he will approach scaling in a different way than his former employer, without sharing details. “Everyone just says scaling hypothesis. Everyone neglects to ask, what are we scaling?” he said. “Some people can work really long hours and they’ll just go down the same path faster. It’s not so much our style. But if you do something different, then it becomes possible for you to do something special.”

Read more of this story at Slashdot.

NaNoWriMo Is In Disarray After Organizers Defend AI Writing Tools

The Verge’s Jess Weatherbed reports: The organization behind National Novel Writing Month (NaNoWriMo) is being slammed online after it claimed that opposing the use of AI writing tools is “classist and ableist.” On Saturday, NaNoWriMo published its stance on the technology, announcing that it doesn’t explicitly support or condemn any approach to writing. “We believe that to categorically condemn AI would be to ignore classist and ableist issues surrounding the use of the technology, and that questions around the use of AI tie to questions around privilege,” NaNoWriMo said, arguing that “not all brains” have the “same abilities” and that AI tools can reduce the financial burden of hiring human writing assistants.

NaNoWriMo’s annual creative writing event is the organization’s flagship program that challenges participants to create a 50,000-word manuscript every November. Last year, the organization said that it accepts novels written with the help of AI apps like ChatGPT but noted that doing so for the entire submission “would defeat the purpose of the challenge.” This year’s post goes further. “We recognize that some members of our community stand staunchly against AI for themselves, and that’s perfectly fine,” said NaNoWriMo in its latest post advocating for AI tools. “As individuals, we have the freedom to make our own decisions.”

The post has since been lambasted by writers across platforms like X and Reddit, who, like many creatives, believe that generative AI tools are exploitative and devalue human art. Many disabled writers also criticized the statement for implying that they need generative AI tools to write effectively. Meanwhile, Daniel Jose Older, a lead story architect for Star Wars: The High Republic, announced that he was resigning from the NaNoWriMo Writers Board due to the statement. “Generative AI empowers not the artist, not the writer, but the tech industry,” Star Wars: Aftermath author Chuck Wendig said in response to NaNoWriMo’s stance. “It steals content to remake content, graverobbing existing material to staple together its Frankensteinian idea of art and story.”

Read more of this story at Slashdot.

‘AI-Powered Remediation’: GitHub Now Offers ‘Copilot Autofix’ Suggestions for Code Vulnerabilities

InfoWorld reports that Microsoft-owned GitHub “has unveiled Copilot Autofix, an AI-powered software vulnerability remediation service.”
The feature became available Wednesday as part of the GitHub Advanced Security (or GHAS) service:

“Copilot Autofix analyzes vulnerabilities in code, explains why they matter, and offers code suggestions that help developers fix vulnerabilities as fast as they are found,” GitHub said in the announcement. GHAS customers on GitHub Enterprise Cloud already have Copilot Autofix included in their subscription. GitHub has enabled Copilot Autofix by default for these customers in their GHAS code scanning settings.

Beginning in September, Copilot Autofix will be offered for free in pull requests to open source projects.

During the public beta, which began in March, GitHub found that developers using Copilot Autofix were fixing code vulnerabilities more than three times faster than those doing it manually, demonstrating how AI agents such as Copilot Autofix can radically simplify and accelerate software development.

“Since implementing Copilot Autofix, we’ve observed a 60% reduction in the time spent on security-related code reviews,” says one principal engineer quoted in GitHub’s announcement, “and a 25% increase in overall development productivity.”

The announcement also notes that Copilot Autofix “leverages the CodeQL engine, GPT-4o, and a combination of heuristics and GitHub Copilot APIs.”
Code scanning tools detect vulnerabilities, but they don’t address the fundamental problem: remediation takes security expertise and time, two valuable resources in critically short supply. In other words, finding vulnerabilities isn’t the problem. Fixing them is…

Developers can keep new vulnerabilities out of their code with Copilot Autofix in the pull request, and now also pay down the backlog of security debt by generating fixes for existing vulnerabilities… Fixes can be generated for dozens of classes of code vulnerabilities, such as SQL injection and cross-site scripting, which developers can dismiss, edit, or commit in their pull request…. For developers who aren’t necessarily security experts, Copilot Autofix is like having the expertise of your security team at your fingertips while you review code…
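
To make those vulnerability classes concrete, here is a minimal sketch of what a SQL-injection remediation of this kind typically looks like: a query built by string interpolation rewritten to use bound parameters. This is a generic illustration with hypothetical function names, assuming Python and sqlite3; it is not actual Copilot Autofix output.

```python
import sqlite3

# Vulnerable pattern: user input is spliced directly into the SQL string,
# so input like "x' OR '1'='1" changes the meaning of the query.
def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

# Remediated pattern, the kind of change a code-scanning autofix usually
# proposes: pass the user input as a bound parameter so the database driver
# handles escaping and the input can never alter the query structure.
def find_user_fixed(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```

In a pull request, the suggested diff would amount to the change from the first function to the second, which the developer can then dismiss, edit, or commit, as the announcement describes.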

As the global home of the open source community, GitHub is uniquely positioned to help maintainers detect and remediate vulnerabilities so that open source software is safer and more reliable for everyone. We firmly believe that it’s highly important to be both a responsible consumer of open source software and a contributor back to it, which is why open source maintainers can already take advantage of GitHub’s code scanning, secret scanning, dependency management, and private vulnerability reporting tools at no cost. Starting in September, we’re thrilled to add Copilot Autofix in pull requests to this list and offer it for free to all open source projects…

While responsibility for software security continues to rest on the shoulders of developers, we believe that AI agents can help relieve much of the burden…. With Copilot Autofix, we are one step closer to our vision where a vulnerability found means a vulnerability fixed.

Read more of this story at Slashdot.

AI Models Face Collapse If They Overdose On Their Own Output

According to a new study published in Nature, researchers found that training AI models using AI-generated datasets can lead to “model collapse,” where models produce increasingly nonsensical outputs over generations. “In one example, a model started with a text about European architecture in the Middle Ages and ended up — in the ninth generation — spouting nonsense about jackrabbits,” writes The Register’s Lindsay Clark. From the report: [W]ork led by Ilia Shumailov, a Google DeepMind and Oxford post-doctoral researcher, found that an AI may fail to pick up less common lines of text in its training datasets, for example, which means subsequent models trained on its output cannot carry forward those nuances. Training new models on the output of earlier models in this way ends up in a recursive loop. In an accompanying article, Emily Wenger, assistant professor of electrical and computer engineering at Duke University, illustrated model collapse with the example of a system tasked with generating images of dogs. “The AI model will gravitate towards recreating the breeds of dog most common in its training data, so might over-represent the Golden Retriever compared with the Petit Basset Griffon Vendéen, given the relative prevalence of the two breeds,” she said.

“If subsequent models are trained on an AI-generated data set that over-represents Golden Retrievers, the problem is compounded. With enough cycles of over-represented Golden Retriever, the model will forget that obscure dog breeds such as Petit Basset Griffon Vendéen exist and generate pictures of just Golden Retrievers. Eventually, the model will collapse, rendering it unable to generate meaningful content.” While she concedes an over-representation of Golden Retrievers may be no bad thing, the process of collapse is a serious problem for meaningful representative output that includes less-common ideas and ways of writing. “This is the problem at the heart of model collapse,” she said.
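
The dynamic Wenger describes can be reproduced with a toy resampling loop: each “model” is nothing more than a frequency table fitted to data generated by the previous model, and rare categories die out generation by generation. This is a minimal sketch with assumed numbers (the breed list, sample size, and frequencies are made up for illustration), not the setup used in the Nature study.

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" breed frequencies: one very common breed plus a few rare ones,
# standing in for common versus obscure content in a training corpus.
breeds = ["golden_retriever", "beagle", "petit_basset_griffon_vendeen", "otterhound"]
true_probs = np.array([0.90, 0.07, 0.02, 0.01])

# Generation 0 "model": frequencies estimated from a finite real dataset.
sample = rng.choice(len(breeds), size=300, p=true_probs)
probs = np.bincount(sample, minlength=len(breeds)) / sample.size

for generation in range(1, 16):
    # Each new model is trained only on data generated by the previous model.
    sample = rng.choice(len(breeds), size=300, p=probs)
    probs = np.bincount(sample, minlength=len(breeds)) / sample.size
    extinct = [b for b, p in zip(breeds, probs) if p == 0.0]
    print(f"gen {generation:2d}: golden_retriever={probs[0]:.2f} extinct={extinct}")

# Once a rare breed draws zero samples, the next model assigns it zero
# probability and it can never reappear; over generations the loop drifts
# toward producing only the most common category, the collapse described above.
```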

Read more of this story at Slashdot.